5 trends to watch in Embedded Vision and Edge AI
Article by: Jeff Bier
What is the state of innovation in embedded vision?
While deep learning remains a dominant force, deep neural networks alone don’t make a product.
Presented as a virtual event in May, the Embedded Vision Summit examined the latest developments in practical computer vision and cutting-edge AI processing. In my role as general chair of the Summit, I reviewed over 300 excellent session proposals for the conference. Here are the trends I see in the embedded vision space.
The dominance of deep learning
First, unsurprisingly, deep learning continues to be a dominant force in the field. It has radically changed what is possible in computer vision. It has made development more data-driven than code-driven, and it has changed the tools and techniques we use. But data is a pain point: where do you get it? How much do you need? How do you get more? How do you know you have the right kind of data?
Complex vision pipelines
Second, despite the deep learning revolution, product developers are increasingly realizing that deep neural networks (DNNs) are not, in themselves, a product. Real-world products require a complex vision pipeline, often including cameras and image processing, DSPs, Kalman filters, classic computer vision, and maybe even multiple DNNs, all combined in the right way to get the results you need.
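To make this concrete, here is a minimal sketch of how such a pipeline might be composed. All stage names and their toy implementations are hypothetical illustrations (NumPy stand-ins), not code from the article or from any particular product:

```python
import numpy as np

def preprocess(raw_frame):
    """Image processing stage: normalize raw 8-bit sensor data to [0, 1]."""
    return raw_frame.astype(np.float32) / 255.0

def classical_cv(frame):
    """Classic computer vision stage: a crude edge-strength measure
    via vertical finite differences (stand-in for real CV code)."""
    return float(np.abs(np.diff(frame, axis=0)).mean())

def dnn_inference(frame):
    """Stand-in for a DNN: returns a scalar 'score' (here, mean intensity)."""
    return float(frame.mean())

def kalman_smooth(estimate, measurement, gain=0.3):
    """One step of a highly simplified, scalar Kalman-style filter
    that smooths the per-frame score over time."""
    return estimate + gain * (measurement - estimate)

# Wire the stages together over a stream of (synthetic) camera frames.
estimate = 0.0
frames = [np.random.randint(0, 256, (64, 64)) for _ in range(10)]
for raw in frames:
    frame = preprocess(raw)                      # camera / image processing
    edges = classical_cv(frame)                  # classic CV feature
    score = dnn_inference(frame)                 # DNN output (placeholder)
    estimate = kalman_smooth(estimate, score)    # temporal smoothing
```

The point of the sketch is structural: the DNN is just one stage among several, and the value of the product comes from combining the stages correctly, not from the network alone.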
Democratization of development
The third trend is democratization. It’s easier than ever to develop an embedded vision application: with the proliferation of tools and libraries, you don’t have to develop your algorithms from scratch in assembly or C. A good example is Edge Impulse, which offers easy-to-use software tools that let developers quickly develop AI models and deploy them on low-cost microcontrollers, all with very little coding required.
Additionally, we’re starting to see vendors stepping up to support the entire pipeline (Lattice and Qualcomm are good examples here). It’s not hard to imagine a future in which a semiconductor company with great tools for one component of the pipeline – DNNs, for example – but nothing for the other critical elements loses market share to competitors offering more complete solutions.
The rise of practical systems
Fourth, there is what I would call the maturation of the field: we are moving beyond the “wow, this is so cool” stage and asking how to deploy this technology in a commercially viable and maintainable way.
Containerization is a good example. The approach has been good practice in cloud development for over a decade, but we’re starting to see it used to accelerate the development of practical embedded systems, including vision and AI systems (which bring their own challenges, such as potentially frequent over-the-air model updates).
Likewise, the specters of security and privacy loom. How do we design systems that are secure against hackers and protect user privacy? Relatedly, how do we meet functional safety requirements – indeed, how do we even test such things? These are issues that don’t arise in science fair projects, but they do arise when you are shipping real products to serious customers.
An embarrassment of riches in processors
Fifth, there is, frankly, an embarrassment of riches in processors. A year or two ago, I observed that we were in a Cambrian explosion of processors for AI. Today, if anything, that trend has accelerated and broadened: it seems that everyone who makes a processor – whether a one-dollar MCU or a big multicore, multi-gigahertz server processor – is targeting edge AI and vision applications.
That said, it’s a big space, and processor companies often target different points in terms of performance, price, and power. For system developers, while it’s great to have a choice, it can be difficult to choose, especially when considering not only technical factors (such as performance and power consumption) but also other critical issues, such as price, availability, and supply-chain risk.
If there is one megatrend here, it’s this: we are living in a golden era of innovation in embedded vision. There has never been a better time to create vision-based products.
This article was originally published on EE Times.
Jeff Bier is chairman of the consulting firm BDTI, founder of the Edge AI and Vision Alliance, and general chair of the Embedded Vision Summit.