Low Power FPGAs Enable Embedded Vision

This blog post was originally published at Lattice Semiconductor’s website. It is reprinted here with the permission of Lattice Semiconductor.

One of the most intriguing new applications of technology is giving machines the ability to see, something called machine vision or embedded vision. Thanks to the low cost and wide availability of camera sensors, advancements in artificial intelligence and machine learning algorithms, and the creation of semiconductor chips optimized to run these workloads, it’s now relatively easy and inexpensive to add intelligent vision to an enormous variety of devices. In fact, makers of everything from industrial machines to automotive systems and consumer electronics have started to integrate embedded vision capabilities into their products.

At a high level, embedded vision is intended to function much like human vision. Sensors in the form of tiny cameras “see” objects around them, just as our eyes do for us, and those signals are then passed along to a computer “brain” that interprets what it sees and takes action accordingly. It’s a powerful technology that until recently was much closer to science fiction than reality, yet it is already starting to enable a wide range of capabilities. In vehicles, for example, the combination of camera sensors, software, and specialized chips is making cars “intelligent” by allowing them to recognize objects in the environment around them and react accordingly. This is obviously critical for fully autonomous cars, but embedded vision is also powering important assisted driving features in today’s new vehicles, such as automatic emergency braking, lane departure warnings, and more.

In consumer products like security cameras and smart doorbells, embedded vision is refining the experience so that, for example, you don’t get an intruder warning every time your dog runs around the house. Thanks to built-in “intelligence”, these cameras now recognize the dog, understand that it isn’t a threat, and avoid generating unnecessary warnings that undermine the usefulness and trustworthiness of the system. In industrial environments, embedded vision is being used to sort items or spot visual flaws significantly faster and more efficiently than a human being can, thereby reducing costs and improving quality.

Over the last few years, numerous approaches have been taken to the recognition and interpretation portion of embedded vision work, and several different types of chip architectures have been used to power these capabilities. Many of the earliest efforts were very expensive and power-hungry, making them unsuitable for applications in which cost was critical or that needed better power efficiency, perhaps because they were battery powered. Low-power FPGAs (field-programmable gate arrays), however, have proven to be an ideal match for much of this work.

Thanks to their ability to be programmed and then reprogrammed to perform specific tasks very efficiently (see “An FPGA Primer” for more), FPGAs are an excellent tool for machine vision applications at many stages of the product lifecycle. During initial product development, embedded vision software algorithms are in a nearly constant state of evolution and refinement, so the requirements for the hardware intended to run them can evolve quickly as well. The reprogrammability of FPGAs makes them an ideal choice in these rapidly changing environments. Even after the product specifications are finalized, work on enhancing and evolving the algorithms to further tune performance is ongoing, again making FPGAs’ flexible architectures very important. In some instances, this flexibility even allows new capabilities to be added over the life of the product: new versions of the algorithms and architectural adjustments to the FPGAs can be delivered via over-the-air updates.
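To make that last point concrete, here is a minimal sketch of what an over-the-air update flow for an FPGA-based vision module might look like. It is purely illustrative: the update URL, the checksum handling, and the program_config_flash() helper are assumptions made for this example, not part of any specific Lattice toolchain.

```python
# Hypothetical over-the-air update flow for a vision module's FPGA bitstream.
# The update URL, the published checksum, and program_config_flash() are
# illustrative assumptions, not a real vendor API.

import hashlib
import urllib.request

UPDATE_URL = "https://updates.example.com/vision-module/bitstream-v2.bin"  # assumed
EXPECTED_SHA256 = "0" * 64  # placeholder; normally published with the release


def fetch_bitstream(url: str) -> bytes:
    """Download the new configuration image from the update server."""
    with urllib.request.urlopen(url) as response:
        return response.read()


def verify(bitstream: bytes, expected_sha256: str) -> bool:
    """Reject corrupted or tampered images before touching the hardware."""
    return hashlib.sha256(bitstream).hexdigest() == expected_sha256


def program_config_flash(bitstream: bytes) -> None:
    """Board-specific step: write the image to the FPGA's configuration flash
    (for example over SPI) so the device reconfigures on the next boot."""
    raise NotImplementedError("depends on the board's programming interface")


def apply_update() -> None:
    image = fetch_bitstream(UPDATE_URL)
    if not verify(image, EXPECTED_SHA256):
        raise ValueError("bitstream failed integrity check; keeping current image")
    program_config_flash(image)


if __name__ == "__main__":
    apply_update()
```

The key idea is simply that the new hardware configuration arrives as data and is validated before it is applied, which is what lets a shipped product pick up improved vision algorithms in the field.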

Beyond these basic architectural benefits, certain FPGAs offer additional features that make it possible to add embedded vision capabilities to existing designs. For example, Lattice Semiconductor’s new CrossLink-NX FPGAs accept multiple sensor or camera inputs, allowing much more sophisticated systems to be built with them; many low-power CPUs have only one sensor/camera input, which limits their usefulness for certain applications. In addition, FPGAs often function as a bridge device that sits between the camera sensors and the CPU, providing essential connectivity between them and allowing the CPU to focus on the tasks it needs to do while the FPGA handles the image interpretation work. For time-sensitive applications, FPGAs have the added benefit of always performing a given function in exactly the same amount of time. In systems that rely on a CPU for image recognition, by contrast, the vision task may be queued behind whatever else the CPU happens to be doing at that moment, which can shift the timing of when the work gets done.
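The toy sketch below illustrates that timing contrast. The latency figures and the queueing behavior are made-up assumptions chosen only to show the difference between a fixed-function pipeline and a shared CPU; they are not measurements of any real device.

```python
# Toy model of the timing point above: a fixed-function pipeline (like an
# FPGA) finishes every frame after the same delay, while a shared CPU may
# queue the vision task behind other work. All numbers are invented purely
# to illustrate the contrast.

import random

FPGA_LATENCY_MS = 5.0  # deterministic: identical for every frame (assumed value)


def fpga_frame_latency() -> float:
    return FPGA_LATENCY_MS


def cpu_frame_latency() -> float:
    vision_work = 5.0                    # the recognition work itself
    tasks_ahead = random.randint(0, 3)   # other jobs queued in front of it
    return vision_work + sum(random.uniform(1.0, 4.0) for _ in range(tasks_ahead))


if __name__ == "__main__":
    frames = 10
    fpga = [fpga_frame_latency() for _ in range(frames)]
    cpu = [cpu_frame_latency() for _ in range(frames)]
    print("FPGA latencies (ms):", ", ".join(f"{t:.1f}" for t in fpga))
    print("CPU latencies (ms): ", ", ".join(f"{t:.1f}" for t in cpu))
```

Running it shows the hardware pipeline reporting the same latency every frame, while the CPU’s latency varies frame to frame depending on how much other work happened to be queued ahead of the vision task.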

Given the right kind of interfaces and software tools, it’s also possible to integrate FPGA-powered embedded vision “modules” into existing machine or device designs. For example, thanks to Lattice’s long history with interface bridging, the company offers these kinds of modules with integrated support for the low-cost MIPI (Mobile Industry Processor Interface) standard, which is commonly used to connect processors and image sensors in smartphones and millions of other devices. As a result, it’s significantly easier and less expensive for companies to integrate these modules into their products, essentially allowing them to “add” smart vision to their machines.

Of course, hardware is just one of the critical parts required to implement an embedded vision system. To make sure developers have the design software and other resources they need to quickly and easily develop embedded vision systems, Lattice provides solutions stacks that bundle the required hardware, design software, IP portfolio, and reference designs. For the embedded vision market, Lattice calls its offering the mVision solutions stack, and it gives customers everything they need to build an embedded vision system that can support emerging technologies. While it’s easy to overlook these software elements, the truth is that most companies don’t have much experience with embedded vision. As a result, the mVision stack is an essential part of making the process of designing, using, and integrating these capabilities as easy as possible.

The opportunity to bring more “smarts” and intelligence to machines and devices is something virtually every industrial or consumer device maker is now exploring, and the potential for new applications of embedded vision is exploding. It does take a bit of work to get started, but with easy-to-use kits to get things going, companies can make their “vision” real at a faster pace than they likely ever imagined.

Bob O’Donnell
President and Chief Analyst, TECHnalysis Research, LLC
