Create ‘Machines That See’ Using Industry Resources

This article is an expanded and updated version of one originally published at Design News. It is reprinted here with the permission of Design News.

This article explores the opportunity for incorporating visual intelligence in electronic products, introduces an industry alliance created to help engineers implement such capabilities, and describes an upcoming technical education forum sponsored by the alliance, along with other technical resources that the alliance provides.

By Jeff Bier
Founder, Embedded Vision Alliance

and

Brian Dipert
Editor-in-Chief, Embedded Vision Alliance

It's now practical to incorporate computer vision into a wide range of systems, enabling those systems to analyze their environments via video and still image inputs. Historically, such image analysis technology has mainly been found in complex, expensive systems, such as military equipment and manufacturing quality-control inspection systems. However, cost, performance and power consumption advances in digital integrated circuits such as processors, memory devices and image sensors, along with the emergence of robust software algorithms, are now paving the way for the proliferation of visual intelligence into high-volume applications. The term "embedded vision" refers to the widespread use of computer vision in embedded systems, mobile devices, PCs and the cloud.

Embedded Vision Goes Mainstream

Similar to the way that digital wireless communication technology has become pervasive over the past 10 years, embedded vision technology is poised to become widely deployed over the next 10 years. High-speed wireless connectivity began as a costly niche technology; advances in digital integrated circuits were critical in enabling it to evolve from exotic to mainstream. When chips got fast enough, inexpensive enough, and energy efficient enough, high-speed wireless became a mass-market technology. Similarly, advances in digital chips are now paving the way for the proliferation of embedded vision into high-volume applications.

For example, smartphones and tablets typically include multiple image sensors.  These sensors are primarily intended for photography, but are increasingly also being used for embedded-vision-enabled applications such as measuring your heart rate or translating text from one language to another.  Vision-based safety systems have been shipping in high-end cars for several years, and are now migrating into higher-volume mainstream models, using multiple cameras to monitor blind spots, assist in parking and other maneuvers, and provide early warning of impending collisions and other hazards.

Current-generation still and video cameras go beyond simple image capture and processing functions to incorporate more advanced analysis-and-response features such as face-detection-driven focus and exposure compensation. Advanced cameras will even delay shutter activation until they discern that the subject is smiling. Similarly, video surveillance systems use pedestrian detection and other techniques not only to activate the video recording function but also to send alerts to their owners. As such, they not only “see” but are also beginning to “understand” the environments in which they operate.
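
For readers who want a concrete sense of what such an analysis-and-response pipeline involves, the short Python sketch below uses OpenCV's pretrained Haar-cascade face detector to find faces in a live camera feed and highlight them. It is purely illustrative: the camera index, the cascade file path, and the draw-a-rectangle "response" step are assumptions standing in for the focus- and exposure-adjustment logic a real camera would apply.

    import cv2

    # Load OpenCV's bundled, pretrained frontal-face Haar cascade.
    # (cv2.data.haarcascades is provided by the opencv-python package;
    # adjust the path if your installation stores the XML files elsewhere.)
    cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    face_detector = cv2.CascadeClassifier(cascade_path)

    capture = cv2.VideoCapture(0)  # assumed: default webcam at index 0

    while True:
        ok, frame = capture.read()
        if not ok:
            break

        # Detection runs on a grayscale copy of each frame.
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

        # "Response" step: draw a box around each detected face. A camera would
        # instead re-center its autofocus and exposure metering on these regions.
        for (x, y, w, h) in faces:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

        cv2.imshow("Detected faces", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
            break

    capture.release()
    cv2.destroyAllWindows()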

The diversity of emerging vision applications is impressive. In gaming, Microsoft's Kinect peripheral for the Xbox 360 game console and PC has begun to show the potential of embedded vision, selling over 24 million units as of late February. Medical systems are increasingly supplementing human expertise with computer vision-based analysis to assist in patient diagnosis and treatment. And market researchers and educators are exploring the ability to assess a person's emotional state from images of the face.

With embedded vision, the semiconductor industry is entering a virtuous circle of the sort that has characterized many other digital signal processing application domains. Although there are few chips specifically designed for embedded vision applications today, these applications are increasingly adopting ICs (including image sensors and various processor types) developed for other applications. As these chips continue to deliver more performance per dollar and per watt, they will enable the creation of more high-volume embedded vision products. Those high-volume applications, in turn, will attract more attention from silicon providers, who will deliver even better performance, power efficiency, and cost-effectiveness.

The Embedded Vision Alliance

Embedded vision technology has the potential to enable a wide range of electronic products that are more intelligent and responsive than before, and thus more valuable to users. It can add helpful features to existing products. And it can provide significant new markets for hardware, software and semiconductor manufacturers. The Embedded Vision Alliance, a worldwide organization of technology developers and providers, is working to empower engineers to transform this potential into reality.

First and foremost, the Alliance's mission is to provide engineers with practical education, information, and insights to help them incorporate embedded vision capabilities into new and existing products. To execute this mission, the Alliance maintains a website providing tutorial articles, videos, code downloads and a discussion forum staffed by a diverse group of technology experts. Registered website users can also receive the Alliance's twice-monthly email newsletter, among other benefits.

In addition, the Embedded Vision Alliance offers a free online training facility for embedded vision product developers: the Embedded Vision Academy.  This area of the Alliance website provides in-depth technical training and other resources to help engineers integrate visual intelligence into next-generation embedded and consumer devices. Course material in the Embedded Vision Academy spans a wide range of vision-related subjects, from basic vision algorithms to image pre-processing, image sensor interfaces, and software development techniques and tools such as OpenCV.
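
As a taste of the introductory material such courses cover, the brief Python sketch below chains a few common OpenCV pre-processing steps: grayscale conversion, noise smoothing, and edge detection. The input file name and parameter values are illustrative assumptions, not drawn from Academy course content.

    import cv2

    # Illustrative pre-processing chain; "input.jpg" is a placeholder file name.
    image = cv2.imread("input.jpg")
    if image is None:
        raise FileNotFoundError("input.jpg not found")

    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)   # convert to grayscale
    smoothed = cv2.GaussianBlur(gray, (5, 5), 0)     # suppress sensor noise
    edges = cv2.Canny(smoothed, 50, 150)             # extract an edge map

    cv2.imwrite("edges.jpg", edges)                  # save the processed result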

Academy courses incorporate training videos, interviews, demonstrations, downloadable code, and other developer resources, all oriented toward developing embedded vision products. Nearly seventy video seminars and interviews, forty tutorial articles, and two downloadable software tool suites are currently available at the Academy, along with an archive of presentations from past Alliance events. The Embedded Vision Alliance is continuously expanding the curriculum of the Embedded Vision Academy, so engineers will be able to return to the site on an ongoing basis for new courses and resources. Access is free to all through a simple registration process.

The Embedded Vision Summit

On Thursday, May 29, 2014, in Santa Clara, California, the Alliance will hold its fourth Embedded Vision Summit. Embedded Vision Summits are technical educational forums for engineers interested in incorporating visual intelligence into electronic systems and software. They provide how-to presentations, inspiring keynote talks, demonstrations, and opportunities to interact with technical experts from Alliance member companies. These events are intended to:

  • Inspire engineers' imaginations about the potential applications for embedded vision technology through exciting presentations and demonstrations;
  • Offer practical know-how to help engineers incorporate vision capabilities into their products; and
  • Provide opportunities for engineers to meet and talk with leading embedded vision technology companies and learn about their offerings.

The Embedded Vision Summit West will be co-located with the Augmented World Expo; Summit attendees will have the option of attending the full Augmented World Expo conference or accessing its exhibit floor at a discounted price.

The day before, on Wednesday, May 28, 2014, Embedded Vision Alliance member companies will present workshops exploring hardware and software for various embedded vision implementations. These workshops are an excellent starting point for developers interested in exploring a wide range of embedded vision applications.

Taught by embedded vision experts from Alliance member companies, these hands-on workshops will provide in-depth tutorials on the basics of designing systems and applications that incorporate computer vision techniques, covering topics ranging from embedded vision algorithms to software implementation and hardware acceleration.

Please revisit the event overview page in the coming weeks for more information on the Embedded Vision Summit West and its associated hands-on workshops, including detailed agendas, keynote and technical presentation details, speaker biographies, and online registration.

Jeff Bier is founder of the Embedded Vision Alliance. He is also co-founder and president of Berkeley Design Technology, Inc. (BDTI), a trusted resource for independent analysis and specialized engineering services in the realm of embedded digital signal processing technology. Jeff oversees BDTI's benchmarking and analysis of chips, tools, and other technology. Jeff is also a key contributor to BDTI's consulting services, which focus on product development, marketing, and strategic advice for companies using and developing embedded digital signal processing technologies.

Brian Dipert is Editor-In-Chief of the Embedded Vision Alliance. He is also a Senior Analyst at BDTI, and Editor-In-Chief of InsideDSP, the company's online newsletter dedicated to digital signal processing technology. Brian has a B.S. degree in Electrical Engineering from Purdue University in West Lafayette, IN. His professional career began at Magnavox Electronics Systems in Fort Wayne, IN; Brian subsequently spent eight years at Intel Corporation in Folsom, CA. He then spent 14 years at EDN Magazine.
