Upgrade Your Vision to Three Dimensions at the Embedded Vision Summit

Three-dimensional computer vision, once reserved for exotic applications like capturing the movements of actors for animated movies, became a mainstream technology with the introduction of the Microsoft Kinect in 2010. Since then, numerous 3D vision devices have emerged for cost-sensitive markets, such as the Dell Venue 8 7000 tablet (which uses Intel's RealSense depth camera and software) and Subaru's "EyeSight" driver assistance system.

Some experts believe that 3D vision is a game-changer, enabling robust, practical solutions to problems that are difficult or impossible to solve with conventional 2D vision. Clearly, 3D does simplify many problems for vision algorithm developers, enabling easier discrimination between objects and background, for example. It can also enable more reliable and more precise gesture interfaces. And perhaps most importantly, it simplifies a critical function for many systems: understanding where they are located in relation to other objects.

This year's Embedded Vision Summit features several presentations addressing 3D vision. First is the keynote from Mike Aldred, Electronics Lead at Dyson. Aldred's talk, "Bringing Computer Vision to the Consumer," will highlight the Dyson 360 Eye robot vacuum cleaner, which uses 3D vision to map a room and determine its position within it. Aldred will chart some of the highs and lows of the project, the challenges of bridging between academia and business, and how a diverse team took an idea from the lab into real homes.

Next is "3D from 2D: Theory, Implementation and Applications of Structure from Motion," a technical presentation by Marco Jacobs of videantis. Whereas other techniques for discerning depth rely on 3D-capable image sensor technologies (stereo, structured light or time-of-flight), Jacobs' talk will explore the structure from motion technique, which extracts depth information using only a single, moving 2D camera. Jacobs will introduce the theory behind the approach and explore an efficient implementation for embedded applications.
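The core geometric step behind structure from motion is triangulation: once the camera has moved and point correspondences are known between the two views, each scene point's depth can be recovered by intersecting the two viewing rays. The sketch below illustrates this with synthetic data and a standard linear (DLT) triangulation in NumPy; it is a minimal illustration of the principle, not code from the talk, and the camera intrinsics and poses are made-up values.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point observed in two views.

    P1, P2: 3x4 camera projection matrices; x1, x2: 2D pixel coordinates.
    Solves A*X = 0 for the homogeneous 3D point X via SVD.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                     # null vector = homogeneous 3D point
    return X[:3] / X[3]

def project(P, X):
    """Project a 3D point to 2D pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Synthetic scene: three points a few meters in front of the camera
K = np.array([[500., 0., 320.],    # assumed pinhole intrinsics
              [0., 500., 240.],
              [0., 0., 1.]])
pts = np.array([[0., 0., 5.], [1., -0.5, 6.], [-1., 0.8, 4.5]])

# View 1: camera at the origin; view 2: camera moved 0.2 m to the right
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.], [0.]])])

recovered = np.array([triangulate(P1, P2, project(P1, X), project(P2, X))
                      for X in pts])
print(np.allclose(recovered, pts, atol=1e-6))  # → True
```

With noise-free correspondences the recovered points match exactly; a real structure-from-motion pipeline must additionally find the correspondences (feature matching) and estimate the camera motion itself, which is where most of the embedded-implementation effort goes.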

Finally, Ken Lee, president of VanGogh Imaging will present "Bringing New Capabilities to Users and Industries with Mobile 3D Vision." Lee will explore how diverse applications such as 3D printing, gaming, medical diagnosis, parts inspection, and ecommerce benefit from the ability of 3D computer vision to separate a scene into discrete objects and then recognize and analyze them reliably. He will explain how 3D vision differs from traditional approaches, highlight techniques that make 3D vision feasible in mobile devices, and show how this technology is being used today to change industries.

In addition to these 3D vision-focused presentations, the Embedded Vision Summit includes 22 other presentations by vision technology, application and market experts, along with a second keynote talk from Dr. Ren Wu of Baidu and dozens of demos by leading vision technology suppliers. The Embedded Vision Summit takes place on May 12, 2015 at the Santa Clara (California) Convention Center. Half- and full-day workshops will be presented on May 11 and 13. Register today, while space is still available!

