Embedded Vision Insights: May 15, 2014 Edition


In this edition of Embedded Vision Insights:

LETTER FROM THE EDITOR

Dear Colleague,

Two weeks from today, my colleagues at the Embedded Vision Alliance and I will kick off the next iteration of the Embedded Vision Summit, our biggest and best yet, taking place on May 29 at the Santa Clara (California) Convention Center. Yann LeCun, Director of AI Research at Facebook, will deliver the morning keynote, "Convolutional Networks: Unleashing the Potential of Machine Learning for Robust Perception Systems." Machine learning, found in some of the most sophisticated image-understanding systems deployed today, provides a framework for training systems through examples rather than hand-coded rules. It is at the forefront of applications such as face recognition, visual navigation, and handwriting recognition, and LeCun will discuss a breakthrough method for implementing such tasks.
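
For readers new to the topic, the sketch below (my own illustration, not material from LeCun's talk) shows the 2D filtering operation at the heart of convolutional networks, written in plain Python/NumPy. Real networks stack many such learned filters with nonlinearities in between; here a single hand-chosen edge filter is applied to a synthetic image.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D filtering: slide the kernel over the image and
    sum elementwise products at each position. (Strictly speaking this
    is cross-correlation, which is what most CNN libraries compute.)"""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# A vertical-edge filter applied to a synthetic 6x6 image.
image = np.zeros((6, 6))
image[:, 3:] = 1.0                       # right half bright, left half dark
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], float)  # classic Sobel kernel
print(conv2d(image, sobel_x))            # strong responses along the edge
```

In a trained convolutional network, the kernel values are not hand-picked as above but learned from labeled examples, which is precisely the "training through examples" framework described above.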

Nathaniel Fairfield, technical lead at Google, will deliver the afternoon keynote, "Self-Driving Cars." Google recently announced that its autonomous car fleet has logged more than 700,000 miles and is increasingly capable of self-navigating complex city street settings. Dr. Fairfield will discuss Google's overall approach to solving the driving problem, the capabilities of the car, progress so far, and remaining challenges. The Embedded Vision Summit will also include sixteen technical presentations in two tracks, revolving around the themes of visual recognition and visual intelligence, along with technology demonstrations from nearly two dozen Alliance member companies. If you haven't registered yet, do so today; keep in mind that last year's Summit sold out!

While you're registering, don't forget about the two in-depth technical workshops also taking place at the Santa Clara Convention Center, the prior day (May 28). The first workshop, from Alliance founding member BDTI, is entitled "Implementing Computer Vision and Embedded Vision: A Technical Introduction". It will provide a practical tutorial on processors, sensors, algorithms, and development techniques for vision-based application and system design, including OpenCV and OpenCL. The second workshop is co-presented by BDTI and fellow Alliance members Analog Devices and Avnet Electronics. It will explore hardware and software for image processing and video analytics in a hands-on fashion, featuring the Avnet/Analog Devices Embedded Vision Starter Kit.
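
As a taste of the kind of material the introductory workshop covers, here is a minimal OpenCV example in Python (an illustrative sketch of my own, not workshop courseware; "input.jpg" is a placeholder path) that loads an image and runs Canny edge detection, a classic first step in many vision pipelines.

```python
import cv2

# Load an image in grayscale ("input.jpg" is a placeholder file name).
image = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)
if image is None:
    raise FileNotFoundError("input.jpg not found")

# Blur slightly to suppress sensor noise, then run Canny edge detection.
blurred = cv2.GaussianBlur(image, (5, 5), 1.4)
edges = cv2.Canny(blurred, threshold1=50, threshold2=150)

cv2.imwrite("edges.png", edges)  # save the binary edge map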

And while you're up on the Alliance website, make sure you check out all the other great new content published there in recent weeks. One particular highlight is the presentation "Vision-Based Navigation Applications: From Planetary Exploration to Consumer Devices," delivered by NASA's Larry Matthies at the March Alliance Member Meeting. Dr. Matthies is a Senior Research Scientist at JPL, and Supervisor of the Computer Vision Group in the Mobility and Robotic Systems Section. His talk discussed in detail the various vision-processing projects he's worked on over the years, some familiar (Mars Exploration Rover and Mars Pathfinder) and others likely more of a surprise to you (Google's Project Tango 3D mapping smartphone, for example).

I'd also like to draw your attention to a recently published article on the CENTR 360-degree panorama camera, a compelling case study of the embedded vision opportunity, and an example of a system uniquely enabled by the technologies and products that will be on display at the upcoming Embedded Vision Summit. Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. Whenever you come up with an idea as to how the Alliance can better serve your needs, you know where to find me.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

FEATURED VIDEOS

March 2014 Embedded Vision Alliance Member Meeting Presentation: "Vision-Based Navigation Applications: From Planetary Exploration to Consumer Devices," Larry Matthies, NASA
Larry Matthies, Supervisor of the Computer Vision Group at NASA's Jet Propulsion Laboratory, delivers the technology presentation, "Vision-Based Navigation Applications: From Planetary Exploration to Consumer Devices," at the March 2014 Embedded Vision Alliance Member Meeting. Dr. Matthies is a Senior Research Scientist at JPL and is the Supervisor of the Computer Vision Group in the Mobility and Robotic Systems Section. He has been involved in the development of vision systems for Mars Exploration Rover and Mars Pathfinder, as well as a range of challenging earth-based applications.

March 2014 Game Developer Conference Demonstration: SoftKinetic
Eric Krzeslo, Chief Marketing Officer of SoftKinetic, demonstrates the company's 3D time-of-flight vision sensor technology and its gesture-control middleware at the March 2014 Game Developer Conference. The demonstrations took place on a range of platforms, from an Android tablet to the NVIDIA Shield portable gaming device to a set of Oculus VR virtual reality goggles. Eric fielded questions from Jeff Bier, Founder of the Embedded Vision Alliance, about applications of 3D embedded vision and trade-offs between various 3D sensing approaches.

More Videos

FEATURED ARTICLES

CENTR: An Embedded Vision Case Study Winner
Computational photography is one of the most visible examples of embedded vision processing for the masses. And panorama "stitching" is one of the most common computational photography features. Most of today's implementations focus on still image processing of multiple sequentially captured frames and are therefore relatively insensitive to stitching delays. What if, however, you wanted to stitch together the outputs of four simultaneously captured video streams, at 30 fps or 60 fps processing rates, and at 720p or 1080p per-camera resolutions? What if you wanted to do the processing directly within a device that fit in the palm of your hand? And what if you wanted that device to deliver multi-hour battery life? More
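
To make the stitching problem concrete, the sketch below (a simplified, offline, single-pair illustration of my own, not CENTR's implementation; "cam0.jpg" and "cam1.jpg" are placeholder file names) aligns two overlapping frames with OpenCV using ORB features and a robustly fitted homography.

```python
import cv2
import numpy as np

def stitch_pair(left, right):
    """Warp `right` into `left`'s frame via a feature-based homography.
    This is the brute-force version of what a real-time multi-camera
    stitcher must accomplish within a strict per-frame time budget."""
    orb = cv2.ORB_create(2000)  # fast binary features
    k1, d1 = orb.detectAndCompute(left, None)
    k2, d2 = orb.detectAndCompute(right, None)

    # Match right-frame descriptors against left-frame descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d2, d1), key=lambda m: m.distance)[:200]

    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # reject outliers

    h, w = left.shape[:2]
    pano = cv2.warpPerspective(right, H, (w * 2, h))  # warp right frame
    pano[0:h, 0:w] = left                             # overlay left frame
    return pano

left = cv2.imread("cam0.jpg")   # placeholder input images
right = cv2.imread("cam1.jpg")
cv2.imwrite("pano.jpg", stitch_pair(left, right))
```

Running this for four 720p or 1080p cameras at 30 to 60 fps, inside a palm-sized battery-powered device, is exactly the workload that pushes designs toward dedicated vision-processing hardware of the kind on display at the Summit.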

High Expectations for Next-Generation HD CCTV Technology
A growing slice of the $13 billion video surveillance equipment market, or a small-time niche that can never keep up with the double-digit growth rates of network surveillance? The outlook for HD CCTV technology differs greatly depending on whom you speak to. There are those who see growing revenues and a viable alternative to HD IP cameras for many applications; those who want to remain technology-neutral by adding a "third string to their bow"; and those who simply don't believe in the technology at all. More

More Articles

FEATURED NEWS

Embedded Vision Alliance Announces Program for Embedded Vision Summit West

Upcoming Webcasts from CogniVue: Embedded Vision Information for You

NASA and Google: A "Tango" Most Fruitful

NVIDIA's Jetson TK1: Ready For Some Embedded Vision Fun?

Qualcomm Announces "The Ultimate Connected Computing" Next-Generation Snapdragon 810 and 808 Processors

More News

 
