Embedded Vision Insights: December 20, 2011 Edition


Dear Colleague,

Welcome to the fourth edition of Embedded Vision Insights, the newsletter of the Embedded Vision Alliance.

This past month has been a productive one for the Embedded Vision Alliance. In early December, the Alliance launched the Embedded Vision Academy, a free online training facility for embedded vision product developers. The Academy incorporates training videos, technical interviews, demonstrations, downloadable code and demos, and other developer resources. Access is free to all, with registration. The Academy makes it possible for engineers worldwide to gain the skills needed for embedded vision product development. I encourage you to check out the wealth of Academy content, which will steadily increase over time, at your earliest convenience.

Two weeks ago, the Alliance held its second quarterly Summit meeting for member companies, following up on September's inaugural Summit. This well-attended event in Dallas, TX was sponsored by Texas Instruments, which generously provided not only facilities but also an enthusiastically received technical session on the BeagleBoard evaluation module series and its applicability to embedded vision applications. At the December Summit, Texas Instruments also announced that it will be upgrading its Alliance membership to the Platinum level, demonstrating the company's commitment to the Embedded Vision Alliance's mission of inspiring and empowering design engineers to create machines that see.

Speaking of enthusiastic receptions, Nik Gagvani from Cernium provided the mid-day keynote address. Nik and his team developed the Archerfish Solo, the first low-cost smart surveillance camera for consumer use. Nik shared his insightful perspective on the challenges faced by embedded vision system designers, and what these system designers need most from their suppliers. Stay tuned for Nik's video-recorded interview with me, along with an article about his keynote and a copy of his foil set, all to appear soon on the website and in next month's edition of Embedded Vision Insights for all registered website users.

Also in attendance at the December Summit event was Jim Donlon, the program manager of DARPA's Mind's Eye Program. Donlon is scheduled to deliver the keynote at the next Embedded Vision Alliance Member Summit, currently scheduled for March 29, 2012 in Silicon Valley, coincident with DESIGN West (formerly the Embedded Systems Conference Silicon Valley). As currently envisioned, a portion of the day will be open to invited press and industry analyst attendees; an evening cocktail reception will provide additional opportunity for Alliance member interactions with press and analyst representatives, including demonstrations. Alliance members, please mark this date in your calendar and plan to attend.

Alliance members, registered website users, and visitors alike: please send me an email with your thoughts on how to make the Alliance, this newsletter, and the website better. Happy holiday wishes from the Embedded Vision Alliance!

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

FEATURED VIDEOS

Implementing an Image Signal Processing Pipeline using FPGAs
José Alvarez, Video Technology Engineering Director at Xilinx Corporation, follows up the first video in this tutorial series with a discussion of a flexible ISP (image signal processing) implementation using readily available dynamic processing blocks in an FPGA.

A Conversation with Gene Frantz
Brian Dipert, Editor-In-Chief of the Embedded Vision Alliance, interviews Gene Frantz, Texas Instruments Principal Fellow. Gene, who has been with the company for nearly 40 years, did his early DSP work in association with the Speak & Spell speech synthesizer. In this two-part series, Gene and Brian discuss the history of the DSP, both at Texas Instruments and in the broader technology sector; the opportunity for embedded vision to be the next in a long series of DSP success stories; the adaptations DSPs will need in order to be efficient embedded vision processing engines; and the potential future evolution of both embedded vision applications and the products used in them.

More Videos

FEATURED ARTICLES

Gesture Recognition: First Step Toward 3D UIs?
As touchscreen technologies become more pervasive, users are becoming more expert at interacting with machines. Gesture recognition takes human interaction with machines even further. It’s long been researched with 2D vision, but the advent of 3D sensor technology means gesture recognition will be used more widely and in more diverse applications. Soon a person sitting on the couch will be able to control the lights and TV with a wave of the hand, and a car will automatically detect if a pedestrian is close by. Development of 3D gesture recognition is not without its difficulties, however.   More

Dynamic Range And Edge Detection: An Example Of Embedded Vision Algorithms' Dependence On In-Camera Image Processing
Achieving natural or otherwise aesthetically pleasing camera images is normally considered distinct from the various tasks encompassed in embedded vision. But the human visual system has likely evolved to produce the images we perceive not for beauty per se, but rather to optimize the brain's decision-making processes based on those inputs. It may well be, therefore, that we in the embedded vision industry can learn something by considering the image creation and image analysis tasks in combination.   More
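
As a quick illustration of that dependence (a minimal sketch, not drawn from the article itself), the short Python/OpenCV snippet below runs the same Canny edge detector, with identical thresholds, on two differently tone-mapped renderings of one grayscale frame. The file name "scene.jpg", the contrast settings, and the choice of Canny are illustrative assumptions, not anything specified by the article.

    # Minimal sketch: how in-camera-style tone adjustments change the output
    # of a downstream edge detector. Assumes OpenCV (cv2) and NumPy.
    import cv2
    import numpy as np

    # Placeholder test image; substitute any image file you have on hand.
    gray = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise SystemExit("Provide a test image named scene.jpg")

    # Rendering 1: a flat, compressed-dynamic-range version of the frame.
    flat = cv2.convertScaleAbs(gray, alpha=0.5, beta=64)

    # Rendering 2: locally contrast-enhanced via CLAHE, a common tone-mapping step.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(gray)

    # The same edge detector, with the same thresholds, applied to both renderings.
    edges_flat = cv2.Canny(flat, 50, 150)
    edges_enhanced = cv2.Canny(enhanced, 50, 150)

    print("edge pixels, flat rendering:    ", int(np.count_nonzero(edges_flat)))
    print("edge pixels, enhanced rendering:", int(np.count_nonzero(edges_enhanced)))

On most scenes the two edge maps differ noticeably, which is the article's point in miniature: a vision algorithm's output is shaped by the image processing applied before it ever sees the frame.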

More Articles

FEATURED DISCUSSIONS

OpenCV on TI’s DSP+ARM® Platforms: Mitigating the Challenges of Porting OpenCV to Embedded Platforms

Depth-From-Motion 3D Vision Algorithms

Open-Source Options For Whale Flukes

More Forum Discussions

FEATURED NEWS

Recent Investments: eyeSight Raises $4.2 Million From CEVA, Mitsui

Recent Acquisitions: Microsoft Reportedly Buys VideoSurf

Microsoft's Kinect: Startup Investments And PC Enhancements

Cameras In Taxis: Driver Security, Or Monitoring With Impunity?

Where's the Beef? Embedded Vision Gives Hamburger Bun Suppliers Some Relief

More News
