Embedded Vision Insights: March 12, 2013 Edition


In this edition of Embedded Vision Insights:

LETTER FROM THE EDITOR

Dear Colleague,

I want to begin this edition of Embedded Vision Insights by alerting you to next week's (March 18-22) free five-day embedded vision webinar series, co-delivered by the Embedded Vision Alliance and several of its member companies in partnership with Design News Magazine. Entitled "Implementing Embedded Vision: Designing Systems That See & Understand Their Environments," it takes place each day at 11 AM PDT/2 PM EDT/6 PM GMT; I encourage you to attend the entire five-part series. Advance registration is required, and note that you must register separately for each session. For more information, including registration links, please see this news writeup on the Alliance website.

Speaking of upcoming events, we're now only a bit more than a month away from the first Silicon Valley iteration of the Embedded Vision Summit, a free day-long technical educational forum for engineers interested in incorporating visual intelligence into electronic systems and software. I've begun filling in the day's agenda with specific presentation titles, presenter biographies and (coming soon) presentation abstracts. I again encourage you to reserve the day on your calendar and submit an online registration application now, while attendance spots are still available.

And if you look closely at the Embedded Vision Summit agenda, you might notice at least one presenting company that's a surprise. You're certainly forgiven if you didn't know that Qualcomm is a member of the Alliance, because the company only just joined a few days ago! In addition to supplying the well-known Snapdragon line of ARM-based application processors, Qualcomm has developed a number of other key embedded vision technologies: the Vuforia augmented reality software platform, for example, and the FastCV algorithm library and API. Welcome, Qualcomm, to the Embedded Vision Alliance!

Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. Whenever you come up with an idea for how the Alliance can better serve your needs, you know where to find me.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

FEATURED VIDEOS

Embedded Vision Summit Presentation: "Addressing System Design Challenges in Embedded Vision," Mario Bergeron, Avnet
Mario Bergeron, Technical Marketing Engineer at Avnet Electronics Marketing, presents the "Addressing System Design Challenges in Embedded Vision" tutorial within the "Using Tools, APIs and Design Techniques for Embedded Vision" technical session at the September 2012 Embedded Vision Summit. Embedded vision applications need to move video data from the image sensor to processing and storage elements. At higher video resolutions and/or frame rates, this data movement presents design challenges that need to be thought through early in the architecture phase. This presentation describes the challenges and some of the solutions that exist today.

Consumer Electronics Show Product Demonstration: Omek Interactive
Janine Kutliroff, CEO, and Eli Elhadad, VP of Game Development, demonstrate Omek Interactive's latest embedded vision technologies and products at the January 2013 Consumer Electronics Show. Specifically, they are demonstrating Omek’s close-range, high-resolution gesture recognition middleware, Grasp, in various example applications.

More Videos

FEATURED ARTICLES

Using Xilinx FPGAs to Solve Endoscope System Architecture Challenges
Image enhancement functions – noise reduction, edge enhancement, dynamic range correction, digital zoom, scaling, etc. – are key elements of many embedded vision designs, improving the ability of downstream algorithms to automatically extract meaning from the image. Interface flexibility and performance are also important attributes of many embedded vision systems. All of these concepts are discussed in this case study article, a reprint of a Xilinx-published white paper. More
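To make two of the enhancement functions named above concrete, here is a minimal NumPy sketch of noise reduction (a simple box blur) and edge enhancement (unsharp masking). This is purely illustrative software pseudocode of the general techniques; the article itself describes FPGA implementations, and the kernel size and sharpening amount below are arbitrary choices, not values from the white paper.

```python
import numpy as np

def box_blur(img, k=3):
    # Noise reduction: average each pixel over a k x k neighborhood
    # (edge pixels are handled by replicating the border).
    pad = k // 2
    padded = np.pad(img.astype(np.float32), pad, mode="edge")
    out = np.zeros(img.shape, dtype=np.float32)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def unsharp(img, amount=0.5):
    # Edge enhancement via unsharp masking: boost the high-frequency
    # detail that the blur removed, then clamp back to 8-bit range.
    blurred = box_blur(img)
    sharpened = img.astype(np.float32) + amount * (img - blurred)
    return np.clip(sharpened, 0, 255).astype(np.uint8)

# Process a synthetic 8-bit grayscale frame (QVGA resolution).
frame = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
out = unsharp(frame)
print(out.shape, out.dtype)  # (240, 320) uint8
```

In a real pipeline these stages would typically run in fixed-point arithmetic on streaming pixel data rather than on whole frames in floating point, which is exactly the kind of architectural trade-off the case study explores.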

Particle Board Quality Control Using Parallel Processing of National Instruments Smart Cameras
The challenge: automating portions of a particle board manufacturing line to prevent exposure to harmful dust and chemicals and to improve the accuracy of production. The solution: modify the existing particle board manufacturing machine to include three NI 1762 Smart Cameras, programmed with custom algorithms to completely integrate with a green factory environment. More

More Articles

FEATURED COMMUNITY DISCUSSIONS

Using the Face Detection OpenCV Java API

Some Good Books Covering Computer Vision

OpenCV Functions for Omnidirectional Cameras

More Community Discussions

FEATURED NEWS

UC Berkeley's Professor Pieter Abbeel: The Embedded Vision Summit's Keynoter, and an Artificial Intelligence and Robotics Pioneer

Tensilica's IVP: Your Embedded Vision Processing Core Candidate List Just Got Another Entry

Camera-Inclusive Systems: Don't Forget to Employ Sufficient Security Stratagems

More News


Contact

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596
Phone: +1 (925) 954-1411