Embedded Vision Insights: October 15, 2013 Edition


In this edition of Embedded Vision Insights:

LETTER FROM THE EDITOR

Dear Colleague,

Another Embedded Vision Summit has come and gone, and I'm feeling no shortage of satisfaction. This year's Embedded Vision Summit East drew even more attendees than last year, and they seemed pleased with the expanded program; the overall 8.6 (out of 10) event rating matched last year's equally impressive score. We're busy editing the videos of the various presentations and demonstrations, which will begin appearing on the website shortly. Keep an eye on the "Videos" page of the Alliance website for the content as it's published; subscribe to the Alliance's Facebook, LinkedIn and Twitter social media channels and site RSS feed for proactive notification.

For now, I encourage you to check out Alliance founder Jeff Bier's interview with Michael Tusch, founder and CEO of Apical, which took place the night before the Summit. Apical chose the event to introduce its Assertive Vision processor core, a real-time detection, classification and tracking engine capable of accurate analysis of people and other objects, and designed for integration into SoCs. With the Assertive Engine, Apical formally expands its business beyond conventional image processing into embedded vision processing, and Michael Tusch shares interesting perspectives on differences between the two processing approaches, as well as how they may coexist going forward.

Speaking of conferences, the Alliance will be well represented at several upcoming shows, both physical and virtual. Both Jeff Bier and SoftKinetic's Tim Droz will present at the late-October Interactive Technology Summit (formerly the Touch-Gesture-Motion Conference) put on by IHS (which acquired IMS Research in early 2012). In mid-November, Jeff Bier will represent the Alliance at AMD's Developer Summit. And while member company Nvidia's next GPU Technology Conference won't occur until next March, the company is supplementing the physical event with a yearlong series of online webinars, including one in early November on facial recognition algorithm acceleration given by Brian Lovell, an Alliance advisor, professor at the University of Queensland in Australia, and CTO of Imagus Technology.

Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. I welcome your emailed suggestions on what the Alliance can do better, as well as what else the Alliance can do, to better serve your needs.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

FEATURED VIDEOS

Apical Unveils the Assertive Engine Embedded Vision Processor Core
Jeff Bier, founder of the Embedded Vision Alliance, interviews Michael Tusch, founder and CEO of Apical Imaging, on the eve of the public launch of Apical's Assertive Engine embedded vision processing core. Assertive Vision is a real-time detection, classification and tracking engine capable of accurate analysis of people and other objects. With the Assertive Engine, Apical formally expands its business beyond conventional image processing into embedded vision processing. Bier and Tusch discuss (among other things) the differences between conventional image processing and computer vision processing, the value of (when possible) leveraging "raw" image data coming off a sensor for vision processing, and the potential for future systems that do image processing and vision processing tasks in parallel.

CEVA and iOnRoad Lane Departure Warning Demonstration
In this demonstration, you will see how the CEVA-MM3101 is used to implement a lane departure warning system. This application uses software from CEVA partner iOnRoad, an award-winning technology provider for personal driving assistance. The two companies have integrated and optimized the iOnRoad software for use with camera-enabled devices. This software-based demonstration showcases the flexibility and performance of the CEVA programmable platform. It is entirely written in C, and no hardware acceleration is required. The scalable CEVA-MM3101 could be used to combine multiple ADAS applications on the same core, and can handle multiple cameras as well.
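
For readers curious what a lane departure warning looks like at the code level, below is a minimal, self-contained sketch in plain C of a generic heuristic: threshold the bright lane markings in the lower half of a grayscale frame, estimate the lane center per row, and warn when the camera (vehicle) center drifts too far from it. The frame size, thresholds, and synthetic test frame are illustrative assumptions; this is not iOnRoad's algorithm or CEVA's optimized implementation.

/*
 * Minimal, generic lane departure heuristic -- purely illustrative.
 * Assumed parameters (W, H, BRIGHT, MAX_OFFSET) and the synthetic test
 * frame are not taken from the iOnRoad/CEVA demonstration.
 */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define W 320
#define H 240
#define BRIGHT 200      /* intensity threshold for lane paint */
#define MAX_OFFSET 25.0 /* warn if |lane center - image center| exceeds this (pixels) */

/* Fill a synthetic frame: dark road with two bright lane lines in the lower half. */
static void make_test_frame(unsigned char *img, int lane_shift)
{
    for (int y = 0; y < H; y++) {
        for (int x = 0; x < W; x++)
            img[y * W + x] = 40; /* road */
        if (y >= H / 2) {
            int left  = W / 2 - 60 + lane_shift;
            int right = W / 2 + 60 + lane_shift;
            for (int d = -2; d <= 2; d++) {
                if (left + d >= 0 && left + d < W)   img[y * W + left + d]  = 255;
                if (right + d >= 0 && right + d < W) img[y * W + right + d] = 255;
            }
        }
    }
}

/* Return the average lane-center offset from the image center, in pixels. */
static double lane_center_offset(const unsigned char *img)
{
    double sum = 0.0;
    int rows = 0;

    for (int y = H / 2; y < H; y++) {
        int left = -1, right = -1;
        /* Nearest bright pixel to the left and to the right of the image center. */
        for (int x = W / 2; x >= 0; x--)
            if (img[y * W + x] >= BRIGHT) { left = x; break; }
        for (int x = W / 2; x < W; x++)
            if (img[y * W + x] >= BRIGHT) { right = x; break; }
        if (left >= 0 && right >= 0) {
            sum += 0.5 * (left + right) - W / 2.0;
            rows++;
        }
    }
    return rows ? sum / rows : 0.0;
}

int main(void)
{
    unsigned char *frame = malloc(W * H);
    if (!frame) return 1;

    /* Simulate the vehicle drifting: the lane appears to shift sideways. */
    for (int shift = 0; shift <= 40; shift += 10) {
        make_test_frame(frame, shift);
        double offset = lane_center_offset(frame);
        printf("lane shift %2d px -> center offset %6.1f px : %s\n",
               shift, offset,
               fabs(offset) > MAX_OFFSET ? "LANE DEPARTURE WARNING" : "ok");
    }
    free(frame);
    return 0;
}

The per-row pixel scans above are the kind of regular, data-parallel loops that an optimized implementation would typically vectorize on a programmable vision core such as the CEVA-MM3101, which is what makes an all-software approach, with no hardware acceleration, practical.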

More Videos

FEATURED ARTICLES

3D Sensing Gets Personal
The last two embedded vision blog entries focused on new vision sensor technologies, high-performance system architectures, and algorithms that are gaining acceptance in robotics and automotive applications. This week, I want to turn to the consumer market, where one of the most interesting uses of new vision technologies is the creation of new types of user interfaces that are more natural for users. More

New Rules Will Not Slow UK CCTV Market Growth
Although the British government recently launched a new video surveillance code of practice, the new rules will have limited impact on the growth of the UK market for video surveillance equipment, according to a new report from IHS Inc., a leading global source of critical information and insight. More

More Articles

FEATURED NEWS

The IHS Interactive Technology Summit: See How Vision Fits Into It

Apical Announces the Release of Its New Smart Camera Engine, Assertive Vision

AMD's Upcoming Developer Summit: Vision Is Central to It

Apple's iPhone 5S: The Camera's Once Again the Primary Focus

Face Recognition: Learn About GPU Acceleration

More News

