Embedded Vision Insights: July 15, 2014 Edition


In this edition of Embedded Vision Insights:

LETTER FROM THE EDITOR

Dear Colleague,

The steady stream of new videos on the Alliance website from the recent Embedded Vision Summit continues unabated. Newly published technical tutorials cover augmented reality for wearable devices and the "Internet of Things" (from AugmentedReality.org), processor optimization for pedestrian detection (from Synopsys), and the implementation of HOG, the histogram of oriented gradients algorithm used in object detection (from videantis). And still to come are nearly two dozen product demonstration videos captured at the event. Regularly revisit the website and keep an eye on the Summit content archive page for them. If you sign up for the Alliance's Facebook, LinkedIn or Twitter channels, or its RSS feed, you'll receive a notification each time a new piece of content appears.
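If you'd like to experiment with HOG-based detection before (or after) watching the videantis tutorial, the sketch below shows one common approach: OpenCV's pre-trained HOG-plus-linear-SVM people detector, called through the OpenCV Java bindings. It is an illustration only, not code from the talk; the file names are hypothetical and the calls assume an OpenCV 3.x Java install.

```java
import org.opencv.core.*;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
import org.opencv.objdetect.HOGDescriptor;

public class HogPedestrianSketch {
    public static void main(String[] args) {
        // Load the OpenCV native library (assumes the OpenCV Java bindings are installed).
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // "street.jpg" is a hypothetical input image containing pedestrians.
        Mat frame = Imgcodecs.imread("street.jpg");

        // HOG descriptor paired with OpenCV's pre-trained linear-SVM people detector.
        HOGDescriptor hog = new HOGDescriptor();
        hog.setSVMDetector(HOGDescriptor.getDefaultPeopleDetector());

        // Slide the detection window over the image at multiple scales.
        MatOfRect detections = new MatOfRect();
        MatOfDouble weights = new MatOfDouble();
        hog.detectMultiScale(frame, detections, weights);

        // Draw a box around each detection and write out the annotated image.
        for (Rect r : detections.toArray()) {
            Imgproc.rectangle(frame, r.tl(), r.br(), new Scalar(0, 255, 0), 2);
        }
        Imgcodecs.imwrite("street_detections.jpg", frame);
    }
}
```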

The day after the Summit, the Embedded Vision Alliance held its quarterly Member Meeting. Last time, I told you about the first video from this meeting, the presentation on "Project Tango" depth-sensing mobile devices from Google's Johnny Lee. And now I have the pleasure of sharing with you the other four published sessions from that day. They are:

And speaking of Embedded Vision Summits, I'll close out with a "teaser" about next year's event. It's currently scheduled for April 30, once again preceded the day before by Alliance member company-led workshops, and will feature an expanded technical program. An overview page is now up on the website; mark your calendars and check back periodically for updates. Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. Please don't hesitate to let me know how the Alliance can better serve your needs.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

FEATURED VIDEOS

Embedded Vision Summit Technical Presentation: "Exploiting Synergy Between Image Improvement and Image Understanding Algorithms," Michael Tusch, Apical
Michael Tusch, founder and CEO of Apical, presents the "Exploiting Synergy Between Image Improvement and Image Understanding Algorithms" tutorial within the "Image Sensors and Front-End Image Processing" technical session at the April 2013 Embedded Vision Summit.


Camera 3: The New Camera Subsystem in Android 4.4 "KitKat"
Google's Android 4.4 (KitKat) OS shipped with some hidden Java APIs for the new Camera 3 framework. This slide deck was presented by Aptina's Balwinder Kaur at AnDevCon 2013, held in Burlingame, California on November 15, 2013. It covers the history of the Camera framework, the new hidden APIs in Android 4.4, and some basics of the Camera platform framework.
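For context, the hidden KitKat APIs the deck describes became the public android.hardware.camera2 package in Android 5.0. The snippet below is a minimal sketch of that programming model, not code from the presentation, and it assumes Android 5.0 or later, where the API is public, plus the CAMERA permission.

```java
import android.content.Context;
import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraDevice;
import android.hardware.camera2.CameraManager;

public class CameraOpener {
    // Opens the first camera reported by the framework; requires the CAMERA permission.
    public static void openFirstCamera(Context context) throws CameraAccessException {
        CameraManager manager =
                (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
        String cameraId = manager.getCameraIdList()[0];

        manager.openCamera(cameraId, new CameraDevice.StateCallback() {
            @Override
            public void onOpened(CameraDevice device) {
                // A CameraCaptureSession would be created here to stream preview
                // frames and issue per-frame capture requests.
            }

            @Override
            public void onDisconnected(CameraDevice device) {
                device.close();
            }

            @Override
            public void onError(CameraDevice device, int error) {
                device.close();
            }
        }, null); // null handler: callbacks arrive on the calling thread's Looper
    }
}
```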

More Videos

FEATURED ARTICLES

Embedded Vision Growth Predicted Across Various Application Markets (IHS)
Shipments of embedded vision devices in the automotive, industrial automation, physical security and business intelligence markets are forecast to exceed 14 million units in 2018, up from almost four million units this year. Utilizing a combination of embedded systems and computer vision, embedded vision enables devices to use video inputs to better understand their environment, applying logic and decision making to video signals. More

Realizing the Benefits of GPU Compute for Real Applications with Mali GPUs (ARM)
I have just returned from a fortnight spent hopping around Asia in support of a series of ARM-hosted events we call the Multimedia Seminars, which took place in Seoul, Taipei and Shenzhen. Several hundred attendees joined in each location, a high-quality cross-section of the local semiconductor and consumer industries, including many silicon vendors, OEMs, ODMs and ISVs. All of them were able to hear about the great progress made by the ARM ecosystem partners who are advancing the use of GPU Compute on ARM Mali GPUs. More

More Articles

FEATURED NEWS

SoftKinetic and Melexis First to Bring 3D Vision To Automobile Infotainment

Get the Best of Both Worlds: New Processor Family Brings Higher Performance with Real-Time Processing

Qualcomm Introduces Snapdragon Automotive Solutions for Connected In-Car Infotainment

CEVA and Visidon Partner to Bring Ultra-Low Power Always-On Face Activation Technology to Mobile Devices

ARM Enhances IP Suite for 2015 Mid-Range Mobile and Consumer Markets

More News
