Embedded Vision Insights: April 24, 2014 Edition

In this edition of Embedded Vision Insights:

LETTER FROM THE EDITOR

Dear Colleague,

There’s been quite a burst of interesting news lately about vision technology being used in mobile devices, a topic which has also been regularly covered in past presentations and articles hosted on the Alliance website. A month back, for example, we discussed the rumored depth-sensing capabilities of the latest "M8" variant of HTC's One smartphone, capabilities that were confirmed at the handset's unveiling a short time later, complete with a product teardown. We also covered Google's revolutionary Project Tango handset, which showcases robust 3D mapping facilities.

Project Tango has also recently been disassembled and analyzed, and found to contain an infrared projector and multiple embedded vision processors. And just a few days ago, the first photos of the rumored coming-soon Amazon-branded smartphone surfaced, along with details of some intriguing claimed embedded resources: a beefy Qualcomm application processor, conventional front and rear cameras, and four front-mounted infrared sensors reportedly intended for head- and eye-tracking purposes.

These and other trendsetting embedded vision capabilities, not just for mobile electronics devices but for a plethora of systems, will be on display at the Embedded Vision Summit West in just over a month. Taking place May 29th at the Santa Clara (California) Convention Center, its comprehensive program encompasses sixteen technical presentations across two tracks, hour-long keynotes from both Facebook and Google, and technology demonstrations from nearly two dozen Alliance member companies. Two in-depth technical workshops are additionally offered the prior day. The Embedded Vision Summit West is also co-located with the Augmented World Expo, with special discounts available to Summit attendees.

Last year's Embedded Vision Summit sold out, so I encourage you to register today rather than delay further! And while you're on the Alliance website, make sure you check out all the other great new content published there in recent weeks. Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. I welcome your emailed suggestions on what the Alliance can do better, as well as what else the Alliance can do, to better serve your needs.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

FEATURED VIDEOS

Embedded Vision Summit Technical Presentation: "Developing Low-Cost, Low-Power, Small Vision Systems," Simon Morris, CogniVue
Simon Morris, CEO of CogniVue, presents the "Developing Low-Cost, Low-Power, Small Vision Systems" tutorial within the "Embedded Vision Applications" technical session at the April 2013 Embedded Vision Summit.

January 2014 Consumer Electronics Show Product Demonstration: GEO Semiconductor
Zorawar Bassi, CTO, demonstrates GEO Semiconductor's latest embedded vision technologies and products at the January 2014 Consumer Electronics Show. Specifically, Zorawar demonstrates various ADAS applications running on the company's geometric processors.

More Videos

FEATURED ARTICLES

Augmented Reality: A Compelling Mobile Embedded Vision Opportunity
Although augmented reality was first proposed and crudely demonstrated nearly fifty years ago, its implementation was until recently only possible on bulky and expensive computers. Nowadays, however, the fast, low-power, cost-effective processors and high-resolution, high-frame-rate image sensors found in mobile electronics devices, along with the increasingly powerful software that these systems run, enable developers to bring AR to the masses. This article was originally published at Electronic Engineering Journal. More

 

How GPUs Help Computers Understand What They’re Seeing
Building on the work of neural network pioneers Kunihiko Fukushima and Yann LeCun – and more recent efforts by teams at the University of Toronto – New York University researchers have used GPUs to dramatically improve the accuracy of earlier object recognition efforts. Rob Fergus, an NYU associate professor of computer science, told a packed room at NVIDIA’s GPU Technology Conference that GPUs helped his team build models that enable computers to understand what they’re looking at 50 times faster than with CPUs alone. That, in turn, reduced the rate of error when identifying objects in complex images from the 26 percent achieved with earlier approaches to 16 percent, a result achieved at the 2012 ImageNet Large Scale Visual Recognition Challenge. More
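
The speedup Fergus describes comes from running a convolutional network's arithmetic on the GPU rather than the CPU. As a rough, hypothetical illustration only (using the modern PyTorch library, not the tools the NYU team used, and a toy network far smaller than theirs), the sketch below classifies a single image with a small convolutional network, moving both the model and the data to the GPU when one is available:

# Illustrative sketch, not from the article: GPU-accelerated image
# classification with a toy convolutional network in PyTorch.
import torch
import torch.nn as nn

# Use the GPU if present; otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Toy convolutional classifier; real ImageNet-class models are far deeper.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 1000),  # 1000 output classes, as in ImageNet
).to(device).eval()

image = torch.rand(1, 3, 224, 224, device=device)  # stand-in for a real photo
with torch.no_grad():
    scores = model(image)
print("Predicted class index:", scores.argmax(dim=1).item())

The same code runs unchanged on CPU or GPU; only the device placement differs, which is what makes the GPU-versus-CPU comparison straightforward to measure.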

More Articles

FEATURED NEWS

The Rise of (Embedded) Vision Systems: Using Image Analysis Intelligence to Solve Robotic Design Problems

Another Day, Another Vision-Enhanced Handset: Amazon's Rumored Entrant is a Nifty 3D-Effects Interface Gadget

TI Broadens Offerings for Advanced Driver Assistance Systems with New Vision Software Development Kit, Embedded Vision Engine and DSP Libraries

Peering Inside an Advanced Smartphone: Google's Project Tango Implements Technology Already Well-Known

videantis Wins Red Herring Award For Its Vision and Video Processor IP Technology

More News

 
