Embedded Vision Insights: February 2, 2016 Edition


In this edition of Embedded Vision Insights:

LETTER FROM THE EDITOR

Dear Colleague,

With only three months to go until the Embedded Vision Summit, May 2-4 in Santa Clara, California, the conference organizers are making great progress on the presentation program. A number of titles, abstracts and speaker biographies are already published, and more session information is coming soon.

Check out this year's greatly expanded Business Insights track, along with the Technical Insights and Enabling Technologies tracks. And don't forget about the Vision Tank, a deep learning- and vision-based product competition whose finalists will present at the Summit; the deadline for entries is March 1. Register today for the Embedded Vision Summit, an educational forum for product creators interested in incorporating visual intelligence into electronic systems and software, and receive a 15% Early Bird discount by using promotional code 05EVI.

I'd also like to remind you of several other upcoming industry events. On February 22 from 9 AM to 5:30 PM in Santa Clara, California, the primary developers of the popular open-source Caffe convolutional neural network framework will present a one-day in-depth technical tutorial on deep learning for vision. See a recently published interview with these same developers for more information on Caffe, and register for the tutorial online while spots are still available. Nearer term, on February 9, Alliance member company Cadence is hosting an introductory series of deep learning presentations at the company's San Jose, California campus.

And of course, while you're on the Alliance website, make sure to also check out the other great recently published content there. Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. Please don't hesitate to let me know how the Alliance can better serve your needs.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

FEATURED VIDEOS

"Self-Driving Cars," a Presentation from GoogleGoogle
Nathaniel Fairfield, Technical Lead at Google, presents the "Self-Driving Cars" keynote at the May 2014 Embedded Vision Summit. Self-driving cars have the potential to transform how we move: they promise to make us safer, give freedom to millions of people who can't drive, and give people back their time. The Google Self-Driving Car project was created to rapidly advance autonomous driving technology and build on previous research. For the past four years, Google has been working to make cars that drive reliably on many types of roads, using lasers, cameras, and radar, together with a detailed map of the world. Fairfield describes how Google leverages maps to assist with challenging perception problems such as detecting traffic lights, and how the different sensors can be used to complement each other. Google's self-driving cars have now traveled more than half a million miles autonomously. In this talk, Fairfield discusses Google's overall approach to solving the driving problem, the capabilities of the car, the company's progress so far, and the remaining challenges to be resolved.

"Computer Vision and Artificial Intelligence: Market Trends and Implications," a Presentation from TracticaTractica
Anand Joshi of Tractica delivers the presentation, "Computer Vision and Artificial Intelligence: Market Trends and Implications," at the December 2015 Embedded Vision Alliance Member Meeting. Joshi presents market forecasts for vision in robotics, consumer, automotive, medical and other sectors.

More Videos

FEATURED ARTICLES

Using Convolutional Neural Networks for Image Recognition
Convolutional neural networks (CNNs) are widely used in pattern- and image-recognition problems, as they have a number of advantages compared to other techniques. This technical article from Cadence covers the basics of CNNs, including a description of the various layers used. Using traffic sign recognition as an example, it discusses the challenges of the general problem and introduces algorithms and implementation software that can trade off computational burden and energy for a modest degradation in sign recognition rates. It outlines the challenges of using CNNs in embedded systems, and introduces the key characteristics of a new digital signal processor (DSP) for imaging and computer vision that is suitable for CNN applications across many imaging and related recognition tasks. More
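For readers who would like a concrete picture of what those layers compute, the following is a minimal NumPy sketch (ours, not taken from the Cadence article) of the two building blocks at the heart of a CNN: a convolutional layer with ReLU activation, followed by max pooling. The image size, kernel size and values are illustrative assumptions only.

    import numpy as np

    def conv2d(image, kernel):
        """Valid 2-D convolution (strictly, cross-correlation, as in most
        CNN frameworks) of a single-channel image with a single kernel."""
        kh, kw = kernel.shape
        ih, iw = image.shape
        out = np.zeros((ih - kh + 1, iw - kw + 1))
        for y in range(out.shape[0]):
            for x in range(out.shape[1]):
                out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
        return out

    def relu(x):
        """Element-wise rectified linear activation."""
        return np.maximum(x, 0.0)

    def max_pool(feature_map, size=2):
        """Non-overlapping max pooling, shrinking each spatial dimension."""
        h, w = feature_map.shape
        h, w = h - h % size, w - w % size  # trim to a multiple of the pool size
        pooled = feature_map[:h, :w].reshape(h // size, size, w // size, size)
        return pooled.max(axis=(1, 3))

    # Toy forward pass: one conv layer, ReLU, then pooling, on a fake
    # 32x32 grayscale "traffic sign" image with a random 3x3 kernel.
    rng = np.random.default_rng(0)
    image = rng.random((32, 32))
    kernel = rng.standard_normal((3, 3))
    features = max_pool(relu(conv2d(image, kernel)))
    print(features.shape)  # (15, 15): 32-3+1 = 30 after conv, 15 after 2x2 pooling

A real recognition network stacks several such conv/pool stages (with many kernels per layer, learned during training) and finishes with fully connected layers that map the pooled features to class scores; the article describes those layers and their embedded-implementation trade-offs in detail.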

Deep Learning at the Boston Image Processing Computer Vision Group
Vin Ratford, Executive Director of the Embedded Vision Alliance and co-founder of Alliance member company Auviz Systems, provides an overview of the Boston Image Processing Computer Vision Group, along with a report on a recent group meeting. Editor's note: the next meeting of the Boston Image Processing Computer Vision Group is this Thursday, February 4. More

More Articles

FEATURED NEWS

Google and Movidius Work Together to Enhance Deep Learning Capabilities in Next-Generation Devices

Xilinx Ships 16nm Virtex UltraScale+ Devices; Industry’s First High-End FinFET FPGAs

Videantis Partners with Almalence For Higher Quality Imaging

VeriSilicon Completes Acquisition of Vivante

The IEEE ICIP's Visual Innovation Award: Get Your Nomination Candidate On Board

More News
