Dear Colleague,
As any of you who watched last May's Embedded Vision Summit keynote from Facebook's Yann LeCun already know, convolutional neural networks (CNNs) are an increasingly popular means of extracting meaning from images. Not surprisingly, therefore, they're the focus of multiple presentations at the upcoming Embedded Vision Summit.
-
First off is the keynote from Dr. Ren Wu, distinguished scientist at Baidu's Institute of Deep Learning. Dr. Wu's talk, “Enabling Ubiquitous Visual Intelligence Through Deep Learning,” will focus on the use of CNN techniques to index multimedia content such as still images and videos.
-
Deshanand Singh, Director of Software Engineering at Altera, will deliver the technical presentation "Efficient Implementation of Convolutional Neural Networks using OpenCL on FPGAs." The talk will provide a detailed explanation of how CNN algorithms can be expressed in OpenCL and compiled directly to FPGA hardware.
-
Nagesh Gupta, CEO and Founder of Auviz Systems, will speak on "Trade-offs in Implementing Deep Neural Networks on FPGAs." Gupta's talk will present alternative implementations of 3D convolutions (a core part of CNNs) on FPGAs, and discuss trade-offs among them.
-
Jeff Gehlhaar, Vice President of Technology at Qualcomm, will present "Deep-learning-based Visual Perception in Mobile and Embedded Devices: Opportunities and Challenges." Gehlhaar will explore applications and use cases where on-device deep-learning-based visual perception provides benefits, the challenges that these applications face, and techniques to overcome them.
-
And Bruno Lavigueur, Project Leader for Embedded Vision at Synopsys, will give the talk "Tailoring Convolutional Neural Networks for Low-Cost, Low-Power Implementation." Lavigueur will share his company's experience in reducing the complexity of CNN graphs to make the resulting algorithm amenable to low-cost and low-power computing platforms.
Check out the Alliance website for in-depth presenter biographies and abstracts of each of these talks, as well as information on the second keynote (from robot vision pioneer Mike Aldred of Dyson), other technical sessions, and the technology showcase. The Embedded Vision Summit, an educational forum for product creators interested in incorporating visual intelligence into electronic systems and software, takes place on May 12, 2015 at the Santa Clara (California) Convention Center; accompanying partial- and full-day workshops will occur on May 11 and 13. Register today, while the "early bird" limited-time 20% discount is still available!
And while you're on the Alliance website, check out all the other great content recently published there, including nearly a dozen new videos of Alliance member demonstrations at January's Consumer Electronics Show, showcasing applications and functions such as ADAS, face detection, and object tracking. Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. As always, I welcome your suggestions on what the Alliance can do to better serve your needs.
Brian Dipert
Editor-In-Chief, Embedded Vision Alliance
|
Embedded Vision Alliance Interview on Vision in ADAS with Ian Riches of Strategy Analytics
Brian Dipert, Editor-in-Chief of the Embedded Vision Alliance, interviews Ian Riches, Director of Global Automotive Practice at Strategy Analytics, at the January 2015 Consumer Electronics Show. Brian and Ian discuss the history, current status, and future trends of vision-based ADAS (advanced driver assistance systems). Also see "Smart In-Vehicle Cameras Increase Driver and Passenger Safety," a technical article co-authored by Ian and published by the Alliance.
CogniVue Demonstrations of Multiple Generations of APEX Vision Processors
Tom Wilson, VP of Product Management at CogniVue, demonstrates the company's latest embedded vision technologies and products at the January 2015 Consumer Electronics Show. Specifically, Wilson demonstrates both ADAS and consumer electronics applications powered by the company's multiple generations of APEX image cognition processors for embedded vision.
More Videos
|
FPGA Programmability and Performance Empower Machine Vision and Surveillance
Cameras and other equipment used in surveillance and machine vision perform a variety of tasks, ranging from image signal processing, video transport and video format conversion to video compression and analytics. This reprint of a white paper from Altera, an Alliance member company, discusses the roles that FPGAs play in these applications and functions. More
Barrier-Free Parking with ANPR Technology: A Real Option?
As cities around the world grow more congested, investment in off-street parking facilities continues to increase. In recent years, automated number plate recognition (ANPR) cameras and software have started to be used as part of an overall solution that also includes barriers, payment machines, and loop detectors. The main use of ANPR equipment is to alleviate issues related to ticket payments, where a license plate can be looked up in the system to verify entry times, or for security purposes. But what if ANPR equipment were the sole means of automating parking structures? This trend is starting to occur. More
More Articles
|