Automotive

“Choosing the Optimum Mix of Sensors for Driver Assistance and Autonomous Vehicles,” a Presentation from NXP Semiconductors

Ali Osman Ors, Director of Automotive Microcontrollers and Processors at NXP Semiconductors, presents the "Choosing the Optimum Mix of Sensors for Driver Assistance and Autonomous Vehicles" tutorial at the May 2017 Embedded Vision Summit. A diverse set of sensor technologies is available and emerging to provide vehicle autonomy or driver assistance. These sensor technologies often […]

“Implementing an Optimized CNN Traffic Sign Recognition Solution,” a Presentation from NXP Semiconductors and Au-Zone Technologies

Rafal Malewski, Head of the Graphics Technology Engineering Center at NXP Semiconductors, and Sébastien Taylor, Vision Technology Architect at Au-Zone Technologies, present the "Implementing an Optimized CNN Traffic Sign Recognition Solution" tutorial at the May 2017 Embedded Vision Summit. Now that the benefits of using deep neural networks for image classification are well known, the […]

“Moving CNNs from Academic Theory to Embedded Reality,” a Presentation from Synopsys

Tom Michiels, System Architect for Embedded Vision Processors at Synopsys, presents the "Moving CNNs from Academic Theory to Embedded Reality" tutorial at the May 2017 Embedded Vision Summit. In this presentation, you will learn to recognize and avoid the pitfalls of moving from an academic CNN/deep learning graph to a commercial embedded vision design. You […]

May 2017 Embedded Vision Summit Slides

The Embedded Vision Summit was held on May 1-3, 2017 in Santa Clara, California, as an educational forum for product creators interested in incorporating visual intelligence into electronic systems and software. The presentations delivered at the Summit are listed below. All of the slides from these presentations are included in […]

Facial Analysis Delivers Diverse Vision Processing Capabilities

Computers can learn a lot about a person from their face – even if they don’t uniquely identify that person. Assessments of age range, gender, ethnicity, gaze direction, attention span, emotional state and other attributes are all now possible at real-time speeds, via advanced algorithms running on cost-effective hardware. This article provides an overview of […]

“Making Existing Cars Smart Via Embedded Vision and Deep Learning,” a Presentation from NAUTO

Stefan Heck, CEO and co-founder of NAUTO, presents the "Making Existing Cars Smart Via Embedded Vision and Deep Learning" tutorial at the May 2016 Embedded Vision Summit. NAUTO is a system that consists of a device, network and app. It's an affordable way to upgrade any car to get network and safety features previously available […]

“Sensing Technologies for the Autonomous Vehicle,” a Presentation from NXP Semiconductors

Tom Wilson, ADAS Product Line Manager at NXP Semiconductors, presents the "Sensing Technologies for the Autonomous Vehicle" tutorial at the May 2016 Embedded Vision Summit. Autonomous vehicles will necessarily utilize a range of sensing technologies to see and react to their surroundings. We are witnessing dramatic advances not just for embedded vision, but also in […]

“What’s Hot in Embedded Vision for Investors?,” an Embedded Vision Summit Panel Discussion

Jeff Bier of the Embedded Vision Alliance (moderator), Don Faria of Intel Capital, Jeff Hennig of Bank of America Merrill Lynch, Gabriele Jansen of Vision Ventures, Helge Seetzen of TandemLaunch, and Peter Shannon of Firelake Capital Management participate in the Investor Panel at the May 2016 Embedded Vision Summit. This moderated panel discussion addresses emerging […]

“Computer Vision in Cars: Status, Challenges, and Trends,” a Presentation from videantis

Marco Jacobs, Vice President of Marketing at videantis, presents the "Computer Vision in Cars: Status, Challenges, and Trends" tutorial at the May 2016 Embedded Vision Summit. Just as horse carriages were replaced by cars in the 1920s, human operators in our cars will be replaced by electronics in the 2020s. The benefits are tremendous: self-driving […]
