Automotive

“Designing a Vision-based, Solar-powered Rear Collision Warning System,” a Presentation from Pearl Automation

Aman Sikka, Vision System Architect at Pearl Automation, presents the "Designing a Vision-based, Solar-powered Rear Collision Warning System" tutorial at the May 2017 Embedded Vision Summit. Bringing vision algorithms into mass production requires carefully balancing trade-offs between accuracy, performance, usability, and system resources. In this talk, Sikka describes the vision algorithms along with the system […]
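The excerpt doesn't spell out Pearl's algorithm, but a common monocular building block for rear collision warning is estimating time-to-collision (TTC) from how fast a tracked vehicle's bounding box grows between frames. A minimal sketch of that idea, assuming a pinhole camera and a tracker that reports box widths (both illustrative assumptions, not details from the presentation):

```python
# Hedged sketch: time-to-collision (TTC) from the scale change of a
# tracked bounding box across frames -- a textbook monocular approach,
# not necessarily Pearl's algorithm.

def ttc_from_scale(width_prev: float, width_curr: float, dt: float) -> float:
    """Estimate TTC in seconds from the growth of an approaching
    vehicle's apparent width between two frames taken dt seconds apart.

    Under a pinhole model, apparent width w is proportional to 1/Z, so
    TTC = Z / (-dZ/dt) reduces to dt * w_prev / (w_curr - w_prev).
    """
    growth = width_curr - width_prev
    if growth <= 0:               # target holding distance or receding
        return float("inf")
    return dt * width_prev / growth

# Example: a bounding box grows from 120 px to 126 px in one 30 fps frame
print(ttc_from_scale(120.0, 126.0, 1.0 / 30.0))  # ~0.67 s -> warn
```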

“Collaboratively Benchmarking and Optimizing Deep Learning Implementations,” a Presentation from General Motors

Unmesh Bordoloi, Senior Researcher at General Motors, presents the "Collaboratively Benchmarking and Optimizing Deep Learning Implementations" tutorial at the May 2017 Embedded Vision Summit. For car manufacturers and other OEMs, selecting the right processors to run deep learning inference for embedded vision applications is a critical but daunting task. One challenge is the vast number […]
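A minimal wall-clock harness gives a flavor of what comparing inference latency across candidates involves; the warmup/run counts and the single-sample `infer` callable below are illustrative assumptions, not GM's methodology:

```python
# Hedged sketch: a tiny latency benchmarking harness for comparing
# inference implementations. Plug in any callable (framework runtime,
# vendor SDK binding, etc.); numbers are wall-clock milliseconds.
import time
import statistics

def benchmark(infer, sample, warmup=10, runs=100):
    """Time single-sample inference and report mean and p99 latency."""
    for _ in range(warmup):            # warm caches/JIT before timing
        infer(sample)
    latencies_ms = []
    for _ in range(runs):
        t0 = time.perf_counter()
        infer(sample)
        latencies_ms.append((time.perf_counter() - t0) * 1e3)
    latencies_ms.sort()
    return {
        "mean_ms": statistics.mean(latencies_ms),
        "p99_ms": latencies_ms[int(0.99 * runs) - 1],
    }

# Example with a stand-in "model": sum over a 1k-element list
print(benchmark(sum, [1.0] * 1000))
```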

“Approaches for Vision-based Driver Monitoring,” a Presentation from PathPartner Technology

Jayachandra Dakala, Technical Architect at PathPartner Technology, presents the "Approaches for Vision-based Driver Monitoring" tutorial at the May 2017 Embedded Vision Summit. Since many road accidents are caused by driver inattention, assessing driver attention is important for preventing accidents. Distraction caused by other activities and sleepiness due to fatigue are the main causes of driver […]
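On the sleepiness side, one widely used landmark-based cue is the eye aspect ratio (EAR) of Soukupova and Cech (2016), which drops toward zero as the eyelid closes. The excerpt doesn't say whether PathPartner uses it, so the sketch below is illustrative, assuming six eye-contour landmarks as produced by common 68-point face landmarkers:

```python
# Hedged sketch: eye-aspect-ratio (EAR) drowsiness cue from six eye
# landmarks ordered around the contour: [left corner, top-left,
# top-right, right corner, bottom-right, bottom-left].
import math

def eye_aspect_ratio(eye):
    """Ratio of eye height to eye width; small values mean closed eyes."""
    d = math.dist
    return (d(eye[1], eye[5]) + d(eye[2], eye[4])) / (2.0 * d(eye[0], eye[3]))

EAR_THRESHOLD = 0.21      # typical closed-eye cutoff; tune per camera
CLOSED_FRAMES_ALERT = 15  # ~0.5 s of closure at 30 fps triggers an alert

# Example: a wide-open eye has EAR well above the threshold
open_eye = [(0, 0), (2, -2), (4, -2), (6, 0), (4, 2), (2, 2)]
print(eye_aspect_ratio(open_eye))  # ~0.67
```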

“Automakers at a Crossroads: How Embedded Vision and Autonomy Will Reshape the Industry,” a Presentation from Lux Research

Mark Bünger, VP of Research at Lux Research, presents the "Automakers at a Crossroads: How Embedded Vision and Autonomy Will Reshape the Industry" tutorial at the May 2017 Embedded Vision Summit. The auto and telecom industries have been dreaming of connected cars for twenty years, but their results have been mediocre and mixed. Now, just […]

“Computer-vision-based 360-degree Video Systems: Architectures, Algorithms and Trade-offs,” a Presentation from videantis

Marco Jacobs, VP of Marketing at videantis, presents the "Computer-vision-based 360-degree Video Systems: Architectures, Algorithms and Trade-offs" tutorial at the May 2017 Embedded Vision Summit. 360-degree video systems use multiple cameras to capture a complete view of their surroundings. These systems are being adopted in cars, drones, virtual reality, and online streaming systems. At first […]
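As a concrete starting point, two overlapping views can be stitched pairwise with feature matching and a homography, as in the OpenCV sketch below; production surround-view systems instead rely on calibrated fisheye intrinsics/extrinsics and precomputed warp tables, so treat this as illustrative only:

```python
# Hedged sketch: naive two-view stitch via ORB features + RANSAC homography.
import cv2
import numpy as np

def stitch_pair(img_a, img_b):
    """Warp img_b into img_a's frame and paste img_a on top (no blending)."""
    gray_a = cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(2000)
    kp_a, des_a = orb.detectAndCompute(gray_a, None)
    kp_b, des_b = orb.detectAndCompute(gray_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)[:200]
    src = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # maps b -> a
    h, w = img_a.shape[:2]
    canvas = cv2.warpPerspective(img_b, H, (w * 2, h))
    canvas[:h, :w] = img_a            # real systems blend the seam instead
    return canvas
```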

“Choosing the Optimum Mix of Sensors for Driver Assistance and Autonomous Vehicles,” a Presentation from NXP Semiconductors

Ali Osman Ors, Director of Automotive Microcontrollers and Processors at NXP Semiconductors, presents the "Choosing the Optimum Mix of Sensors for Driver Assistance and Autonomous Vehicles" tutorial at the May 2017 Embedded Vision Summit. A diverse set of sensor technologies is available and emerging to provide vehicle autonomy or driver assistance. These sensor technologies often […]
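A toy illustration of why mixing modalities pays off (my example, not an NXP reference design): fuse a radar range, accurate in depth, with a camera range estimate, stronger at classification than at depth, using inverse-variance weighting, the scalar core of a Kalman update:

```python
# Hedged sketch: minimum-variance fusion of two independent range
# estimates of the same target (scalar Kalman-style update).

def fuse(z_radar, var_radar, z_camera, var_camera):
    """Return the fused estimate and its (smaller) variance."""
    w_radar = var_camera / (var_radar + var_camera)
    z = w_radar * z_radar + (1.0 - w_radar) * z_camera
    var = (var_radar * var_camera) / (var_radar + var_camera)
    return z, var

# Radar: 25.0 m with variance 0.1; camera: 27.0 m with variance 2.0
print(fuse(25.0, 0.1, 27.0, 2.0))  # ~(25.10 m, 0.095): trusts the radar
```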

“Implementing an Optimized CNN Traffic Sign Recognition Solution,” a Presentation from NXP Semiconductors and Au-Zone Technologies

Rafal Malewski, Head of the Graphics Technology Engineering Center at NXP Semiconductors, and Sébastien Taylor, Vision Technology Architect at Au-Zone Technologies, present the "Implementing an Optimized CNN Traffic Sign Recognition Solution" tutorial at the May 2017 Embedded Vision Summit. Now that the benefits of using deep neural networks for image classification are well known, the […]
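For a sense of scale, a traffic sign classifier for the 43 classes of the public GTSRB benchmark can be only a few layers deep; the PyTorch sketch below is an illustrative architecture, not the network the presenters optimized:

```python
# Hedged sketch: compact CNN for 43-class traffic sign classification
# on 32x32 RGB crops (GTSRB-style). Layer sizes are illustrative.
import torch
import torch.nn as nn

class TrafficSignNet(nn.Module):
    def __init__(self, num_classes: int = 43):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32 -> 16
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 8 -> 4
        )
        self.classifier = nn.Linear(64 * 4 * 4, num_classes)

    def forward(self, x):
        return self.classifier(torch.flatten(self.features(x), 1))

logits = TrafficSignNet()(torch.randn(1, 3, 32, 32))
print(logits.shape)  # torch.Size([1, 43])
```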

“Moving CNNs from Academic Theory to Embedded Reality,” a Presentation from Synopsys

Tom Michiels, System Architect for Embedded Vision Processors at Synopsys, presents the "Moving CNNs from Academic Theory to Embedded Reality" tutorial at the May 2017 Embedded Vision Summit. In this presentation, you will learn to recognize and avoid the pitfalls of moving from an academic CNN/deep learning graph to a commercial embedded vision design. You […]
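One recurring step on that path is trading float32 precision for footprint. A minimal sketch using PyTorch's post-training dynamic quantization (the toy model is my own, and a real embedded flow would use the target processor's vendor toolchain):

```python
# Hedged sketch: post-training dynamic quantization to int8 weights.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(1024, 256), nn.ReLU(), nn.Linear(256, 10))
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8  # int8 weights for Linear layers
)
# Linear weights shrink ~4x; activations stay float and are quantized
# on the fly at inference time.
print(quantized(torch.randn(1, 1024)).shape)  # torch.Size([1, 10])
```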
