
Embedded Vision Insights: August 1, 2017 Edition

COMPUTER VISION FOR IMAGE UNDERSTANDING
Semantic Segmentation for Scene Understanding: Algorithms and Implementations
Recent research in deep learning provides powerful tools that begin to address the daunting problem of automated scene understanding. Modifying deep learning methods, such as CNNs, to classify pixels in a scene with the help of the neighboring pixels has provided very […]

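For readers new to the technique this newsletter highlights, here is a minimal sketch of per-pixel classification with a fully convolutional network. It is an illustration only, not code from the article: the layer widths and the 12-class count are placeholder assumptions, and each output pixel is classified from the neighboring pixels that fall inside the network's receptive field.

```python
# Minimal illustrative sketch (not from the article): a tiny fully
# convolutional network that assigns a class to every pixel, using the
# neighboring pixels within its receptive field.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self, num_classes=12):                 # class count is a placeholder
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )
        # a 1x1 convolution turns the feature map into per-pixel class scores
        self.classifier = nn.Conv2d(32, num_classes, kernel_size=1)

    def forward(self, x):
        return self.classifier(self.features(x))        # (N, num_classes, H, W)

net = TinySegNet()
frame = torch.randn(1, 3, 240, 320)                     # dummy RGB frame
label_map = net(frame).argmax(dim=1)                    # per-pixel labels, shape (1, 240, 320)
print(label_map.shape)
```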

“Choosing the Optimum Mix of Sensors for Driver Assistance and Autonomous Vehicles,” a Presentation from NXP Semiconductors

Ali Osman Ors, Director of Automotive Microcontrollers and Processors at NXP Semiconductors, presents the "Choosing the Optimum Mix of Sensors for Driver Assistance and Autonomous Vehicles" tutorial at the May 2017 Embedded Vision Summit. A diverse set of sensor technologies is available and emerging to provide vehicle autonomy or driver assistance. These sensor technologies often […]

“Implementing an Optimized CNN Traffic Sign Recognition Solution,” a Presentation from NXP Semiconductors and Au-Zone Technologies

Rafal Malewski, Head of the Graphics Technology Engineering Center at NXP Semiconductors, and Sébastien Taylor, Vision Technology Architect at Au-Zone Technologies, present the "Implementing an Optimized CNN Traffic Sign Recognition Solution" tutorial at the May 2017 Embedded Vision Summit. Now that the benefits of using deep neural networks for image classification are well known, the […]

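As a rough companion to this abstract (and not the optimized network discussed in the talk), the sketch below shows the shape of a compact traffic-sign classifier: 32x32 sign crops in, class logits out. The 43-class count matches the public GTSRB benchmark; every layer size here is an assumption.

```python
# Illustrative sketch only; not the optimized design from the presentation.
# A compact CNN that maps 32x32 traffic-sign crops to class logits
# (43 classes, as in the public GTSRB dataset).
import torch
import torch.nn as nn

class SignNet(nn.Module):
    def __init__(self, num_classes=43):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),   # 32x32 -> 16x16
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),  # 16x16 -> 8x8
        )
        self.head = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):                      # x: (N, 3, 32, 32) sign crops
        return self.head(self.backbone(x).flatten(1))

model = SignNet()
logits = model(torch.randn(8, 3, 32, 32))      # a dummy batch of crops
print(logits.shape)                            # torch.Size([8, 43])
```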

“Deep Learning Beyond Cats and Cars: Developing a Real-life DNN-based Embedded Vision Product for Agriculture, Construction, Medical, or Retail,” a Presentation from Luxoft

Alexey Rybakov, Senior Director for Embedded Systems at Luxoft, presents the "Deep Learning Beyond Cats and Cars: Developing a Real-life DNN-based Embedded Vision Product for Agriculture, Construction, Medical, or Retail" tutorial at the May 2017 Embedded Vision Summit. By now we know very well how to design and train a neural network to recognize cats, […]

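A common route from "recognizing cats" to a domain-specific product is to reuse a pretrained backbone and fine-tune only a new head on the target data. The sketch below is my own illustration of that pattern, not the workflow presented in the talk; it assumes a recent torchvision, an ImageNet-pretrained ResNet-18, and a hypothetical five-class agriculture task.

```python
# Hedged sketch, not from the talk: fine-tuning a pretrained backbone for a
# hypothetical domain task (e.g. five crop-disease classes in agriculture).
import torch
import torch.nn as nn
from torchvision import models

num_domain_classes = 5                                   # hypothetical target classes
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():                          # freeze the generic features
    p.requires_grad = False
backbone.fc = nn.Linear(backbone.fc.in_features, num_domain_classes)  # new trainable head

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# one illustrative training step on dummy data
images = torch.randn(4, 3, 224, 224)
targets = torch.randint(0, num_domain_classes, (4,))
loss = criterion(backbone(images), targets)
loss.backward()
optimizer.step()
print(float(loss))
```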

BrainChip Adds Thomas Stengel as Vice President of Americas Business Development

Leadership Team in Place to Drive Sales of new AI-Based BrainChip Studio Video Analytic Solutions. Highlights: Tom Stengel joins as VP of Business Development for the Americas as the Company launches BrainChip Studio, a commercially available integrated software suite for pattern and facial recognition analytics. BrainChip expands its sales organization with a 30-year industry veteran […]

“Training CNNs for Efficient Inference,” a Presentation from Imagination Technologies

Paul Brasnett, Principal Research Engineer at Imagination Technologies, presents the "Training CNNs for Efficient Inference" tutorial at the May 2017 Embedded Vision Summit. Key challenges to the successful deployment of CNNs in embedded markets lie in addressing the compute, bandwidth and power requirements. Typically, for mobile devices, the problem lies in the inference, since the […]

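As one concrete flavour of the reduced-precision ideas this topic touches on, here is a minimal post-training weight-quantization sketch. It is my own illustration of symmetric 8-bit quantization, not the training approach described in the presentation; storing conv weights as int8 plus a scale cuts their memory footprint roughly 4x versus float32.

```python
# Minimal illustration (not Imagination's method): symmetric per-tensor
# 8-bit weight quantization, the kind of precision reduction that lowers the
# bandwidth and compute cost of CNN inference on embedded targets.
import torch

def quantize_int8(w: torch.Tensor):
    """Map float weights to int8 values plus a single scale factor."""
    scale = w.abs().max().clamp(min=1e-8) / 127.0
    q = torch.clamp(torch.round(w / scale), -127, 127).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return q.to(torch.float32) * scale

weights = torch.randn(32, 16, 3, 3)            # e.g. one conv layer's weights
q, scale = quantize_int8(weights)
error = (dequantize(q, scale) - weights).abs().mean()
print(f"int8 storage: {q.numel()} bytes vs float32: {weights.numel() * 4} bytes; "
      f"mean abs error {error.item():.4f}")
```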

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411