Videos

“A Fast Object Detector for ADAS using Deep Learning,” a Presentation from Panasonic

Minyoung Kim, Senior Research Engineer at Panasonic Silicon Valley Laboratory, presents the "A Fast Object Detector for ADAS using Deep Learning" tutorial at the May 2017 Embedded Vision Summit. Object detection has been one of the most important research areas in computer vision for decades. Recently, deep neural networks (DNNs) have led to significant improvement […]

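For readers who want a concrete feel for DNN-based object detection of the kind discussed in this talk, below is a minimal sketch that runs a generic pre-trained SSD-style detector through OpenCV's dnn module. The model files ("deploy.prototxt", "ssd.caffemodel"), input image, and confidence threshold are placeholder assumptions for illustration; this is not Panasonic's ADAS detector.

```python
# Minimal sketch: generic SSD-style object detection via OpenCV's dnn module.
# Model files and thresholds are placeholders, not Panasonic's ADAS detector.
import cv2
import numpy as np

net = cv2.dnn.readNetFromCaffe("deploy.prototxt", "ssd.caffemodel")
frame = cv2.imread("road_scene.jpg")
h, w = frame.shape[:2]

# SSD-style preprocessing: resize to 300x300 and subtract the dataset mean.
blob = cv2.dnn.blobFromImage(frame, 1.0, (300, 300), (104.0, 117.0, 123.0))
net.setInput(blob)
detections = net.forward()  # shape (1, 1, N, 7): [_, class_id, conf, x1, y1, x2, y2]

for i in range(detections.shape[2]):
    confidence = float(detections[0, 0, i, 2])
    if confidence < 0.5:  # assumed confidence threshold
        continue
    # Box coordinates are normalized to [0, 1]; scale back to pixels.
    box = (detections[0, 0, i, 3:7] * np.array([w, h, w, h])).astype(int)
    x1, y1, x2, y2 = box.tolist()
    cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)

cv2.imwrite("detections.jpg", frame)
```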

“Unsupervised Everything,” a Presentation from Panasonic

Luca Rigazio, Director of Engineering for the Panasonic Silicon Valley Laboratory, presents the "Unsupervised Everything" tutorial at the May 2017 Embedded Vision Summit. The large amount of multi-sensory data available for autonomous intelligent systems is just astounding. The power of deep architectures to model these practically unlimited datasets is limited by only two factors: computational […]


Luxoft Demonstration of Its Machine Learning Platform Toolkit

Ihor Starepravo, Embedded Practice Director at Luxoft, demonstrates the company's latest embedded vision technologies and products at the May 2017 Embedded Vision Summit. Specifically, Starepravo demonstrates how a machine learning platform identifies multiple faces it has already "seen" before. This technology includes all components necessary for multiple face recognition as well as a data pipeline […]

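As a rough illustration of how a system can recognize faces it has already "seen", the sketch below matches a query face embedding against a gallery of enrolled embeddings using cosine similarity. The embedding size, names, and threshold are invented for the example and are not details of Luxoft's toolkit.

```python
# Minimal sketch: recognizing previously enrolled faces by comparing CNN
# face embeddings with cosine similarity. All values here are made up.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(query: np.ndarray, gallery: dict, threshold: float = 0.6) -> str:
    """Return the name of the closest enrolled face, or 'unknown'."""
    best_name, best_score = "unknown", threshold
    for name, enrolled in gallery.items():
        score = cosine_similarity(query, enrolled)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Example: two enrolled identities and one query embedding. Random vectors
# stand in for the 128-D descriptors a face-embedding CNN would produce.
rng = np.random.default_rng(0)
gallery = {"alice": rng.normal(size=128), "bob": rng.normal(size=128)}
query = gallery["alice"] + 0.05 * rng.normal(size=128)
print(identify(query, gallery))  # -> alice
```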

Luxoft Demonstration of an Optimized Stereo Depth Map

Ihor Starepravo, Embedded Practice Director at Luxoft, demonstrates the company's latest embedded vision technologies and products at the May 2017 Embedded Vision Summit. Specifically, Starepravo demonstrates how an embedded system platform extracts a depth map out of what’s being filmed. This complex process is done in real time, allowing devices to understand complex dynamic 3D […]

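To give a sense of what goes into extracting a depth map from a stereo pair, here is a minimal sketch using OpenCV's block matcher on rectified images. The file names, matcher parameters, and calibration values are assumptions for illustration; Luxoft's optimized real-time embedded implementation is not shown.

```python
# Minimal sketch: disparity and depth from a rectified stereo pair using
# OpenCV's block matcher. File names and calibration values are placeholders.
import cv2
import numpy as np

# Rectified left/right grayscale frames (hypothetical file names).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matcher: the disparity search range must be a multiple of 16.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)  # int16, scaled by 16

# Depth is inversely proportional to disparity: Z = f * B / d, where f is the
# focal length in pixels and B the camera baseline in meters.
focal_px, baseline_m = 700.0, 0.12          # hypothetical calibration
disp = np.maximum(disparity.astype(np.float32) / 16.0, 0.1)  # clamp invalid
depth_m = (focal_px * baseline_m) / disp

out = cv2.normalize(depth_m, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite("depth.png", out)
```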

“How 3D Maps Will Change the World,” a Presentation from Augmented Pixels

Vitaliy Goncharuk, CEO and Founder of Augmented Pixels, presents the "How 3D Maps Will Change the World" tutorial at the May 2017 Embedded Vision Summit. In the very near future, cars, robots, mobile phones and augmented reality glasses will incorporate inexpensive and efficient depth sensing. This will quickly bring us to a new world in […]


“Designing a Vision-based, Solar-powered Rear Collision Warning System,” a Presentation from Pearl Automation

Aman Sikka, Vision System Architect at Pearl Automation, presents the "Designing a Vision-based, Solar-powered Rear Collision Warning System" tutorial at the May 2017 Embedded Vision Summit. Bringing vision algorithms into mass production requires carefully balancing trade-offs between accuracy, performance, usability, and system resources. In this talk, Sikka describes the vision algorithms along with the system […]


Imagination Technologies Demonstration of PowerVR GPUs’ Versatility for Running CNNs

Chris Longstaff, Senior Director of Marketing at Imagination Technologies, demonstrates the company's latest embedded vision technologies and products at the May 2017 Embedded Vision Summit. Specifically, Longstaff demonstrates the versatility of the company's PowerVR GPUs for running CNNs…or as he likes to call it, "the banana detector, evolved." In the demo, he shows the PowerVR GPU successfully […]


Imagination Technologies Demonstration of the OpenVX Neural Network Extension on a PowerVR GPU

Chris Longstaff, Senior Director of Marketing at Imagination Technologies, demonstrates the company's latest embedded vision technologies and products at the May 2017 Embedded Vision Summit. Specifically, Longstaff demonstrates an OpenVX neural network extension running on a PowerVR GPU. The network learns to count to 9; the demo shows how he can practice drawing the digits on […]


“Designing a Wearable Imaging Device – for Mice,” a Presentation from Inscopix

Shung Chieh, Vice President of Engineering at Inscopix, presents the "Designing a Wearable Imaging Device – for Mice" tutorial at the May 2017 Embedded Vision Summit. Extremely small imaging systems can enable applications in diverse areas such as healthcare, manufacturing and wearable devices. In this presentation, Chieh explores the challenges encountered in designing one such […]


“Designing a Stereo IP Camera From Scratch,” a Presentation from ELVEES

Anton Leontiev, Embedded Software Architect at ELVEES, JSC, presents the "Designing a Stereo IP Camera From Scratch" tutorial at the May 2017 Embedded Vision Summit. As the number of cameras in an intelligent video surveillance system increases, server processing of the video quickly becomes a bottleneck. On the other hand, when computer vision algorithms are […]


