Fundamentals

“Approaches for Energy Efficient Implementation of Deep Neural Networks,” a Presentation from MIT

Vivienne Sze, Associate Professor at MIT, presents the “Approaches for Energy Efficient Implementation of Deep Neural Networks” tutorial at the May 2018 Embedded Vision Summit. Deep neural networks (DNNs) are proving very effective for a variety of challenging machine perception tasks. But these algorithms are very computationally demanding. To enable DNNs to be used in […]

“Understanding Automotive Radar: Present and Future,” a Presentation from NXP Semiconductors

Arunesh Roy, Radar Algorithms Architect at NXP Semiconductors, presents the “Understanding Automotive Radar: Present and Future” tutorial at the May 2018 Embedded Vision Summit. Thanks to its proven, all-weather range detection capability, radar is increasingly used for driver assistance functions such as automatic emergency braking and adaptive cruise control. Radar is considered a crucial sensing […]

“Visual-Inertial Tracking for AR and VR,” a Presentation from Meta

Timo Ahonen, Director of Engineering for Computer Vision at Meta, presents the “Visual-Inertial Tracking for AR and VR” tutorial at the May 2018 Embedded Vision Summit. This tutorial covers the main current approaches to solving the problem of tracking the motion of a display for AR and VR use cases. Ahonen covers methods for inside-out […]

“Bad Data, Bad Network, or: How to Create the Right Dataset for Your Application,” a Presentation from AMD

Mike Schmit, Director of Software Engineering for computer vision and machine learning at AMD, presents the “Bad Data, Bad Network, or: How to Create the Right Dataset for Your Application” tutorial at the May 2018 Embedded Vision Summit. When training deep neural networks, having the right training data is key. In this talk, Schmit explores […]

“Understanding and Implementing Face Landmark Detection and Tracking,” a Presentation from PathPartner Technology

Jayachandra Dakala, Technical Architect at PathPartner Technology, presents the “Understanding and Implementing Face Landmark Detection and Tracking” tutorial at the May 2018 Embedded Vision Summit. Face landmark detection is of profound interest in computer vision, because it enables tasks ranging from facial expression recognition to understanding human behavior. Face landmark detection and tracking can be […]

“From Feature Engineering to Network Engineering,” a Presentation from ShatterLine Labs and AMD

Auro Tripathy, Founding Principal at ShatterLine Labs (representing AMD), presents the “From Feature Engineering to Network Engineering” tutorial at the May 2018 Embedded Vision Summit. The availability of large labeled image datasets is tilting the balance in favor of “network engineering” instead of “feature engineering.” Hand-designed features dominated recognition tasks in the past, but now features […]

“Depth Cameras: A State-of-the-Art Overview,” a Presentation from Aquifi

Carlo Dal Mutto, CTO of Aquifi, presents the “Depth Cameras: A State-of-the-Art Overview” tutorial at the May 2018 Embedded Vision Summit. In the last few years, depth cameras have reached maturity and are being incorporated in an increasing variety of commercial products. Typical applications span gaming, contactless authentication in smartphones, AR/VR and IoT. State-of-the-art depth […]

“Designing Vision Front Ends for Embedded Systems,” a Presentation from Basler

Friedrich Dierks, Director of Product Marketing and Development for the Module Business at Basler, presents the “Designing Vision Front Ends for Embedded Systems” tutorial at the May 2018 Embedded Vision Summit. This presentation guides viewers through the process of specifying and selecting a vision front end for an embedded system. It covers topics such as […]

“Optimize Performance: Start Your Algorithm Development With the Imaging Subsystem,” a Presentation from Twisthink

Ryan Johnson, Lead Engineer at Twisthink, presents the “Optimize Performance: Start Your Algorithm Development With the Imaging Subsystem” tutorial at the May 2018 Embedded Vision Summit. Image sensor and algorithm performance are rapidly increasing, and software and hardware development tools are making embedded vision systems easier to develop. Even with these advancements, optimizing vision-based detection […]

“Introduction to Creating a Vision Solution in the Cloud,” a Presentation from GumGum

Nishita Sant, Computer Vision Scientist at GumGum, presents the “Introduction to Creating a Vision Solution in the Cloud” tutorial at the May 2018 Embedded Vision Summit. A growing number of applications utilize cloud computing for execution of computer vision algorithms. In this presentation, Sant introduces the basics of creating a cloud-based vision service, based on […]
