Embedded Vision Summit 2020

“Designing Bespoke CNNs for Target Hardware,” a Presentation from StradVision

Woonhyun Nam, Algorithms Director at StradVision, presents the “Designing Bespoke CNNs for Target Hardware” tutorial at the September 2020 Embedded Vision Summit. Due to the great success of deep neural networks (DNNs) in computer vision and other machine learning applications, numerous specialized processors have been developed to execute these algorithms with reduced cost and power […]

“Tackling Extreme Visual Conditions for Autonomous UAVs In the Wild,” a Presentation from Skydio

Hayk Martiros, Head of Autonomy at Skydio, presents the “Tackling Extreme Visual Conditions for Autonomous UAVs In the Wild” tutorial at the September 2020 Embedded Vision Summit. Skydio ships autonomous robots that are flown at scale in complex, unknown environments every day to capture incredible video, automate dangerous inspections and save the lives of first responders.

“Introduction to Simultaneous Localization and Mapping (SLAM),” a Presentation from Skydio

Gareth Cross, Technical Lead for State Estimation at Skydio, presents the “Introduction to Simultaneous Localization and Mapping (SLAM)” tutorial at the September 2020 Embedded Vision Summit. This talk provides an introduction to the fundamentals of simultaneous localization and mapping (SLAM). Cross provides foundational knowledge; viewers are not expected to have any prerequisite experience in the […]

“Multi-modal Re-identification: IOT + Computer Vision for Residential Community Tracking,” a Presentation from Seedland

Kit Thambiratnam, General Manager of the Seedland AI Center, presents the “Multi-modal Re-identification: IOT + Computer Vision for Residential Community Tracking” tutorial at the September 2020 Embedded Vision Summit. The recent COVID-19 outbreak necessitated monitoring in residential communities, such as tracking quarantined residents and close-contact interactions with sick individuals. High-density communities also have […]

“New Methods for Implementation of 2-D Convolution for Convolutional Neural Networks,” a Presentation from Santa Clara University

Tokunbo Ogunfunmi, Professor of Electrical Engineering and Director of the Signal Processing Research Laboratory at Santa Clara University, presents the “New Methods for Implementation of 2-D Convolution for Convolutional Neural Networks” tutorial at the September 2020 Embedded Vision Summit. The increasing usage of convolutional neural networks (CNNs) in various applications on mobile and embedded devices […]
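For orientation, the baseline that talks on 2-D convolution implementation typically start from is the direct (sliding-window) form. The NumPy sketch below is a generic illustration of that baseline, not code from the presentation; the function name `conv2d` and its `stride` parameter are our own choices.

```python
import numpy as np

def conv2d(image, kernel, stride=1):
    """Direct (naive) 2-D convolution: slide the flipped kernel over the image.

    This is the O(H*W*Kh*Kw) reference form that faster methods
    (im2col + GEMM, Winograd, FFT) are measured against.
    """
    kh, kw = kernel.shape
    ih, iw = image.shape
    oh = (ih - kh) // stride + 1   # output height, no padding
    ow = (iw - kw) // stride + 1   # output width, no padding
    flipped = kernel[::-1, ::-1]   # true convolution flips the kernel
    out = np.zeros((oh, ow))
    for y in range(oh):
        for x in range(ow):
            patch = image[y * stride:y * stride + kh,
                          x * stride:x * stride + kw]
            out[y, x] = np.sum(patch * flipped)
    return out

# 3x3 box kernel over a 5x5 ramp image
img = np.arange(25, dtype=float).reshape(5, 5)
k = np.ones((3, 3))
print(conv2d(img, k).shape)  # (3, 3)
```

The inner double loop is exactly the data-movement pattern that specialized CNN implementations restructure, since each input pixel is re-read up to Kh*Kw times.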

“Improving Power Efficiency for Edge Inferencing with Memory Management Optimizations,” a Presentation from Samsung

Nathan Levy, Project Leader at Samsung, presents the “Improving Power Efficiency for Edge Inferencing with Memory Management Optimizations” tutorial at the September 2020 Embedded Vision Summit. In the race to power efficiency for neural network processing, optimizing memory use to reduce data traffic is critical. Many processors have a small local memory (typically SRAM) used […]

“Image-Based Deep Learning for Manufacturing Fault Condition Detection,” a Presentation from Samsung

Jake Lee, Principal Engineer and Head of the Machine Learning Group at Samsung, presents the “Image-Based Deep Learning for Manufacturing Fault Condition Detection” tutorial at the September 2020 Embedded Vision Summit. In this presentation, Lee explores applying deep learning to analyzing manufacturing parameter data to detect fault conditions. The manufacturing parameter data contains multivariate time […]

“Using an ISP for Real-time Data Augmentation,” a Presentation from Pony.AI

Timofey Uvarov, Camera System Lead at Pony.AI, presents the “Using an ISP for Real-time Data Augmentation” tutorial at the September 2020 Embedded Vision Summit. Image signal processors (ISPs) process the raw pixels delivered by image sensors to optimize image quality. In computer vision applications, much attention is focused on […]

“Eye Tracking for the Future,” a Presentation from Parallel Rules

Peter Milford, President of Parallel Rules, presents the “Eye Tracking for the Future” tutorial at the September 2020 Embedded Vision Summit. Eye tracking is an increasingly important technology for applications ranging from augmented and virtual reality head-mounted displays to automotive driver monitoring. In this talk, Milford introduces eye tracking techniques and technical challenges. He also […]

“Introduction to the TVM Open Source Deep Learning Compiler Stack,” a Presentation from the University of Washington

Luis Ceze, a Professor in the Paul G. Allen School of Computer Science and Engineering at the University of Washington, co-founder and CEO of OctoML, and Venture Partner at Madrona Venture Group, presents the “Introduction to the TVM Open Source Deep Learning Compiler Stack” tutorial at the September 2020 Embedded Vision Summit. There is an […]

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411