“Introduction to Simultaneous Localization and Mapping (SLAM),” a Presentation from Skydio

Gareth Cross, Technical Lead for State Estimation at Skydio, presents the “Introduction to Simultaneous Localization and Mapping (SLAM)” tutorial at the September 2020 Embedded Vision Summit. This talk introduces the fundamentals of SLAM. Cross provides foundational knowledge; viewers are not expected to have any prerequisite experience in the […]

“Multi-modal Re-identification: IOT + Computer Vision for Residential Community Tracking,” a Presentation from Seedland

Kit Thambiratnam, General Manager of the Seedland AI Center, presents the “Multi-modal Re-identification: IOT + Computer Vision for Residential Community Tracking” tutorial at the September 2020 Embedded Vision Summit. The COVID-19 outbreak created a need for community monitoring, such as tracking quarantined residents and close-contact interactions with sick individuals. High-density communities also have […]

“New Methods for Implementation of 2-D Convolution for Convolutional Neural Networks,” a Presentation from Santa Clara University

Tokunbo Ogunfunmi, Professor of Electrical Engineering and Director of the Signal Processing Research Laboratory at Santa Clara University, presents the “New Methods for Implementation of 2-D Convolution for Convolutional Neural Networks” tutorial at the September 2020 Embedded Vision Summit. The increasing usage of convolutional neural networks (CNNs) in various applications on mobile and embedded devices […]
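The talk's subject, 2-D convolution as used in CNN layers, can be illustrated with a minimal direct (sliding-window) implementation. This NumPy sketch is illustrative only and is not taken from the presentation; the function name `conv2d` is our own:

```python
import numpy as np

def conv2d(image, kernel):
    """Direct 2-D convolution (cross-correlation, as CNN layers
    compute it), with 'valid' padding and stride 1."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1  # output height
    ow = image.shape[1] - kw + 1  # output width
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Elementwise multiply the kernel-sized window and sum
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out
```

This naive double loop costs O(H·W·kh·kw) multiply-accumulates per channel; methods like the ones the talk's title refers to aim to reduce that cost relative to this baseline.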

“Improving Power Efficiency for Edge Inferencing with Memory Management Optimizations,” a Presentation from Samsung

Nathan Levy, Project Leader at Samsung, presents the “Improving Power Efficiency for Edge Inferencing with Memory Management Optimizations” tutorial at the September 2020 Embedded Vision Summit. In the race to power efficiency for neural network processing, optimizing memory use to reduce data traffic is critical. Many processors have a small local memory (typically SRAM) used […]

“Image-Based Deep Learning for Manufacturing Fault Condition Detection,” a Presentation from Samsung

Jake Lee, Principal Engineer and Head of the Machine Learning Group at Samsung, presents the “Image-Based Deep Learning for Manufacturing Fault Condition Detection” tutorial at the September 2020 Embedded Vision Summit. In this presentation, Lee explores applying deep learning to the analysis of manufacturing parameter data to detect fault conditions. The manufacturing parameter data contains multivariate time […]

“Using an ISP for Real-time Data Augmentation,” a Presentation from Pony.AI

Timofey Uvarov, Camera System Lead at Pony.AI, presents the “Using an ISP for Real-time Data Augmentation” tutorial at the September 2020 Embedded Vision Summit. Image signal processors (ISPs) are tasked with processing raw pixels delivered by image sensors in order to optimize the quality of images. In computer vision applications, much attention is focused on […]

“Eye Tracking for the Future,” a Presentation from Parallel Rules

Peter Milford, President of Parallel Rules, presents the “Eye Tracking for the Future” tutorial at the September 2020 Embedded Vision Summit. Eye tracking is an increasingly important technology for applications ranging from augmented and virtual reality head-mounted displays to automotive driver monitoring. In this talk, Milford introduces eye tracking techniques and technical challenges. He also […]

“Introduction to the TVM Open Source Deep Learning Compiler Stack,” a Presentation from the University of Washington

Luis Ceze, a Professor in the Paul G. Allen School of Computer Science and Engineering at the University of Washington, co-founder and CEO of OctoML, and Venture Partner at Madrona Venture Group, presents the “Introduction to the TVM Open Source Deep Learning Compiler Stack” tutorial at the September 2020 Embedded Vision Summit. There is an […]

“Imaging Systems for Applied Reinforcement Learning Control,” a Presentation from Nanotronics

Damas Limoge, Senior R&D Engineer at Nanotronics, presents the “Imaging Systems for Applied Reinforcement Learning Control” tutorial at the September 2020 Embedded Vision Summit. Reinforcement learning has generated human-level decision-making strategies in highly complex game scenarios. But most industries, such as manufacturing, have not seen impressive results from the application of these algorithms, belying the […]

“MLPerf: An Industry Standard Performance Benchmark Suite for Machine Learning,” a Presentation from Facebook and Arizona State University

Carole-Jean Wu, Research Scientist at Facebook AI Research and an Associate Professor at Arizona State University, presents the “MLPerf: An Industry Standard Performance Benchmark Suite for Machine Learning” tutorial at the September 2020 Embedded Vision Summit. The rapid growth in the use of DNNs has spurred the development of numerous specialized processor architectures and software […]
