Tools

“How Simulation Accelerates Development of Self-Driving Technology,” a Presentation from AImotive

László Kishonti, founder and CEO of AImotive, presents the “How Simulation Accelerates Development of Self-Driving Technology” tutorial at the May 2018 Embedded Vision Summit. Virtual testing, as discussed by Kishonti in this presentation, is the only solution that scales to address the billions of miles of testing required to make autonomous vehicles robust. However, integrating […]

“Recognizing Novel Objects in Novel Surroundings with Single-shot Detectors,” a Presentation from the University of North Carolina at Chapel Hill

Alexander C. Berg, Associate Professor at the University of North Carolina at Chapel Hill and CTO of Shopagon, presents the “Recognizing Novel Objects in Novel Surroundings with Single-shot Detectors” tutorial at the May 2018 Embedded Vision Summit. Berg’s group’s 2016 work on single-shot object detection (SSD) reduced the computation cost for accurate detection of objects […]
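
For context on what a single-shot detector does in practice, here is a minimal sketch, not drawn from the presentation, that runs torchvision's pretrained SSD300 model on a hypothetical image; the image path, weights string, and score threshold are assumptions.

```python
# Minimal single-shot detection (SSD) sketch using torchvision's pretrained model.
# Illustrative only; this is not the presenter's original implementation.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Load an SSD300 model pretrained on COCO (weights="DEFAULT" assumes torchvision >= 0.13).
model = torchvision.models.detection.ssd300_vgg16(weights="DEFAULT")
model.eval()

image = Image.open("street.jpg").convert("RGB")   # hypothetical input image
batch = [to_tensor(image)]                        # SSD expects a list of CHW float tensors

with torch.no_grad():
    detections = model(batch)[0]                  # dict with 'boxes', 'labels', 'scores'

# Boxes and class scores come out of a single forward pass -- the "single shot".
for box, label, score in zip(detections["boxes"], detections["labels"], detections["scores"]):
    if score > 0.5:
        print(int(label), float(score), [round(v, 1) for v in box.tolist()])
```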

“Building a Typical Visual SLAM Pipeline,” a Presentation from Virgin Hyperloop One

YoungWoo Seo, Senior Director at Virgin Hyperloop One, presents the “Building a Typical Visual SLAM Pipeline” tutorial at the May 2018 Embedded Vision Summit. Maps are important for both human and robot navigation. SLAM (simultaneous localization and mapping) is one of the core techniques for map-based navigation. As SLAM algorithms have matured and hardware has […]
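
As a rough orientation to what such a pipeline contains, the sketch below (an illustration, not the presenter's code) runs the classic monocular front-end steps with OpenCV: feature extraction, matching, essential-matrix pose recovery, and triangulation. The camera intrinsics and frame filenames are assumed.

```python
# Sketch of the front end of a typical visual SLAM pipeline with OpenCV:
# feature extraction -> matching -> relative pose -> triangulation.
import cv2
import numpy as np

K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0,   0.0,   1.0]])          # assumed pinhole intrinsics

img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Relative camera motion from the essential matrix (RANSAC rejects outlier matches).
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# Triangulate matched points into a sparse local map (up to scale for monocular input).
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
points4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
points3d = (points4d[:3] / points4d[3]).T
print("Recovered", len(points3d), "landmarks; relative rotation:\n", R)
```

A full SLAM system adds a back end (bundle adjustment or pose-graph optimization) and loop closure on top of this front end.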

“Developing Computer Vision Algorithms for Networked Cameras,” a Presentation from Intel

Dukhwan Kim, computer vision software architect at Intel, presents the “Developing Computer Vision Algorithms for Networked Cameras” tutorial at the May 2018 Embedded Vision Summit. Video analytics is one of the key elements of networked cameras. Computer vision capabilities such as pedestrian detection, face detection and recognition, and object detection and tracking are necessary for […]
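
As a toy example of this kind of analytics, the sketch below (not from the talk) runs OpenCV's built-in HOG pedestrian detector on a camera stream; the RTSP URL is a placeholder.

```python
# Illustrative pedestrian-detection loop for a network camera stream using
# OpenCV's built-in HOG people detector.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture("rtsp://camera.local/stream1")   # assumed camera endpoint

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Detect people in the current frame at multiple scales.
    boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
    for (x, y, w, h) in boxes:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("analytics", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```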

“Visual-Inertial Tracking for AR and VR,” a Presentation from Meta

Timo Ahonen, Director of Engineering for Computer Vision at Meta, presents the “Visual-Inertial Tracking for AR and VR” tutorial at the May 2018 Embedded Vision Summit. This tutorial covers the main current approaches to solving the problem of tracking the motion of a display for AR and VR use cases. Ahonen covers methods for inside-out […]
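
To give a flavor of the fusion idea, the toy sketch below (not from the talk) blends a noisy, biased gyroscope with low-rate visual heading measurements using a one-axis complementary filter. Real AR/VR trackers estimate full 6-DoF pose, typically with an EKF or sliding-window optimizer; all numbers here are made up.

```python
# Toy visual-inertial fusion for one rotation axis: integrate the gyroscope at
# high rate, and correct drift with lower-rate "visual" angle measurements.
import numpy as np

rng = np.random.default_rng(0)
dt, n = 0.005, 2000                                 # 200 Hz IMU, 10 s of data
true_rate = 0.5                                     # rad/s constant rotation (assumed)
gyro = true_rate + 0.02 + rng.normal(0, 0.01, n)    # measured rate with bias + noise

alpha = 0.98                                        # trust in the gyro between visual updates
angle = 0.0
for k in range(n):
    angle += gyro[k] * dt                           # high-rate inertial prediction
    if k % 20 == 0:                                 # 10 Hz visual heading measurement
        visual_angle = true_rate * k * dt + rng.normal(0, 0.01)
        angle = alpha * angle + (1 - alpha) * visual_angle   # drift correction

print("fused estimate: %.3f rad, ground truth: %.3f rad" % (angle, true_rate * n * dt))
```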

“Bad Data, Bad Network, or: How to Create the Right Dataset for Your Application,” a Presentation from AMD

Mike Schmit, Director of Software Engineering for computer vision and machine learning at AMD, presents the “Bad Data, Bad Network, or: How to Create the Right Dataset for Your Application” tutorial at the May 2018 Embedded Vision Summit. When training deep neural networks, having the right training data is key. In this talk, Schmit explores […]
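
As a small illustration of the kind of dataset hygiene the title alludes to, the sketch below (not from the talk) checks class balance and performs a stratified train/validation split on synthetic labels.

```python
# Simple dataset sanity checks: class balance and a stratified train/val split.
from collections import Counter
import random

random.seed(0)
# Hypothetical labels for an image dataset with a heavy class imbalance.
labels = ["car"] * 9000 + ["pedestrian"] * 800 + ["cyclist"] * 200

counts = Counter(labels)
total = sum(counts.values())
for cls, n in counts.most_common():
    print(f"{cls:10s} {n:6d}  ({100.0 * n / total:.1f}%)")   # spot skewed classes early

# Stratified split so every class keeps the same ratio in train and validation.
train, val = [], []
for cls in counts:
    items = [i for i, y in enumerate(labels) if y == cls]
    random.shuffle(items)
    cut = int(0.8 * len(items))
    train += items[:cut]
    val += items[cut:]
print(f"train={len(train)}  val={len(val)}")
```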

“Understanding and Implementing Face Landmark Detection and Tracking,” a Presentation from PathPartner Technology

Jayachandra Dakala, Technical Architect at PathPartner Technology, presents the “Understanding and Implementing Face Landmark Detection and Tracking” tutorial at the May 2018 Embedded Vision Summit. Face landmark detection is of profound interest in computer vision, because it enables tasks ranging from facial expression recognition to understanding human behavior. Face landmark detection and tracking can be […]
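
For readers who want to experiment, the sketch below (not from the presentation) shows one common open-source route to face landmarks using dlib's 68-point shape predictor; the model file and image path are assumptions, and the pretrained predictor must be downloaded separately.

```python
# Illustrative face landmark detection: detect faces, then fit 68 landmarks per face.
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # assumed path

image = cv2.imread("face.jpg")                    # hypothetical input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

for face in detector(gray):                       # face bounding boxes
    shape = predictor(gray, face)                 # 68 landmark points inside each box
    for i in range(shape.num_parts):
        p = shape.part(i)
        cv2.circle(image, (p.x, p.y), 2, (0, 255, 0), -1)

cv2.imwrite("face_landmarks.jpg", image)
```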

“From Feature Engineering to Network Engineering,” a Presentation from ShatterLine Labs and AMD

Auro Tripathy, Founding Principal at ShatterLine Labs (representing AMD), presents the “From Feature Engineering to Network Engineering” tutorial at the May 2018 Embedded Vision Summit. The availability of large labeled image datasets is tilting the balance in favor of “network engineering” instead of “feature engineering”. Hand-designed features dominated recognition tasks in the past, but now features […]
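
To make the contrast concrete, the sketch below (not from the talk) computes a hand-engineered HOG descriptor and a learned ResNet-18 embedding for the same image; the libraries, weights string, and image path are assumptions.

```python
# Hand-engineered features (HOG) versus learned features (pretrained CNN activations).
import numpy as np
import torch
import torchvision
from torchvision import transforms
from skimage.feature import hog
from PIL import Image

image = Image.open("sample.jpg").convert("RGB").resize((224, 224))

# Feature engineering: histogram-of-oriented-gradients descriptor, designed by hand.
hog_vec = hog(np.array(image.convert("L")), orientations=9,
              pixels_per_cell=(8, 8), cells_per_block=(2, 2))

# Network engineering: features learned from data by a pretrained ResNet
# (weights="DEFAULT" assumes torchvision >= 0.13), taken before the classifier.
resnet = torchvision.models.resnet18(weights="DEFAULT")
resnet.fc = torch.nn.Identity()           # drop the final classification head
resnet.eval()
preprocess = transforms.Compose([transforms.ToTensor(),
                                 transforms.Normalize([0.485, 0.456, 0.406],
                                                      [0.229, 0.224, 0.225])])
with torch.no_grad():
    cnn_vec = resnet(preprocess(image).unsqueeze(0)).squeeze(0).numpy()

print("HOG feature length:", hog_vec.shape[0])    # fixed by the designer's choices
print("CNN feature length:", cnn_vec.shape[0])    # fixed by the learned architecture
```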

“Data-driven Business Models Enabled by 3D Vision Technology,” a Presentation from FRAMOS

Christopher Scheubel, Head of IP and Business Development at FRAMOS, presents the “Data-driven Business Models Enabled by 3D Vision Technology” tutorial at the May 2018 Embedded Vision Summit. This presentation describes which applications are enabled by low-cost 3D vision technology, such as home robotics, smart cities/communities and drones for precision farming, and which business models […]

“What is Neuromorphic Event-based Computer Vision? Sensors, Theory and Applications,” a Presentation from Ryad B. Benosman

Ryad B. Benosman, Professor at the University of Pittsburgh Medical Center, Carnegie Mellon University and Sorbonne Université, presents the “What is Neuromorphic Event-based Computer Vision? Sensors, Theory and Applications” tutorial at the May 2018 Embedded Vision Summit. In this presentation, Benosman introduces neuromorphic, event-based approaches for image sensing and processing. State-of-the-art image sensors suffer from […]
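
As a minimal illustration of the event-based data model (not from the presentation), the sketch below generates a synthetic stream of (x, y, timestamp, polarity) events and accumulates a 10 ms slice into an "event frame"; all values are synthetic.

```python
# Toy event-camera data model: instead of full frames, the sensor emits
# (x, y, timestamp, polarity) events when a pixel's brightness changes.
import numpy as np

H, W = 240, 320
rng = np.random.default_rng(1)

# Synthetic event stream: random pixels firing with +1 (brighter) or -1 (darker).
n_events = 50_000
x = rng.integers(0, W, n_events)
y = rng.integers(0, H, n_events)
t = np.sort(rng.uniform(0.0, 0.1, n_events))      # timestamps within a 100 ms window
p = rng.choice([-1, 1], n_events)                  # polarity of each brightness change

# Accumulate events from a short time slice into a 2D histogram ("event frame").
mask = (t >= 0.0) & (t < 0.01)                     # first 10 ms of the stream
frame = np.zeros((H, W), dtype=np.int32)
np.add.at(frame, (y[mask], x[mask]), p[mask])      # signed per-pixel event counts

print("events in slice:", int(mask.sum()), "non-zero pixels:", int((frame != 0).sum()))
```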
