Videos

Videos on Edge AI and Visual Intelligence

We hope that the compelling AI and visual intelligence case studies that follow will both entertain and inspire you, and that you’ll regularly revisit this page as new material is added. For more, monitor the News page, where you’ll frequently find video content embedded within the daily writeups.

Alliance Website Videos

“Toward Hardware-agnostic ADAS Implementations for Software-defined Vehicles,” a Presentation from Valeo

Frank Moesle, Software Department Manager at Valeo, presents the “Toward Hardware-agnostic ADAS Implementations for Software-defined Vehicles” tutorial at the May 2025 Embedded Vision Summit. ADAS (advanced-driver assistance systems) software has historically been tightly bound to the underlying system-on-chip (SoC). This software, especially for visual perception, has been extensively optimized for…

Read More »

“Object Detection Models: Balancing Speed, Accuracy and Efficiency,” a Presentation from Union.ai

Sage Elliott, AI Engineer at Union.ai, presents the “Object Detection Models: Balancing Speed, Accuracy and Efficiency” tutorial at the May 2025 Embedded Vision Summit. Deep learning has transformed many aspects of computer vision, including object detection, enabling accurate and efficient identification of objects in images and videos. However, choosing the…

Read More »

“Depth Estimation from Monocular Images Using Geometric Foundation Models,” a Presentation from Toyota Research Institute

Rareș Ambruș, Senior Manager for Large Behavior Models at Toyota Research Institute, presents the “Depth Estimation from Monocular Images Using Geometric Foundation Models” tutorial at the May 2025 Embedded Vision Summit. In this presentation, Ambruș looks at recent advances in depth estimation from images. He first focuses on the ability…

Read More »

“Introduction to DNN Training: Fundamentals, Process and Best Practices,” a Presentation from Think Circuits

Kevin Weekly, CEO of Think Circuits, presents the “Introduction to DNN Training: Fundamentals, Process and Best Practices” tutorial at the May 2025 Embedded Vision Summit. Training a model is a crucial step in machine learning, but it can be overwhelming for beginners. In this talk, Weekly provides a comprehensive introduction…

Read More »

“Introduction to Depth Sensing: Technologies, Trade-offs and Applications,” a Presentation from Think Circuits

Chris Sarantos, Independent Consultant with Think Circuits, presents the “Introduction to Depth Sensing: Technologies, Trade-offs and Applications” tutorial at the May 2025 Embedded Vision Summit. Depth sensing is a crucial technology for many applications, including robotics, automotive safety and biometrics. In this talk, Sarantos provides an overview of depth sensing…

Read More »

“Lessons Learned Building and Deploying a Weed-killing Robot,” a Presentation from Tensorfield Agriculture

Xiong Chang, CEO and Co-founder of Tensorfield Agriculture, presents the “Lessons Learned Building and Deploying a Weed-Killing Robot” tutorial at the May 2025 Embedded Vision Summit. Agriculture today faces chronic labor shortages and growing challenges around herbicide resistance, as well as consumer backlash to chemical inputs. Smarter, more sustainable approaches…

Read More »

“Transformer Networks: How They Work and Why They Matter,” a Presentation from Synthpop AI

Rakshit Agrawal, Principal AI Scientist at Synthpop AI, presents the “Transformer Networks: How They Work and Why They Matter” tutorial at the May 2025 Embedded Vision Summit. Transformer neural networks have revolutionized artificial intelligence by introducing an architecture built around self-attention mechanisms. This has enabled unprecedented advances in understanding sequential…

Read More »

“Virtual Reality, Machine Learning and Biosensing Advances Converging to Transform Healthcare and Beyond,” an Interview with Stanford University

Walter Greenleaf, Neuroscientist at Stanford University’s Virtual Human Interaction Lab, talks with Tom Vogelsong, Start-Up Scout at K2X Technology and Life Science, for the “Virtual Reality, Machine Learning and Biosensing Advances Converging to Transform Healthcare and Beyond” interview at the May 2025 Embedded Vision Summit. In this wide-ranging interview, Greenleaf…

Read More »

“Understanding Human Activity from Visual Data,” a Presentation from Sportlogiq

Mehrsan Javan, Chief Technology Officer at Sportlogiq, presents the “Understanding Human Activity from Visual Data” tutorial at the May 2025 Embedded Vision Summit. Activity detection and recognition are crucial tasks in various industries, including surveillance and sports analytics. In this talk, Javan provides an in-depth exploration of human activity understanding…

Read More »

“Multimodal Enterprise-scale Applications in the Generative AI Era,” a Presentation from Skyworks Solutions

Mumtaz Vauhkonen, Senior Director of AI at Skyworks Solutions, presents the “Multimodal Enterprise-scale Applications in the Generative AI Era” tutorial at the May 2025 Embedded Vision Summit. As artificial intelligence is making rapid strides in use of large language models, the need for multimodality arises in multiple application scenarios. Similar…

Read More »

Eye-Catching Edge AI and Vision Industry Case Study Clips

eufy Robot Lawn Mower
Google Gemma 3n Open GenAI Model
Gemini Robotics Vision Language Model
Estes Express Lines and Samsara
Autonomous Crop Harvesting
Ray-Ban Meta Smart Glasses
Generative AI and Perceptual AI
Computer Vision in Agriculture
AI-powered Box Loading in Delivery Trucks
School Bus Safety
Autonomous Drones for Package Delivery
Whole-Body Health Tests via Retina Scans
Touchless Self-Checkout Retail System
Personalized-Info Airport Displays
Avoiding Autonomous Vacuuming Hazards
Vision-Enhanced Fitness
Coffee Pod Identification
Aerial Autonomy on Mars
Talking-Head Synthesis and Optimization
Tracking Down Litterers
Autonomous Infrastructure Inspection
AI-controlled Webcam
Underwater Image Enhancement
Colorizing B&W Images and Video
Hand Tracking on VR Headsets
Object ID for the Visually Impaired
Gesture-Based Mobile Device Control
Smart Vehicle Headlights
Insurance Valuation Via CV Image Analysis
Augmented Reality Shopping for Glasses
Blood Analysis for Malaria
Vision-Based Smart Oven
Facial Recognition for Flight Check-In
Diabetes Detection via AI Retina Scans
Vision Health Self-Analysis
Object Recognition for Children
Bossa Nova Robots in Retail
Autonomous Vehicle Parking

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411