Videos

Videos on Edge AI and Visual Intelligence

We hope that the compelling AI and visual intelligence case studies that follow will both entertain and inspire you, and that you’ll regularly revisit this page as new material is added. For more, monitor the News page, where you’ll frequently find video content embedded within the daily writeups.

Alliance Website Videos

May 2024 Embedded Vision Summit Opening Remarks (May 22)

Jeff Bier, Founder of the Edge AI and Vision Alliance, welcomes attendees to the May 2024 Embedded Vision Summit on May 22, 2024. Bier provides an overview of the edge AI and vision market opportunities, challenges, solutions and trends. He also introduces the Edge AI and Vision Alliance and the…

Read More »

“Understand the Multimodal World with Minimal Supervision,” a Keynote Presentation from Yong Jae Lee

Yong Jae Lee, Associate Professor in the Department of Computer Sciences at the University of Wisconsin-Madison and CEO of GivernyAI, presents the “Learning to Understand Our Multimodal World with Minimal Supervision” keynote at the May 2024 Embedded Vision Summit. The field of computer vision is undergoing another profound change. Recently,… (A brief illustrative code sketch follows this entry.)

Read More »
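
The keynote itself is not reproduced here, but to make the “minimal supervision” idea concrete, the sketch below shows zero-shot image classification with a publicly available vision-language model (OpenAI’s CLIP via the Hugging Face transformers library). The model name, image path, and candidate labels are illustrative assumptions, not material from the presentation.

```python
# Zero-shot classification: no task-specific labels or fine-tuning required.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")  # any local photo; the path is a placeholder
candidate_labels = ["a photo of a dog", "a photo of a cat", "a photo of a car"]

inputs = processor(text=candidate_labels, images=image,
                   return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=-1)[0]

for label, p in zip(candidate_labels, probs.tolist()):
    print(f"{label}: {p:.3f}")
```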

“Recent Trends in Industrial Machine Vision: Challenging Times,” a Presentation from the Yole Group

Axel Clouet, Technology and Market Analyst for Imaging at the Yole Group, presents the “Recent Trends in Industrial Machine Vision: Challenging Times” tutorial at the May 2024 Embedded Vision Summit. For decades, cameras have been increasingly used in industrial applications as key components for automation. After two years of rapid…

Read More »

“Camera Interface Standards for Embedded Vision Applications,” an Interview with the MIPI Alliance

Haran Thanigasalam, Camera and Imaging Consultant for the MIPI Alliance, talks with Shung Chieh, Senior Vice President at Eikon Systems, for the “Exploring MIPI Camera Interface Standards for Embedded Vision Applications” interview at the May 2024 Embedded Vision Summit. This insightful interview delves into the relevance and impact of MIPI…

Read More »

“Identifying and Mitigating Bias in AI,” a Presentation from Intel

Nikita Tiwari, AI Enabling Engineer for OEM PC Experiences in the Client Computing Group at Intel, presents the “Identifying and Mitigating Bias in AI” tutorial at the May 2024 Embedded Vision Summit. From autonomous driving to immersive shopping, and from enhanced video collaboration to graphic design, AI is placing a… (A brief illustrative code sketch follows this entry.)

Read More »
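
As a minimal illustration of the kind of analysis this presentation addresses (not Intel’s tooling or methodology), the sketch below computes a model’s accuracy separately for each demographic group in a toy dataset; a large gap between groups is one simple signal of potential bias. The labels and group assignments are made up for the example.

```python
import numpy as np

def per_group_accuracy(y_true, y_pred, groups):
    """Accuracy broken down by group; large gaps can signal bias."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    return {str(g): float((y_pred[groups == g] == y_true[groups == g]).mean())
            for g in np.unique(groups)}

# Toy, made-up predictions purely for illustration.
accuracy = per_group_accuracy(
    y_true=[1, 0, 1, 1, 0, 1, 0, 0],
    y_pred=[1, 0, 0, 1, 0, 1, 0, 0],
    groups=["a", "a", "a", "a", "b", "b", "b", "b"],
)
print(accuracy)  # {'a': 0.75, 'b': 1.0}
gap = max(accuracy.values()) - min(accuracy.values())
print(f"accuracy gap between groups: {gap:.2f}")
```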

“The Fundamentals of Training AI Models for Computer Vision Applications,” a Presentation from GMAC Intelligence

Amit Mate, Founder and CEO of GMAC Intelligence, presents the “Fundamentals of Training AI Models for Computer Vision Applications” tutorial at the May 2024 Embedded Vision Summit. In this presentation, Mate introduces the essential aspects of training convolutional neural networks (CNNs). He discusses the prerequisites for training, including models, data… (A brief illustrative code sketch follows this entry.)

Read More »
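
For readers who want a concrete starting point, the sketch below is a minimal PyTorch training loop for a tiny CNN classifier on synthetic data. The network, shapes, class count and hyperparameters are arbitrary placeholders, not recommendations from the talk.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Synthetic stand-in for a labeled image dataset: 256 RGB images, 8 classes.
images = torch.randn(256, 3, 32, 32)
labels = torch.randint(0, 8, (256,))
loader = DataLoader(TensorDataset(images, labels), batch_size=32, shuffle=True)

# A deliberately small CNN; real projects would use deeper or pretrained models.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 8),
)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(3):
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.3f}")
```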

“An Introduction to Semantic Segmentation,” a Presentation from Au-Zone Technologies

Sébastien Taylor, Vice President of Research and Development for Au-Zone Technologies, presents the “Introduction to Semantic Segmentation” tutorial at the May 2024 Embedded Vision Summit. Vision applications often rely on object detectors, which determine the nature and location of objects in a scene. But many vision applications require a different… (A brief illustrative code sketch follows this entry.)

Read More »
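
To make the contrast with object detection concrete: a detector returns boxes and labels, while semantic segmentation assigns a class to every pixel. The sketch below runs a pretrained DeepLabV3 model from torchvision on a placeholder image; the model choice and random input are illustrative assumptions, not tools discussed in the presentation.

```python
import torch
from torchvision.models.segmentation import (DeepLabV3_ResNet50_Weights,
                                              deeplabv3_resnet50)

weights = DeepLabV3_ResNet50_Weights.DEFAULT
model = deeplabv3_resnet50(weights=weights).eval()
preprocess = weights.transforms()

# A random tensor stands in for a real RGB frame (values in [0, 1]).
frame = torch.rand(3, 480, 640)
batch = preprocess(frame).unsqueeze(0)

with torch.no_grad():
    logits = model(batch)["out"]      # shape (1, num_classes, H, W)

class_map = logits.argmax(dim=1)      # one class ID per pixel, shape (1, H, W)
print(class_map.shape, "classes present:", class_map.unique().tolist())
```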

“Augmenting Visual AI through Radar and Camera Fusion,” a Presentation from Au-Zone Technologies

Sébastien Taylor, Vice President of Research and Development for Au-Zone Technologies, presents the “Augmenting Visual AI through Radar and Camera Fusion” tutorial at the May 2024 Embedded Vision Summit. In this presentation, Taylor discusses well-known limitations of camera-based AI and how radar can be leveraged to address these limitations. He… (A brief illustrative code sketch follows this entry.)

Read More »
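
A common first step in camera–radar fusion is projecting radar detections into the image so they can be associated with camera-based detections. The sketch below does that with a pinhole camera model; the intrinsics, radar points and velocities are made-up values, and a real system would also need radar-to-camera extrinsic calibration. This is not Au-Zone’s pipeline, just an illustration of the projection step.

```python
import numpy as np

# Hypothetical 3x3 camera intrinsics (focal lengths and principal point, in pixels).
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])

# Hypothetical radar returns already expressed in the camera frame:
# columns are (x right, y down, z forward) in meters, plus radial velocity in m/s.
radar_xyz = np.array([[ 1.5, 0.2, 12.0],
                      [-2.0, 0.1, 25.0],
                      [ 0.5, 0.0, 40.0]])
radial_velocity = np.array([-3.2, 0.0, 1.1])

# Pinhole projection: pixel = K @ (X / Z); keep only points in front of the camera.
in_front = radar_xyz[:, 2] > 0
pts = radar_xyz[in_front]
uv = (K @ (pts / pts[:, 2:3]).T).T[:, :2]

for (u, v), vr in zip(uv, radial_velocity[in_front]):
    print(f"pixel=({u:.0f}, {v:.0f})  radial velocity={vr:+.1f} m/s")
```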

“DNN Quantization: Theory to Practice,” a Presentation from AMD

Dwith Chenna, Member of the Technical Staff and Product Engineer for AI Inference at AMD, presents the “DNN Quantization: Theory to Practice” tutorial at the May 2024 Embedded Vision Summit. Deep neural networks, widely used in computer vision tasks, require substantial computation and memory resources, making it challenging to run… (A brief illustrative code sketch follows this entry.)

Read More »
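
As a small companion to the topic (not AMD’s toolchain), the sketch below shows the core arithmetic of post-training int8 quantization: pick a scale (and zero point), round the float weights to 8-bit integers, and measure the round-trip error introduced. The random weight matrix is a placeholder for a real layer’s parameters.

```python
import numpy as np

def quantize_int8(x, symmetric=True):
    """Map float values to int8 using a scale (and zero point if asymmetric)."""
    if symmetric:
        scale = np.abs(x).max() / 127.0
        zero_point = 0
    else:
        scale = (x.max() - x.min()) / 255.0
        zero_point = int(round(-x.min() / scale)) - 128
    q = np.clip(np.round(x / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

weights = np.random.randn(64, 64).astype(np.float32)   # stand-in layer weights
q, scale, zp = quantize_int8(weights)
error = np.abs(dequantize(q, scale, zp) - weights).max()
print(f"scale={scale:.5f}  max abs round-trip error={error:.5f}")
```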

Eye-Catching Edge AI and Vision Industry Case Study Clips

Autonomous Crop Harvesting
Ray-Ban Meta Smart Glasses
Generative AI and Perceptual AI
Computer Vision in Agriculture
AI-powered Box Loading in Delivery Trucks
School Bus Safety
Autonomous Drones for Package Delivery
Whole-Body Health Tests via Retina Scans
Touchless Self-Checkout Retail System
Personalized-Info Airport Displays
Avoiding Autonomous Vacuuming Hazards
Vision-Enhanced Fitness
Coffee Pod Identification
Aerial Autonomy on Mars
Talking-Head Synthesis and Optimization
Tracking Down Litterers
Autonomous Infrastructure Inspection
AI-controlled Webcam
Underwater Image Enhancement
Colorizing B&W Images and Video
Hand Tracking on VR Headsets
Object ID for the Visually Impaired
Gesture-Based Mobile Device Control
Smart Vehicle Headlights
Insurance Valuation via CV Image Analysis
Augmented Reality Shopping for Glasses
Blood Analysis for Malaria
Vision-Based Smart Oven
Facial Recognition for Flight Check-In
Diabetes Detection via AI Retina Scans
Vision Health Self-Analysis
Object Recognition for Children
Bossa Nova Robots in Retail
Autonomous Vehicle Parking

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone

+1 (925) 954-1411