Videos on Edge AI and Visual Intelligence
We hope that the compelling AI and visual intelligence case studies that follow will both entertain and inspire you, and that you’ll regularly revisit this page as new material is added. For more, monitor the News page, where you’ll frequently find video content embedded within the daily writeups.
Alliance Website Videos
“Transformer Networks: How They Work and Why They Matter,” a Presentation from Ryddle AI
Rakshit Agrawal, Co-Founder and CEO of Ryddle AI, presents the “Transformer Networks: How They Work and Why They Matter” tutorial at the May 2024 Embedded Vision Summit. Transformer neural networks have revolutionized artificial intelligence by introducing an architecture built around self-attention mechanisms. This has enabled unprecedented advances in understanding sequential…
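As background to the topic (not material from the presentation itself), the self-attention mechanism at the heart of transformers can be summarized in a few lines. The sketch below is a minimal NumPy illustration of scaled dot-product attention; the array shapes and variable names are illustrative assumptions.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Minimal self-attention sketch: Q, K, V are (seq_len, d_k) arrays."""
    d_k = Q.shape[-1]
    # Similarity of every query with every key, scaled for numerical stability
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax over keys turns scores into attention weights per query
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output token is a weighted mix of the value vectors
    return weights @ V

# Toy usage: 4 tokens with 8-dimensional embeddings (illustrative sizes)
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
out = scaled_dot_product_attention(x, x, x)  # self-attention: Q = K = V = x
print(out.shape)  # (4, 8)
```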
Micron Technology Demonstration of Its DRAM and Flash Memory Product Lines
David Henderson, Industrial Segment Director at Micron Technology, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Henderson showcases his company’s latest DRAM and flash memory products. Demonstrations include: Image recognition and matching on an i.MX 8 Plus-based platform containing Micron’s DRAM, eMMC and serial NOR
“Removing Weather-related Image Degradation at the Edge,” a Presentation from Rivian
Ramit Pahwa, Machine Learning Scientist at Rivian, presents the “Removing Weather-related Image Degradation at the Edge” tutorial at the May 2024 Embedded Vision Summit. For machines that operate outdoors—such as autonomous cars and trucks—image quality degradation due to weather conditions presents a significant challenge. For example, snow, rainfall and raindrops…
“Seeing the Invisible: Unveiling Hidden Details through Advanced Image Acquisition Techniques,” a Presentation from Qualitas Technologies
Raghava Kashyapa, CEO of Qualitas Technologies, presents the “Seeing the Invisible: Unveiling Hidden Details through Advanced Image Acquisition Techniques” tutorial at the May 2024 Embedded Vision Summit. In this presentation, Kashyapa explores how advanced image acquisition techniques reveal previously unseen information, improving the ability of algorithms to provide valuable insights.
“Data-efficient and Generalizable: The Domain-specific Small Vision Model Revolution,” a Presentation from Pixel Scientia Labs
Heather Couture, Founder and Computer Vision Consultant at Pixel Scientia Labs, presents the “Data-efficient and Generalizable: The Domain-specific Small Vision Model Revolution” tutorial at the May 2024 Embedded Vision Summit. Large vision models (LVMs) trained on a large and diverse set of imagery are revitalizing computer vision, just as LLMs…
“Omnilert Gun Detect: Harnessing Computer Vision to Tackle Gun Violence,” a Presentation from Omnilert
Chad Green, Director of Artificial Intelligence at Omnilert, presents the “Omnilert Gun Detect: Harnessing Computer Vision to Tackle Gun Violence” tutorial at the May 2024 Embedded Vision Summit. In the United States in 2023, there were 658 mass shootings, and 42,996 people lost their lives to gun violence. Detecting and…
“Adventures in Moving a Computer Vision Solution from Cloud to Edge,” a Presentation from MetaConsumer
Nate D’Amico, CTO and Head of Product at MetaConsumer, presents the “Adventures in Moving a Computer Vision Solution from Cloud to Edge” tutorial at the May 2024 Embedded Vision Summit. Optix is a computer vision-based AI system that measures advertising and media exposures on mobile devices for real-time marketing optimization.
“Bridging Vision and Language: Designing, Training and Deploying Multimodal Large Language Models,” a Presentation from Meta Reality Labs
Adel Ahmadyan, Staff Engineer at Meta Reality Labs, presents the “Bridging Vision and Language: Designing, Training and Deploying Multimodal Large Language Models” tutorial at the May 2024 Embedded Vision Summit. In this talk, Ahmadyan explores the use of multimodal large language models in real-world edge applications. He begins by explaining…
“Using MIPI CSI to Interface with Multiple Cameras,” a Presentation from Meta
Karthick Kumaran Ayyalluseshagiri Viswanathan, Staff Software Engineer at Meta, presents the “Using MIPI CSI to Interface with Multiple Cameras” tutorial at the May 2024 Embedded Vision Summit. As demand rises for vision capabilities in robotics, virtual/augmented reality, drones and automotive, there’s a growing need for systems to incorporate multiple cameras.
“Introduction to Depth Sensing,” a Presentation from Meta
Harish Venkataraman, Depth Cameras Architecture and Tech Lead at Meta, presents the “Introduction to Depth Sensing” tutorial at the May 2024 Embedded Vision Summit. We live in a three-dimensional world, and the ability to perceive in three dimensions is essential for many systems. In this talk, Venkataraman introduces the main…
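For readers new to the topic (this is general background, not content from the talk), one common way depth is recovered is stereo triangulation, where depth is inversely proportional to the disparity between matched points in two views: Z = f·B/d. A minimal sketch follows, assuming a calibrated stereo pair with focal length given in pixels and baseline in meters; the function and parameter names are illustrative.

```python
def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Stereo triangulation sketch: depth Z = f * B / d.

    disparity_px: pixel shift of a feature between the left and right images
    focal_length_px: camera focal length expressed in pixels
    baseline_m: distance between the two camera centers in meters
    """
    if disparity_px <= 0:
        raise ValueError("Disparity must be positive for a finite depth")
    return focal_length_px * baseline_m / disparity_px

# Illustrative numbers: 700 px focal length, 12 cm baseline, 35 px disparity
print(depth_from_disparity(35, 700, 0.12))  # 2.4 meters
```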