Videos on Edge AI and Visual Intelligence
We hope that the compelling AI and visual intelligence case studies that follow will both entertain and inspire you, and that you’ll regularly revisit this page as new material is added. For more, monitor the News page, where you’ll frequently find video content embedded within the daily writeups.
Alliance Website Videos
![](https://www.edge-ai-vision.com/wp-content/uploads/2023/10/NarcottaJ_SpeakerCard-300x158.jpg)
“Advanced Presence Sensing: What It Means for the Smart Home,” a Presentation from Omdia
Jack Narcotta, Principal Analyst for the Smart Home at Omdia, presents the “Advanced Presence Sensing: What It Means for the Smart Home” tutorial at the May 2023 Embedded Vision Summit. Eventually, homes will become highly autonomous—powered by ubiquitous, connected, intelligent devices (sometimes referred to as ambient computing)—but this remains a distant vision. In the meantime,
![](https://www.edge-ai-vision.com/wp-content/uploads/2023/10/oNOI2l4VxgA-300x169.jpg)
“Tracking and Fusing Diverse Risk Factors to Drive a SAFER Future,” a Presentation from Nauto
Yoav Banin, Chief Product and Business Development Officer, and Tahmida Mahmud, Engineering Manager for Perception, both of Nauto, present the “Tracking and Fusing Diverse Risk Factors to Drive a SAFER Future” tutorial at the May 2023 Embedded Vision Summit. Unless you’re a gang member or drug addict, driving is your top risk. But which risks
![](https://www.edge-ai-vision.com/wp-content/uploads/2023/10/ThanigasalamH_SpeakerCard-300x158.jpg)
“MIPI CSI-2 Image Sensor Interface Standard Features Enable Efficient Embedded Vision Systems,” a Presentation from the MIPI Alliance
Haran Thanigasalam, Camera and Imaging Consultant to the MIPI Alliance, presents the “MIPI CSI-2 Image Sensor Interface Standard Features Enable Efficient Embedded Vision Systems” tutorial at the May 2023 Embedded Vision Summit. As computer vision applications continue to evolve rapidly, there’s a growing need for a smarter standardized interface connecting multiple image sensors to processors
![](https://www.edge-ai-vision.com/wp-content/uploads/2023/09/ThanigasalamM_Talk2_SpeakerCard-300x158.jpg)
“Introduction to the MIPI CSI-2 Image Sensor Interface Standard,” a Presentation from the MIPI Alliance
Haran Thanigasalam, Camera and Imaging Consultant to the MIPI Alliance, presents the “Introduction to the MIPI CSI-2 Image Sensor Interface Standard” tutorial at the May 2023 Embedded Vision Summit. By taking advantage of select features in standardized interfaces, vision system architects can help reduce processor load, cost and power consumption while gaining flexibility to source
![](https://www.edge-ai-vision.com/wp-content/uploads/2023/09/ChennaD_SpeakerCard.jpg)
“Practical Approaches to DNN Quantization,” a Presentation from Magic Leap
Dwith Chenna, Senior Embedded DSP Engineer for Computer Vision at Magic Leap, presents the “Practical Approaches to DNN Quantization” tutorial at the May 2023 Embedded Vision Summit. Convolutional neural networks, widely used in computer vision tasks, require substantial computation and memory resources, making it challenging to run these models on resource-constrained devices. Quantization involves modifying
![](https://www.edge-ai-vision.com/wp-content/uploads/2023/09/DavisT_LoukiliT_SpeakerCard-300x158.jpg)
“Optimizing Image Quality and Stereo Depth at the Edge,” a Presentation from John Deere
Travis Davis, Delivery Manager in the Automation and Autonomy Core, and Tarik Loukili, Technical Lead for Automation and Autonomy Applications, both of John Deere, present the “Optimizing Image Quality and Stereo Depth at the Edge” tutorial at the May 2023 Embedded Vision Summit. John Deere uses machine learning and computer vision (including stereo vision) for challenging outdoor applications
![](https://www.edge-ai-vision.com/wp-content/uploads/2023/10/p0SNA1y3P7w-300x169.jpg)
CircuitSutra Technologies Demonstration of Virtual Prototyping for Pre-silicon Software Development
Umesh Sisodia, President and CEO of CircuitSutra Technologies, demonstrates the company’s latest edge AI and vision technologies and products at the September 2023 Edge AI and Vision Alliance Forum. Specifically, Sisodia demonstrates a virtual prototype of an Arm Cortex-based SoC, developed using SystemC and the CircuitSutra Modelling Library (CSTML). It is able to boot Linux
![](https://www.edge-ai-vision.com/wp-content/uploads/2023/10/OrnS_SpeakerCard-300x158.jpg)
“Using a Collaborative Network of Distributed Cameras for Object Tracking,” a Presentation from Invision AI
Samuel Örn, Team Lead and Senior Machine Learning and Computer Vision Engineer at Invision AI, presents the “Using a Collaborative Network of Distributed Cameras for Object Tracking” tutorial at the May 2023 Embedded Vision Summit. Using multiple fixed cameras to track objects requires a careful solution design. To enable scaling the number of cameras, the
![](https://www.edge-ai-vision.com/wp-content/uploads/2023/10/f6htdZqzaJw.jpg)
ProHawk Technology Group Overview of AI-enabled Computer Vision Restoration
Brent Willis, Chief Operating Officer of the ProHawk Technology Group, demonstrates the company’s latest edge AI and vision technologies and products at the September 2023 Edge AI and Vision Alliance Forum. Specifically, Willis discusses the company’s AI-enabled computer vision restoration technology. ProHawk’s patented algorithms and technologies enable real-time, pixel-by-pixel video restoration, overcoming virtually all environmental
![](https://www.edge-ai-vision.com/wp-content/uploads/2023/10/VqWrG16krqI-300x169.jpg)
DeGirum Demonstration of Streaming Edge AI Development and Deployment
Konstantin Kudryavtsev, Vice President of Software Development at DeGirum, demonstrates the company’s latest edge AI and vision technologies and products at the September 2023 Edge AI and Vision Alliance Forum. Specifically, Kudryavtsev demonstrates streaming edge AI development and deployment using the company’s JavaScript and Python SDKs and its cloud platform. On the software front, DeGirum