Videos on Edge AI and Visual Intelligence
We hope that the compelling AI and visual intelligence case studies that follow will both entertain and inspire you, and that you’ll regularly revisit this page as new material is added. For more, monitor the News page, where you’ll frequently find video content embedded within the daily writeups.
Alliance Website Videos
Chips&Media Demonstration of CMNP, Its Application-specific NPU for Image and Video
Andy Lee, Vice President of US Marketing at Chips&Media, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Lee demonstrates his company’s neural processor, CMNP, which enhances image quality. CMNP boasts two key applications: super-resolution and noise reduction. When these two primary applications are combined…
VeriSilicon Demonstration of Its Secure, Low-power Machine Learning Solutions
Brian Murray, SoC Architect at VeriSilicon, demonstrates the company’s latest edge AI and vision technologies and products in Lattice Semiconductor’s booth at the 2024 Embedded Vision Summit. Specifically, Murray demonstrates the collaborative project Open Se Cura. This open-source platform combines machine learning and secure enclave technology to ensure data safety during processing. VeriSilicon has integrated…
VeriSilicon Demonstration of Its End-to-end Video Transcoding Semiconductor Solutions
Halim Theny, Vice President of Product Engineering at VeriSilicon, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Theny demonstrates an end-to-end video transcoding solution. VeriSilicon supplies IP technology for semiconductor IC companies, offering support from chip design to production testing. This demo features a chip…
VeriSilicon Demonstration of Its Comprehensive Silicon Solution and Edge AI Applications
Halim Theny, Vice President of Product Engineering at VeriSilicon, demonstrates the company’s latest edge AI and vision technologies and products in Lattice Semiconductor’s booth at the 2024 Embedded Vision Summit. Specifically, Theny demonstrates his company’s comprehensive semiconductor technology solutions, providing IP technology for silicon makers to build advanced ICs. The demonstrated chip features an NPU…
NXP Semiconductors Demonstration of Smart Fitness with the i.MX 93 Apps Processor
Manish Bajaj, Systems Engineer at NXP Semiconductors, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Bajaj demonstrates how the i.MX 93 applications processor can run machine learning applications with an Arm Ethos-U65 microNPU to accelerate inference on two simultaneously running deep learning vision-based…
NXP Semiconductors Demonstration of Quad Camera Object Detection with the i.MX 95 Apps Processor
Manish Bajaj, Systems Engineer at NXP Semiconductors, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Bajaj demonstrates the features of the new i.MX 95 applications processor that make it a great edge AI platform. The SoC’s integrated eIQ Neutron NPU implements robust AI inference…
NXP Semiconductors Demonstration of Face Following with the MCX N Microcontroller
Anthony Huereca, Systems Engineer at NXP Semiconductors, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Huereca demonstrates some of the features of his company’s MCX N microcontroller. The AI-based face detection model in this demo enables the screen to react to the movements of the…
Nota AI Demonstration of Transforming Edge AI with the LaunchX Converter and Benchmarker
Tae-Ho Kim, CTO and Co-founder of Nota AI, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Kim demonstrates his company’s LaunchX platform, featuring its powerful Converter and Benchmarker. LaunchX optimizes AI models for edge devices, reducing latency and boosting performance. Practical applications of the Converter…
Nota AI Demonstration of Elevating Traffic Safety with Vision Language Models
Tae-Ho Kim, CTO and Co-founder of Nota AI, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Kim demonstrates his company’s Vision Language Model (VLM) solution, designed to elevate vehicle safety. Advanced models analyze and interpret visual data to prevent accidents and enhance driving experiences. The…
Nota AI Demonstration of Revolutionizing Driver Monitoring Systems
Tae-Ho Kim, CTO and Co-founder of Nota AI, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Kim demonstrates Nota DMS, his company’s state-of-the-art driver monitoring system. The solution enhances driver safety by monitoring attention and detecting drowsiness in real time. Cutting-edge AI techniques make Nota DMS…