Applications
Practical AI and computer vision technology is being developed for systems spanning virtually every industry, and many of these application areas are poised for rapid growth. With trendsetting products demonstrating what is possible, system designers have discovered that suppliers of computer vision technology have removed the barriers to building practical computer vision systems, unleashing a wave of innovation in new products and applications.
“Real-time Retail Product Classification on Android Devices Inside the Caper AI Cart,” a Presentation from Instacart
David Scott, Senior Machine Learning Engineer at Instacart, presents the “Real-time Retail Product Classification on Android Devices Inside the Caper AI Cart” tutorial at the May 2024 Embedded Vision Summit. In this talk, Scott explores deploying an embedded computer vision model on Android devices for real-time product classification with the…
Nextchip Demonstration of a UHD Camera Reference Design Based On Its APACHE_U ISP
Barry Fitzgerald, local representative for Nextchip, demonstrates the company’s latest edge AI and vision technologies and products at the September 2024 Edge AI and Vision Alliance Forum. Specifically, Fitzgerald demonstrates a UHD camera reference design based on the company’s APACHE_U ISP in conjunction with an 8 Mpixel image sensor from fellow Alliance Member company Samsung.
MIPI Alliance Releases A-PHY v2.0, Doubling Maximum Data Rate of Automotive SerDes Interface to Enable Emerging Vehicle Architectures
Industry-leading specification simplifies the integration of image sensors and displays to support next-generation ADAS and ADS applications BRIDGEWATER, N.J., Sept. 26, 2024 — The MIPI Alliance, an international organization that develops interface specifications for mobile and mobile-influenced industries, today announced the release of MIPI A-PHY v2.0, the next version of the automotive high-speed asymmetric serializer-deserializer
Vision Components at VISION 2024: Ultra-compact OEM Module for Triangulation Sensors
Vision Components presents a new, ultra-compact OEM module for the development of custom triangulation sensors for the first time at VISION. Ettlingen, September 26, 2024 – For the first time ever, Vision Components will present a new, ultra-compact OEM module for laser triangulation sensors at VISION 2024 (October 8-10, Messe Stuttgart). It is aimed at
Global Progress and Challenges for Autonomous Buses and Roboshuttles
In recent years, autonomous buses and roboshuttles have hinted at a coming revolution in public transport. These technologies promise to deliver significant cost reductions for operators and to alleviate labor pressures. Over 50 autonomous bus and roboshuttle players once competed in this space. However, as the autonomous driving industry evolved during 2022-2024, the commercialization
What are the Key Camera Features of Drone-based Aerial Surveillance Applications?
This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. Surveillance drones require cutting-edge cameras to perform aerial surveillance across various industries. The cameras help collect real-time data to improve situational awareness. Discover the must-have camera features that these futuristic devices need for exceptional
“Future Radar Technologies and Applications,” a Presentation from IDTechEx
James Jeffs, Senior Technology Analyst at IDTechEx, presents the “Future Radar Technologies and Applications” tutorial at the May 2024 Embedded Vision Summit. Radar has value in a wide range of industries that are embracing automation, from delivery drones to agriculture, each requiring different performance attributes. Autonomous vehicles are perhaps one…
Using Generative AI to Enable Robots to Reason and Act with ReMEmbR
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Vision-language models (VLMs) combine the powerful language understanding of foundational LLMs with the vision capabilities of vision transformers (ViTs) by projecting text and images into the same embedding space. They can take unstructured multimodal data, reason over
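The teaser above describes how VLMs project text and images into the same embedding space. As a hedged illustration only (not NVIDIA's ReMEmbR code, and with hypothetical stand-in vectors in place of real encoder outputs), a minimal pure-Python sketch of retrieval in such a shared space via cosine similarity:

```python
import math

def normalize(v):
    # Unit-normalize so a plain dot product equals cosine similarity
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b))

# Stand-in embeddings: in a real VLM, an image encoder (e.g. a ViT) and a
# text encoder would produce these vectors; the values here are hypothetical.
image_embs = [normalize(v) for v in ([1.0, 0.2, 0.0],
                                     [0.1, 1.0, 0.1],
                                     [0.0, 0.3, 1.0])]
text_emb = normalize([0.15, 0.95, 0.05])  # a caption semantically close to image 1

# Because both modalities live in one space, the highest cosine score
# identifies the image that best matches the text query.
scores = [cosine(img, text_emb) for img in image_embs]
best = scores.index(max(scores))
print(best)  # 1
```

In production systems the same dot-product ranking is what powers text-to-image retrieval and the multimodal reasoning the post describes, just with learned, high-dimensional embeddings instead of toy vectors.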
“Ten Commandments for Building a Vision AI Product,” a Presentation from Hayden AI
Vaibhav Ghadiok, Chief Technology Officer of Hayden AI, presents the “Ten Commandments for Building a Vision AI Product” tutorial at the May 2024 Embedded Vision Summit. Over the past three decades, the convergence of machine learning, big data and enhanced computing power has transformed the field of computer vision from…
Renesas Leads ADAS Innovation with Power-efficient 4th-generation R-Car Automotive SoCs
New R-Car V4M & V4H SoC Devices Target High-Volume L2 and L2+ ADAS Market While Maintaining Scalability and Software Reusability with Existing R-Car Devices TOKYO, Japan, September 24, 2024 ― Renesas Electronics Corporation (TSE:6723), a premier supplier of advanced semiconductor solutions, today expanded its popular R-Car Family of system-on-chips (SoCs) for entry-level Advanced Driver Assistance