LETTER FROM THE EDITOR
Dear Colleague,

I'm delighted to announce that the 2025 AI Innovation Awards are now open for nominations. Now in their second year, the AI Innovation Awards, brought to you by the Edge AI and Vision Alliance, recognize innovative new products powered by edge AI and vision technologies that were delivered to the market in 2024. The Awards encompass a diverse range of end products in consumer and enterprise markets, from smart home appliances, digital health devices and drones to in-vehicle entertainment, robotics, security/surveillance systems and environmental monitoring solutions. Eligible end products must be sold directly to consumers or enterprises, must have had first deliveries in 2024, and must implement AI or vision algorithms at the edge (on the device, on-premise or at the network edge).

Nominate a product today; it's fast, easy and free!

Brian Dipert
AUTOMOTIVE AND AGRICULTURE APPLICATIONS
Market and Technology Trends in Automotive ADAS
In this talk, Florian Domengie, Senior Technology and Market Analyst at the Yole Group, takes an in-depth look at the rapidly advancing and fast-growing space of driver assistance and autonomous driving. He explores how ADAS systems are adding new capabilities each year, and how image sensors and processors are evolving to support this. He also examines the processors used in several production ADAS camera modules. Domengie explains how vehicle makers are moving to zonal ADAS architectures, and how camera interfaces are evolving to support larger numbers of cameras connected to central ADAS units. He also touches on emerging opportunities for thermal and neuromorphic event-based cameras. Finally, he shares his firm's market share analysis and market forecast for ADAS camera modules, image sensors and processors.
Better Farming through Embedded AI
Blue River Technology, a subsidiary of John Deere, uses computer vision and deep learning to build intelligent machines that help farmers grow more food more efficiently. By enabling robots to tell the difference between crops and weeds and then only spraying the weeds, these machines are revolutionizing agriculture's approach to chemical usage. By outfitting tractors with perception sensors and autonomous driving capabilities, the company is freeing farmers from tedious jobs like tillage so they can spend more time doing higher-value tasks. In this presentation, Chris Padwick, Director of Computer Vision Machine Learning at Blue River Technology, shares how his company solves machine vision problems using deep learning, and some of the specific challenges addressed along the way (such as dust interference and the visual similarities between weeds and crops). He does a deep dive into the tech stack, including on-premise compute, image augmentations, 8-bit quantization trade-offs and tips and tricks to improve model performance.
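For readers curious what the 8-bit quantization trade-off mentioned above looks like in code, here is a minimal PyTorch sketch. It is illustrative only and does not represent Blue River's actual pipeline; the tiny model is a hypothetical stand-in for a crop-vs-weed classifier head.

# Minimal post-training INT8 quantization sketch (illustrative only; not
# Blue River's actual pipeline). Dynamic quantization stores the weights of
# selected layer types as 8-bit integers, trading a small amount of accuracy
# for a smaller model and faster CPU inference.
import os
import torch
import torch.nn as nn

# Hypothetical stand-in for a crop-vs-weed classifier head.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 64 * 64, 256),
    nn.ReLU(),
    nn.Linear(256, 2),  # two classes: crop, weed
)
model.eval()

# Convert Linear-layer weights to INT8; activations are quantized on the fly.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def size_mb(m: nn.Module) -> float:
    # Serialize the weights to disk to compare on-disk model sizes.
    torch.save(m.state_dict(), "tmp.pt")
    size = os.path.getsize("tmp.pt") / 1e6
    os.remove("tmp.pt")
    return size

print(f"FP32: {size_mb(model):.1f} MB  ->  INT8: {size_mb(quantized):.1f} MB")

Dynamic quantization converts only the weights of selected layer types to INT8; fully quantizing a convolutional backbone for an embedded target typically requires a static, calibration-based flow, which is where the accuracy trade-offs become most interesting.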
RETAIL INSIGHTS
Real-time Retail Product Classification on Android Devices Inside the Caper AI Cart
In this talk, David Scott, Senior Machine Learning Engineer at Instacart, explores deploying an embedded computer vision model on Android devices for real-time product classification, with the goal of increasing customer satisfaction during shopping. He dives into the details of achieving real-time inference on a low-resource system via a state machine approach based on a machine learning pipeline. Scott touches on the challenges of integrating seamlessly with existing Android ecosystems and running real-time machine learning without hindering other device functions. He also highlights preliminary results, with an emphasis on minimizing system resource usage. You'll learn how Instacart is leveraging AI in retail to drive innovation and enhance the shopping experience.
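As a rough illustration of the state machine idea described above, the following Python sketch gates an expensive classifier behind a cheap activity check and accumulates per-frame votes before committing a result. It is a hypothetical example, not Instacart's actual Android implementation; classify_frame and motion_detected are assumed, caller-supplied callables.

# Hypothetical sketch of gating inference with a state machine: the expensive
# product classifier only runs while an item appears to be moving into the
# cart. Illustrates the concept only; not Instacart's actual implementation.
from enum import Enum, auto

class CartState(Enum):
    IDLE = auto()          # no activity; skip inference to save CPU/battery
    ITEM_IN_VIEW = auto()  # activity detected; classify incoming frames

class ProductClassifierFSM:
    def __init__(self, classify_frame, motion_detected, votes_needed=5):
        # classify_frame and motion_detected are caller-supplied callables,
        # e.g. a quantized on-device model and a cheap frame-difference check.
        self.classify_frame = classify_frame
        self.motion_detected = motion_detected
        self.votes_needed = votes_needed
        self.state = CartState.IDLE
        self.votes = {}

    def on_frame(self, frame):
        """Return a committed product label, or None if undecided."""
        if self.state == CartState.IDLE:
            if self.motion_detected(frame):
                self.state = CartState.ITEM_IN_VIEW
            return None  # no inference while idle
        # ITEM_IN_VIEW: run the model and accumulate per-frame votes so a
        # single noisy prediction cannot trigger a wrong cart update.
        label = self.classify_frame(frame)
        self.votes[label] = self.votes.get(label, 0) + 1
        if self.votes[label] >= self.votes_needed:
            self.votes.clear()
            self.state = CartState.IDLE
            return label
        return None

The design choice the sketch highlights is that most frames never reach the model at all, which is one way to keep real-time ML from starving other functions on a low-resource device.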
Why Amazon Failed and the Future of Computer Vision in Retail
Grabango's checkout-free shopping system allows you to shop in grocery and convenience stores without having to scan your items or wait in lines. Grabango uses pure computer vision to identify every product in the store, understand what each item is and know where it is at all times. The Grabango system works seamlessly in ordinary stores without requiring retailers to make any changes to the store's planogram or merchandising strategies. Grabango's clients include ALDI, Chevron, Circle K, Copec and 7-Eleven. While Grabango is seeing strong growth, Amazon is pulling back on its Just Walk Out (JWO) technology. In this interview, Junko Yoshida, Editor-in-Chief of the Ojo-Yoshida Report, catches up with Will Glaser, Grabango's Founder and CEO, to get an update on Grabango's trajectory. They discuss why Amazon's JWO failed, the adoption of Grabango's solution and the factors fueling the company's growth. They also delve into Grabango's insights about consumer preferences.
UPCOMING INDUSTRY EVENTS
Sensing In ADAS and Autonomous Vehicles: What's Winning, and Why? – TechInsights Webinar: January 28, 2025, 9:00 am PT
Embedded Vision Summit: May 20-22, 2025, Santa Clara, California
FEATURED NEWS
Intel Launches Its First AI PC Core Ultra Desktop Processors
Qualcomm and Mistral AI Partner to Bring New Generative AI Models to Edge Devices
Vision Components Introduces an Ultra-compact OEM Module for Triangulation Sensors
Microchip Technology Expands Its 64-bit Portfolio with High-performance, Post-quantum Security- and AI-enabled PIC64HX Microprocessors
AMD Announces Its Latest Processing and Acceleration Solutions for Advancing AI
EDGE AI AND VISION PRODUCT OF THE YEAR WINNER SHOWCASE
Qualcomm Snapdragon X Elite Platform (Best Edge AI Processor)
Qualcomm's Snapdragon X Elite Platform is the 2024 Edge AI and Vision Product of the Year Award Winner in the Edge AI Processors category. The Snapdragon X Elite is the first Snapdragon based on the new Qualcomm Oryon CPU architecture, which outperforms every other laptop CPU in its class. The Snapdragon X Elite's heterogeneous AI Engine delivers a combined performance of greater than 70 TOPS across the NPU, CPU and GPU, with the powerful integrated NPU alone capable of up to 45 TOPS. Beyond raw performance, on-device AI is also judged by model accuracy and response time and, for large language models, by generation speed measured in tokens per second; the Snapdragon X Elite can run a 7-billion-parameter Llama 2 model on-device at 30 tokens per second. The Oryon CPU subsystem outperforms the competitor's high-end 14-core laptop chip in peak performance by 60%, and can match the competitor's performance while using 65% less power. Compared to the leading x86 integrated GPU, the Snapdragon X Elite delivers up to 80% faster performance, and can match the competitor's highest performance with 80% less power consumption. Developers will have access to the latest AI SDKs, too: the Snapdragon X Elite supports all of the leading AI frameworks, including TensorFlow, PyTorch, ONNX, Keras and more. Please see here for more information on Qualcomm's Snapdragon X Elite Platform.

The Edge AI and Vision Product of the Year Awards celebrate the innovation of the industry's leading companies that are developing and enabling the next generation of edge AI and computer vision products. Winning a Product of the Year Award recognizes a company's leadership in edge AI and computer vision as evaluated by independent industry experts.
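For developers wondering what targeting one of those frameworks looks like in practice, here is a simple sketch that exports a small PyTorch model to ONNX (one of the listed formats) and sanity-checks it with ONNX Runtime on the host. The model is a hypothetical stand-in, and the device-specific step of deploying onto the Snapdragon NPU via Qualcomm's AI SDKs is not shown.

# Illustrative sketch only: export a hypothetical PyTorch model to ONNX as a
# common first step toward edge deployment, then verify it on the host with
# ONNX Runtime. Targeting the NPU itself requires Qualcomm's AI SDKs, which
# are not shown here.
import torch
import torch.nn as nn
import onnxruntime as ort

# Hypothetical small model standing in for a real workload.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 10),
)
model.eval()

# Export with a fixed example input shape and named tensors.
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(
    model, dummy, "model.onnx",
    input_names=["input"], output_names=["logits"], opset_version=17,
)

# Sanity-check the exported model with ONNX Runtime before handing it to a
# device-specific toolchain.
session = ort.InferenceSession("model.onnx")
logits = session.run(None, {"input": dummy.numpy()})[0]
print(logits.shape)  # (1, 10)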