Resources
In-depth information about edge AI and vision applications, technologies, products, markets, and trends.
The content in this section of the website comes from Edge AI and Vision Alliance members and other industry luminaries.
All Resources

Intel Unveils Leadership AI and Networking Solutions with Xeon 6 Processors
Intel completes its Xeon 6 portfolio of processors, delivering a CPU for the broadest set of workloads in the industry. News highlights: Intel launches new Intel® Xeon® 6 processors with Performance-cores, offering industry-leading performance across data…

Detailed Imaging and Pixel-level Accuracy with Emerging Image Sensors
Emerging image sensors are being developed with capabilities beyond well-established CMOS detectors, with the possibility of detecting broader parts of the light spectrum that…

STMicroelectronics Reveals Advanced 2-in-1 MEMS Accelerometer in One IMU for Intensive Impact Sensing in Wearables and Trackers
16g/80g dual-range IMU with edge processing brings advanced sports monitoring to affordable wearables to “train like a pro!” Geneva, Switzerland, February 24, 2025 — STMicroelectronics’ innovative LSM6DSV80X combines two accelerometer structures for 16g and 80g…

The Intersection of AI and Human Expertise: How Custom Solutions Enhance Collaboration
This blog post was originally published at Digica’s website. It is reprinted here with the permission of Digica. Artificial Intelligence-based solutions have become increasingly prevalent, transforming industries, businesses, and daily life. However, rather than completely…

Top-tier ADAS Systems: Exploring Automotive Radar Technology
Radars have had a place in the automotive sector for over two decades, beginning with their first use for adaptive cruise control, with many further developments since. IDTechEx’s “Automotive Radar Market 2025-2045: Robotaxis…

SAM 2 + GPT-4o: Cascading Foundation Models via Visual Prompting (Part 1)
This article was originally published at Tenyks’ website. It is reprinted here with the permission of Tenyks. In Part 1 of this article we introduce Segment Anything Model 2 (SAM 2). Then, we walk you…

Vision Components and Phytec Announce Partnership: MIPI Cameras and Processor Boards Perfectly Integrated
Mainz, February 20, 2025 – Phytec and Vision Components have agreed to collaborate on the integration of cameras into embedded systems. The partnership enables users to easily operate all of VC’s 50+ MIPI cameras with…

AI Disruption is Driving Innovation in On-device Inference
This article was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. How the proliferation and evolution of generative models will transform the AI landscape and unlock value. The introduction…

DeepSeek-R1 1.5B on SiMa.ai for Less Than 10 Watts
February 18, 2025 09:00 AM Eastern Standard Time–SAN JOSE, Calif.–(BUSINESS WIRE)–SiMa.ai, the software-centric, embedded edge machine learning system-on-chip (MLSoC) company, today announced the successful implementation of DeepSeek-R1-Distill-Qwen-1.5B on its ONE Platform for Edge AI, achieving…
Technologies

3LC: What is It and Who is It For?
This blog post was originally published at 3LC’s website. It is reprinted here with the permission of 3LC. AI performance isn’t just about better architectures or more compute – it’s about better data. Even perfectly labeled datasets can hold hidden inefficiencies that limit accuracy. See how teams use 3LC to refine datasets, optimize labeling strategies…

How e-con Systems’ TintE ISP IP Core Increases the Efficiency of Embedded Vision Applications
This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. e-con Systems has developed TintE™, a ready-to-deploy ISP IP core engineered to enhance image quality in camera systems. Built to deliver high performance on leading FPGA platforms, it accelerates real-time image processing with…

MIPS Drives Real-time Intelligence into Physical AI Platforms
The new MIPS Atlas product suite delivers cutting-edge compute subsystems that empower autonomous edge solutions to sense, think and act with precision, driving innovation across the growing physical AI opportunity in industrial robotics and autonomous platform markets. SAN JOSE, CA. – March 4th, 2025 – MIPS, the world’s leading supplier of compute subsystems for autonomous…
Applications

Vision Language Model Prompt Engineering Guide for Image and Video Understanding
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Vision language models (VLMs) are evolving at a breakneck speed. In 2020, the first VLMs revolutionized the generative AI landscape by bringing visual understanding to large language models (LLMs) through the use of a vision encoder. These…
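To make the idea concrete, here is a minimal, hypothetical sketch of image prompting with a VLM served behind an OpenAI-compatible chat endpoint; the endpoint URL, model name, and image file below are placeholders chosen for illustration, not details taken from the NVIDIA guide.

```python
# Hypothetical sketch only: prompt a vision language model with one image and one question.
# Assumes an OpenAI-compatible chat endpoint (as exposed by many VLM serving stacks);
# base_url, api_key, model name, and file name are placeholders, not from the article.
import base64
from openai import OpenAI

def ask_vlm(image_path: str, question: str) -> str:
    # Encode the local image as a base64 data URL so it can be embedded in the request.
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")
    response = client.chat.completions.create(
        model="example-vlm",  # placeholder model name
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
        max_tokens=256,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask_vlm("street.jpg", "List any traffic hazards visible in this image."))
```

The text element of the message is where prompt engineering of the kind the guide covers applies; the image travels alongside it as a second content part.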

In-cabin Sensing 2025-2035: Technologies, Opportunities, and Markets
For more information, visit https://www.idtechex.com/en/research-report/in-cabin-sensing-2025-2035-technologies-opportunities-and-markets/1077. The yearly market size for in-cabin sensors will exceed $6B by 2035. Regulations like the Advanced Driver Distraction Warning (ADDW) and General Safety Regulation (GSR) are driving the growing importance of in-cabin sensing, particularly driver and occupancy monitoring systems. IDTechEx’s report, “In-Cabin Sensing 2025-2035: Technologies, Opportunities, Markets”, examines these regulations…

Nearly $1B Flows into Automotive Radar Startups
According to IDTechEx’s latest report, “Automotive Radar Market 2025-2045: Robotaxis & Autonomous Cars”, newly established radar startups worldwide have raised nearly US$1.2 billion over the past 12 years, of which approximately US$980 million is directed toward the automotive sector. Through more than 40 funding rounds, these companies have driven the implementation and advancement of…
Functions

3LC: What is It and Who is It For?
This blog post was originally published at 3LC’s website. It is reprinted here with the permission of 3LC. AI performance isn’t just about better architectures or more compute – it’s about better data. Even perfectly labeled datasets can hold hidden inefficiencies that limit accuracy. See how teams use 3LC to refine datasets, optimize labeling strategies…

Vision Language Model Prompt Engineering Guide for Image and Video Understanding
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Vision language models (VLMs) are evolving at a breakneck speed. In 2020, the first VLMs revolutionized the generative AI landscape by bringing visual understanding to large language models (LLMs) through the use of a vision encoder. These…

SAM 2 + GPT-4o: Cascading Foundation Models via Visual Prompting (Part 2)
This article was originally published at Tenyks’ website. It is reprinted here with the permission of Tenyks. In Part 2 of our Segment Anything Model 2 (SAM 2) Series, we show how foundation models (e.g., GPT-4o, Claude Sonnet 3.5 and YOLO-World) can be used to generate visual inputs (e.g., bounding boxes) for SAM 2. Learn…
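The cascade the article describes can be pictured with a short, hedged sketch: a detection model proposes bounding boxes, and each box is handed to SAM 2 as a visual prompt. The packages, checkpoint names, and call signatures below (ultralytics YOLO-World plus Meta’s sam2 image predictor) are assumptions chosen for illustration, not the article’s actual code.

```python
# Hedged sketch of the detector -> SAM 2 cascade: boxes in, pixel-accurate masks out.
# Assumes the ultralytics and sam2 packages; checkpoint ids and exact signatures
# should be verified against the installed versions -- they are illustrative only.
import numpy as np
from PIL import Image
from ultralytics import YOLOWorld
from sam2.sam2_image_predictor import SAM2ImagePredictor

image = np.array(Image.open("scene.jpg").convert("RGB"))  # placeholder image path

# Step 1: an open-vocabulary detector produces the visual prompts (bounding boxes).
detector = YOLOWorld("yolov8s-world.pt")        # placeholder checkpoint
detector.set_classes(["person", "bicycle"])     # classes we want SAM 2 to segment
boxes = detector.predict(image)[0].boxes.xyxy.cpu().numpy()  # (N, 4) xyxy boxes

# Step 2: SAM 2 converts each box prompt into a segmentation mask.
predictor = SAM2ImagePredictor.from_pretrained("facebook/sam2-hiera-large")  # placeholder id
predictor.set_image(image)

masks = []
for box in boxes:
    mask, score, _ = predictor.predict(box=box, multimask_output=False)
    masks.append(mask[0])

print(f"Generated {len(masks)} masks from {len(boxes)} detector boxes.")
```

In the setup the article describes, the boxes could equally come from GPT-4o or Claude rather than a local detector; the only contract is that whatever model runs first emits coordinates SAM 2 can consume as prompts.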