Resources
In-depth information about edge AI and vision applications, technologies, products, markets, and trends.
The content in this section of the website comes from Edge AI and Vision Alliance members and other industry luminaries.
All Resources

DeepSeek-R1 1.5B on SiMa.ai for Less Than 10 Watts
February 18, 2025 09:00 AM Eastern Standard Time–SAN JOSE, Calif.–(BUSINESS WIRE)–SiMa.ai, the software-centric, embedded edge machine learning system-on-chip (MLSoC) company, today announced the successful implementation of DeepSeek-R1-Distill-Qwen-1.5B on its ONE Platform for Edge AI, achieving

3 Challenges Facing the Wearable Sensors Market
User considerations and potential barriers to adoption of new wearable technology. The wearable sensors market encompasses an array of exciting technologies with the potential to unlock a host of applications in healthcare, extended reality, industrial

The Critical Role of FPGAs in Modern Embedded Vision Systems (Part 1)
This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. FPGAs are rapidly becoming a best-fit solution for meeting the unique demands of high-performing embedded

FRAMOS at Embedded World 2025
FRAMOS is about to showcase innovative imaging technology at Embedded World 2025. Munich/Nuremberg, Bavaria, Germany – February 18th, 2025 – FRAMOS, the leading global expert in embedded vision systems, will participate at the Embedded World

From Brain to Binary: Can Neuro-inspired Research Make CPUs the Future of AI Inference?
This article was originally published at Tryolabs’ website. It is reprinted here with the permission of Tryolabs. In the ever-evolving landscape of AI, the demand for powerful Large Language Models (LLMs) has surged. This has

$1 Trillion by 2030: The Semiconductor Devices Industry is On Track
This market research report was originally published at the Yole Group’s website. It is reprinted here with the permission of the Yole Group. Entering a new growth cycle, the surge in the semiconductor industry is

From Seeing to Understanding: LLMs Leveraging Computer Vision
This blog post was originally published at Tryolabs’ website. It is reprinted here with the permission of Tryolabs. From Face ID unlocking our phones to counting customers in stores, Computer Vision has already transformed how

Autonomous Cars are Leveling Up: Exploring Vehicle Autonomy
When the Society of Automotive Engineers released its definitions of the varying levels of automation, from Level 0 to Level 5, it became easier to define and distinguish between the many capabilities and advancements of autonomous

Introducing Qualcomm Custom-built AI Models, Now Available on Qualcomm AI Hub
This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. We’re thrilled to announce that five custom-built computer vision (CV) models are now available on Qualcomm AI Hub!
Technologies

Unveiling the Qualcomm Dragonwing Brand Portfolio: Solutions For a New Era of Industrial Innovation
This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. Our mission is to deliver intelligent computing everywhere. We have an amazing suite of products, and while you may be familiar with the Snapdragon brand portfolio, you may not know that we have a whole suite of

3LC: What is It and Who is It For?
This blog post was originally published at 3LC’s website. It is reprinted here with the permission of 3LC. AI performance isn’t just about better architectures or more compute – it’s about better data. Even perfectly labeled datasets can hold hidden inefficiencies that limit accuracy. See how teams use 3LC to refine datasets, optimize labeling strategies,

How e-con Systems’ TintE ISP IP Core Increases the Efficiency of Embedded Vision Applications
This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. e-con Systems has developed TintE™, a ready-to-deploy ISP IP core engineered to enhance image quality in camera systems. Built to deliver high performance on leading FPGA platforms, it accelerates real-time image processing with
Applications

Unveiling the Qualcomm Dragonwing Brand Portfolio: Solutions For a New Era of Industrial Innovation
This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. Our mission is to deliver intelligent computing everywhere. We have an amazing suite of products, and while you may be familiar with the Snapdragon brand portfolio, you may not know that we have a whole suite of

Vision Language Model Prompt Engineering Guide for Image and Video Understanding
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Vision language models (VLMs) are evolving at a breakneck speed. In 2020, the first VLMs revolutionized the generative AI landscape by bringing visual understanding to large language models (LLMs) through the use of a vision encoder. These

In-cabin Sensing 2025-2035: Technologies, Opportunities, and Markets
For more information, visit https://www.idtechex.com/en/research-report/in-cabin-sensing-2025-2035-technologies-opportunities-and-markets/1077. The yearly market size for in-cabin sensors will exceed $6B by 2035. Regulations like the Advanced Driver Distraction Warning (ADDW) and General Safety Regulation (GSR) are driving the growing importance of in-cabin sensing, particularly driver and occupancy monitoring systems. IDTechEx’s report, “In-Cabin Sensing 2025-2035: Technologies, Opportunities, Markets”, examines these regulations
Functions

3LC: What is It and Who is It For?
This blog post was originally published at 3LC’s website. It is reprinted here with the permission of 3LC. AI performance isn’t just about better architectures or more compute – it’s about better data. Even perfectly labeled datasets can hold hidden inefficiencies that limit accuracy. See how teams use 3LC to refine datasets, optimize labeling strategies,

Vision Language Model Prompt Engineering Guide for Image and Video Understanding
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Vision language models (VLMs) are evolving at a breakneck speed. In 2020, the first VLMs revolutionized the generative AI landscape by bringing visual understanding to large language models (LLMs) through the use of a vision encoder. These

SAM 2 + GPT-4o: Cascading Foundation Models via Visual Prompting (Part 2)
This article was originally published at Tenyks’ website. It is reprinted here with the permission of Tenyks. In Part 2 of our Segment Anything Model 2 (SAM 2) series, we show how foundation models (e.g., GPT-4o, Claude 3.5 Sonnet and YOLO-World) can be used to generate visual inputs (e.g., bounding boxes) for SAM 2. Learn