Vision Algorithms for Embedded Vision
Most computer vision algorithms were developed on general-purpose computer systems, with software written in a high-level language. Some pixel-processing operations (e.g., spatial filtering) have changed very little in the decades since they were first implemented on mainframes. With today's broader range of embedded vision implementations, however, existing high-level algorithms may not fit within the system's constraints, requiring new innovation to achieve the desired results.
Some of this innovation may involve replacing a general-purpose algorithm with a hardware-optimized equivalent. With such a broad range of processors for embedded vision, algorithm analysis will likely focus on ways to maximize pixel-level processing within system constraints.
This section refers to both general-purpose operations (e.g., edge detection) and hardware-optimized versions (e.g., parallel adaptive filtering in an FPGA). Many sources exist for general-purpose algorithms. The Embedded Vision Alliance is one of the best industry resources for learning about algorithms that map to specific hardware, since Alliance Members share this information directly with the vision community.
General-purpose computer vision algorithms

One of the most popular sources of computer vision algorithms is the OpenCV Library. OpenCV is open source; originally written in C, it is now implemented primarily in C++, with bindings for languages such as Python and Java. For more information, see the Alliance's interview with OpenCV Foundation President and CEO Gary Bradski, along with other OpenCV-related materials on the Alliance website.
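To make the general-purpose case concrete, below is a minimal sketch of Canny edge detection using OpenCV's C++ API. The input filename and the threshold values are illustrative assumptions, not recommendations from the library.

```cpp
// Minimal OpenCV (C++) sketch: general-purpose Canny edge detection.
// "input.png" is a placeholder path; thresholds are typical starting values.
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>

int main() {
    // Load the image as single-channel grayscale.
    cv::Mat gray = cv::imread("input.png", cv::IMREAD_GRAYSCALE);
    if (gray.empty()) return 1;

    // Light Gaussian blur to suppress noise before edge detection.
    cv::Mat blurred, edges;
    cv::GaussianBlur(gray, blurred, cv::Size(5, 5), 1.5);

    // Canny hysteresis thresholds (50/150) are illustrative defaults.
    cv::Canny(blurred, edges, 50, 150);

    // Save the edge map for inspection.
    cv::imwrite("edges.png", edges);
    return 0;
}
```

On an embedded target, this kind of whole-frame, CPU-only processing is often the baseline against which hardware-optimized implementations are measured.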
Hardware-optimized computer vision algorithms
Several programmable device vendors have created optimized versions of off-the-shelf computer vision libraries. NVIDIA, for example, works closely with the OpenCV community and has created algorithms that are accelerated by GPGPUs. MathWorks provides MATLAB functions/objects and Simulink blocks for many computer vision algorithms within its Computer Vision System Toolbox, while also allowing vendors to create their own libraries of functions optimized for a specific programmable architecture. National Instruments offers its LabVIEW Vision Development Module. And Xilinx is another example of a vendor with an optimized computer vision library, provided to customers as plug-and-play IP cores for creating hardware-accelerated vision algorithms in an FPGA.
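As a hedged illustration of what a hardware-accelerated equivalent can look like, the sketch below runs the same Canny operation through OpenCV's CUDA module. It assumes an OpenCV build with CUDA support and a compatible NVIDIA GPU; it is not a description of any particular vendor's proprietary library.

```cpp
// Hedged sketch: the same Canny operation offloaded to an NVIDIA GPU via
// OpenCV's CUDA module (requires an OpenCV build with CUDA enabled).
#include <opencv2/imgcodecs.hpp>
#include <opencv2/cudaimgproc.hpp>

int main() {
    cv::Mat gray = cv::imread("input.png", cv::IMREAD_GRAYSCALE);
    if (gray.empty()) return 1;

    // Copy the image into device (GPU) memory.
    cv::cuda::GpuMat d_gray(gray), d_edges;

    // Create a GPU Canny detector with the same illustrative 50/150 thresholds.
    cv::Ptr<cv::cuda::CannyEdgeDetector> canny =
        cv::cuda::createCannyEdgeDetector(50.0, 150.0);
    canny->detect(d_gray, d_edges);

    // Copy the result back to host memory and save it.
    cv::Mat edges;
    d_edges.download(edges);
    cv::imwrite("edges_gpu.png", edges);
    return 0;
}
```

The host-side structure is unchanged; only the pixel-level work moves to the accelerator, which is the general pattern behind the vendor libraries listed above.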
Other vision libraries
- MVTec HALCON
- Matrox Imaging Library (MIL)
- Cognex VisionPro
- VXL
- CImg

SAM 2 + GPT-4o: Cascading Foundation Models via Visual Prompting (Part 1)
This article was originally published at Tenyks’ website. It is reprinted here with the permission of Tenyks. In Part 1 of this article we introduce Segment Anything Model 2 (SAM 2). Then, we walk you through how you can set it up and run inference on your own video clips. Learn more about visual prompting

DeepSeek-R1 1.5B on SiMa.ai for Less Than 10 Watts
February 18, 2025 09:00 AM Eastern Standard Time–SAN JOSE, Calif.–(BUSINESS WIRE)–SiMa.ai, the software-centric, embedded edge machine learning system-on-chip (MLSoC) company, today announced the successful implementation of DeepSeek-R1-Distill-Qwen-1.5B on its ONE Platform for Edge AI, achieving breakthrough performance within an unprecedented power envelope of under 10 watts. This implementation marks a significant advancement in efficient, secure

From Brain to Binary: Can Neuro-inspired Research Make CPUs the Future of AI Inference?
This article was originally published at Tryolabs’ website. It is reprinted here with the permission of Tryolabs. In the ever-evolving landscape of AI, the demand for powerful Large Language Models (LLMs) has surged. This has led to an unrelenting thirst for GPUs and a shortage that causes headaches for many organizations. But what if there

From Seeing to Understanding: LLMs Leveraging Computer Vision
This blog post was originally published at Tryolabs’ website. It is reprinted here with the permission of Tryolabs. From Face ID unlocking our phones to counting customers in stores, Computer Vision has already transformed how businesses operate. As Generative AI (GenAI) becomes more compelling and accessible, this tried-and-tested technology is entering a new era of

Autonomous Cars are Leveling Up: Exploring Vehicle Autonomy
When the Society of Automotive Engineers released its definitions of varying levels of automation, from Level 0 to Level 5, it became easier to define and distinguish between the many capabilities and advancements of autonomous vehicles. Level 0 describes an older model of vehicle with no automated features, while Level 5 describes a future ideal

Introducing Qualcomm Custom-built AI Models, Now Available on Qualcomm AI Hub
This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. We’re thrilled to announce that five custom-built computer vision (CV) models are now available on Qualcomm AI Hub! Qualcomm Technologies’ custom-built models were developed by the Qualcomm R&D team, optimized for our platforms and designed with end-user applications

The Ultimate Guide to Depth Perception and 3D Imaging Technologies
This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. Depth perception helps mimic natural spatial awareness by determining how far or close objects are, which makes it invaluable for 3D imaging systems. Get expert insights on how depth perception works, the cues involved, as

New AI SDKs and Tools Released for NVIDIA Blackwell GeForce RTX 50 Series GPUs
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. NVIDIA recently announced a new generation of PC GPUs—the GeForce RTX 50 Series—alongside new AI-powered SDKs and tools for developers. Powered by the NVIDIA Blackwell architecture, fifth-generation Tensor Cores and fourth-generation RT Cores, the GeForce RTX 50

10 Common Mistakes in Data Science Projects and How to Avoid Them
This blog post was originally published at Digica’s website. It is reprinted here with the permission of Digica. While Data Science is an established and highly esteemed profession with strong foundations in science, it is important to remember that it is still a craft and, as such, it is susceptible to errors coming from processes

RAG for Vision: Building Multimodal Computer Vision Systems
This blog post was originally published at Tenyks’ website. It is reprinted here with the permission of Tenyks. This article explores the exciting world of Visual RAG, covering its significance and how it’s revolutionizing traditional computer vision pipelines. From understanding the basics of RAG to its specific applications in visual tasks and surveillance, we’ll examine

Qualcomm AI Research Makes Diverse Datasets Available to Advance Machine Learning Research
This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. Qualcomm AI Research has published a variety of datasets for research use. These datasets can be used to train models in the kinds of applications most common to mobile computing, including: advanced driver assistance systems (ADAS), extended reality

Make Your Existing NVIDIA Jetson Orin Devices Faster with Super Mode
This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. The NVIDIA Jetson Orin Nano Super Developer Kit, with its compact size and high-performance computing capabilities, is redefining generative AI for small edge devices. NVIDIA has dropped an exciting new update to their existing Jetson

Empowering Civil Construction with AI-driven Spatial Perception
This blog post was originally published at Au-Zone Technologies’ website. It is reprinted here with the permission of Au-Zone Technologies. Transforming Safety, Efficiency, and Automation in Construction Ecosystems In the rapidly evolving field of civil construction, AI-based spatial perception technologies are reshaping the way machinery operates in dynamic and unpredictable environments. These systems enable advanced

Can Money Buy AI Dominance?
This market research report was originally published at the Yole Group’s website. It is reprinted here with the permission of the Yole Group. The announcement of the creation of Stargate last week, as well as the revelation of DeepSeek-V3’s performance at a much lower cost than OpenAI and its competitors, are both raising a lot

The Future of AI in Business: Trends to Watch
This blog post was originally published at Digica’s website. It is reprinted here with the permission of Digica. In a world increasingly shaped by the rapid evolution of artificial intelligence, 2024 stands as another momentous year, with advancements that continue to reshape how we live, work, and imagine our future. From the rapid acceleration in

Multimodal Large Language Models: Transforming Computer Vision
This blog post was originally published at Tenyks’ website. It is reprinted here with the permission of Tenyks. This article introduces multimodal large language models (MLLMs) [1], their applications using challenging prompts, and the top models reshaping computer vision as we speak. What is a multimodal large language model (MLLM)? In layman’s terms, a multimodal