Vision Algorithms for Embedded Vision
Most computer vision algorithms were developed on general-purpose computer systems, with software written in a high-level language. Some of the pixel-processing operations (e.g., spatial filtering) have changed very little in the decades since they were first implemented on mainframes. In today’s broader embedded vision implementations, however, existing high-level algorithms may not fit within the system’s constraints on compute, memory, and power, requiring new innovation to achieve the desired results.
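To make the pixel-level workload concrete, here is a minimal sketch (in portable C++, with an illustrative interface not taken from any particular library) of the kind of kernel a spatial filter reduces to. On a workstation this loop is an afterthought; on an embedded target, the same arithmetic is what gets remapped to SIMD instructions, DSP intrinsics, or FPGA logic to fit the available compute, memory, and power.

```cpp
#include <cstdint>
#include <vector>

// Minimal 3x3 spatial filter (convolution) over an 8-bit grayscale image.
// The kernel uses integer coefficients with a power-of-two divisor (shift),
// a common fixed-point convention when targeting DSPs or FPGAs.
void filter3x3(const std::vector<uint8_t>& src, std::vector<uint8_t>& dst,
               int width, int height, const int kernel[9], int shift)
{
    dst.assign(src.size(), 0);
    for (int y = 1; y < height - 1; ++y) {
        for (int x = 1; x < width - 1; ++x) {
            int acc = 0;
            for (int ky = -1; ky <= 1; ++ky)
                for (int kx = -1; kx <= 1; ++kx)
                    acc += kernel[(ky + 1) * 3 + (kx + 1)] *
                           src[(y + ky) * width + (x + kx)];
            acc >>= shift;                        // fixed-point normalization
            if (acc < 0)   acc = 0;               // clamp to the 8-bit range
            if (acc > 255) acc = 255;
            dst[y * width + x] = static_cast<uint8_t>(acc);
        }
    }
}
```

For example, the kernel {1, 2, 1, 2, 4, 2, 1, 2, 1} with shift = 4 (the coefficients sum to 16) gives a simple Gaussian-style blur; a sharpening kernel drops into the same loop unchanged.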
Some of this innovation may involve replacing a general-purpose algorithm with a hardware-optimized equivalent. With such a broad range of processors for embedded vision, algorithm analysis will likely focus on ways to maximize pixel-level processing within system constraints.
This section refers to both general-purpose operations (e.g., edge detection) and hardware-optimized versions (e.g., parallel adaptive filtering in an FPGA). Many sources exist for general-purpose algorithms. The Embedded Vision Alliance is one of the best industry resources for learning about algorithms that map to specific hardware, since Alliance members share this information directly with the vision community.
General-purpose computer vision algorithms
One of the most popular sources of computer vision algorithms is the OpenCV Library. OpenCV is open source; originally implemented in C, it is now written primarily in C++, with bindings for other languages such as Python and Java. For more information, see the Alliance’s interview with OpenCV Foundation President and CEO Gary Bradski, along with other OpenCV-related materials on the Alliance website.
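As a hedged illustration of how such general-purpose algorithms are typically invoked, the sketch below runs Canny edge detection through the standard OpenCV C++ API; the file names and thresholds are placeholders chosen for the example, not recommendations.

```cpp
#include <opencv2/opencv.hpp>

// Minimal example of a general-purpose vision operation (edge detection)
// using the OpenCV C++ API. File names and thresholds are illustrative.
int main()
{
    cv::Mat img = cv::imread("input.png", cv::IMREAD_GRAYSCALE);
    if (img.empty())
        return 1;                                        // could not read image

    cv::Mat blurred, edges;
    cv::GaussianBlur(img, blurred, cv::Size(5, 5), 1.5); // suppress sensor noise
    cv::Canny(blurred, edges, 50, 150);                  // hysteresis thresholds
    cv::imwrite("edges.png", edges);
    return 0;
}
```

The few calls above hide a large amount of per-pixel work, which is exactly the cost an embedded implementation has to budget for.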
Hardware-optimized computer vision algorithms
Several programmable device vendors have created optimized versions of off-the-shelf computer vision libraries. NVIDIA works closely with the OpenCV community, for example, and has created algorithms that are accelerated by GPUs. MathWorks provides MATLAB functions/objects and Simulink blocks for many computer vision algorithms within its Vision System Toolbox, while also allowing vendors to create their own libraries of functions optimized for a specific programmable architecture. National Instruments offers its LabVIEW Vision Development Module. And Xilinx provides customers with an optimized computer vision library delivered as plug-and-play IP cores for creating hardware-accelerated vision algorithms in an FPGA.
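To make “hardware-optimized” concrete, here is a sketch of the same edge-detection pipeline expressed through OpenCV’s CUDA modules so that the per-pixel work runs on the GPU. It assumes an OpenCV build with the CUDA (opencv_contrib) modules enabled, and it is only one acceleration path among the vendor options above; file names and thresholds remain illustrative.

```cpp
#include <opencv2/core/cuda.hpp>
#include <opencv2/cudafilters.hpp>
#include <opencv2/cudaimgproc.hpp>
#include <opencv2/imgcodecs.hpp>

// The same edge-detection pipeline as before, run through OpenCV's CUDA
// modules so the per-pixel work executes on the GPU.
int main()
{
    cv::Mat host = cv::imread("input.png", cv::IMREAD_GRAYSCALE);
    if (host.empty())
        return 1;                                    // could not read image

    cv::cuda::GpuMat d_img, d_blur, d_edges;
    d_img.upload(host);                              // copy to device memory

    auto gauss = cv::cuda::createGaussianFilter(d_img.type(), d_img.type(),
                                                cv::Size(5, 5), 1.5);
    auto canny = cv::cuda::createCannyEdgeDetector(50.0, 150.0);

    gauss->apply(d_img, d_blur);                     // GPU Gaussian blur
    canny->detect(d_blur, d_edges);                  // GPU Canny edges

    cv::Mat edges;
    d_edges.download(edges);                         // copy result back to host
    cv::imwrite("edges_gpu.png", edges);
    return 0;
}
```

Note that the host-to-device upload and device-to-host download are explicit. On embedded systems, minimizing such data movement is often as important as accelerating the kernels themselves.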
Other vision libraries
- Halcon
- Matrox Imaging Library (MIL)
- Cognex VisionPro
- VXL
- CImg
- Filters
NXP Accelerates the Transformation to Software-defined Vehicles (SDV) with Agreement to Acquire TTTech Auto
NXP strengthens its automotive business with a leading software solution provider specialized in the systems, safety and security required for SDVs. TTTech Auto complements and accelerates the NXP CoreRide platform, enabling automakers to reduce complexity, maximize system performance and shorten time to market. The acquisition is the next milestone in NXP’s strategy to be the…
Optimizing Multimodal AI Inference
This blog post was originally published at Intel’s website. It is reprinted here with the permission of Intel. Multimodal models are becoming essential for AI, enabling the integration of diverse data types into a single model for richer insights. During the second Intel® Liftoff Days 2024 hackathon, Rahul Nair’s workshop on Inference of Multimodal Models…
Qualcomm Brings Industry-leading AI Innovations and Broad Collaborations to CES 2025 Across PC, Automotive, Smart Home and Enterprises
Highlights: Spotlight on bringing edge AI across devices and computing spaces, including PC, automotive, smart home and into enterprises broadly, with global ecosystem partners at the show. In PC, continued traction for the Snapdragon X Series, the launch of the new Snapdragon X platform, and the launch of a new desktop form factor and NPU-powered…
Qualcomm Aware Unveils New Services to Drive Connected Intelligence Across Industries
Highlights: Qualcomm Aware adds observability, monitoring and location services to enable the development of IoT solutions that meet specific needs and challenges of consumers and enterprises across a wide range of industries and use cases. By pre-integrating Qualcomm Aware software across select Qualcomm Technologies and third-party processors, Qualcomm Technologies will provide a simple, fast and…
Snapdragon X Series Continues to Redefine the PC Category with a New Platform, Mini Desktop Form Factors, and NPU Powered AI Experiences
Highlights: The 4th platform to join the Snapdragon X Series, Snapdragon X, brings AI PC leadership to Copilot+ PCs in the $600 range. Snapdragon X Series continues to gain traction with now over 60 designs in production or development with more than 100 coming by 2026 from leading OEMs including Asus, Acer, Dell Technologies, HP,…
Qualcomm Launches On-prem AI Appliance Solution and Inference Suite to Step-up AI Inference Privacy, Flexibility and Cost Savings Across Enterprise and Industrial Verticals
Highlights: Qualcomm AI On-Prem Appliance Solution is designed for generative AI inference and computer vision workloads on dedicated on-premises hardware – allowing sensitive customer data, fine-tuned models, and inference loads to remain on premises. Qualcomm AI Inference Suite provides ready-to-use AI applications and agents, tools and libraries for operationalizing AI from computer vision to generative…
NVIDIA TAO Toolkit: How to Build a Data-centric Pipeline to Improve Model Performance (Part 2 of 3)
This blog post was originally published at Tenyks’ website. It is reprinted here with the permission of Tenyks. During this series, we will use Tenyks to build a data-centric pipeline to debug and fix a model trained with the NVIDIA TAO Toolkit. Part 1. We demystify the NVIDIA ecosystem and define a data-centric pipeline based…
Privacy-first AI: Exploring Federated Learning
This blog post was originally published at Digica’s website. It is reprinted here with the permission of Digica. Over the last year we have witnessed an unprecedented surge in the research and deployment of new leading-edge machine learning models. Although these have already proven themselves to offer useful applications, the gains usually come at the…
Synaptics and Google Collaborate on Edge AI for the IoT
The collaboration will integrate Google’s ML core with Astra™ AI-Native hardware and open-source software to accelerate the development of context-aware devices. San Jose, CA, January 2, 2025 – Synaptics® Incorporated (Nasdaq: SYNA) today announced that it is collaborating with Google on Edge AI for the IoT to define the optimal implementation of multimodal processing for context-aware…
Nextchip Demonstration of an ISP-based Thermal Imaging Camera
Barry Fitzgerald, local representative for Nextchip, demonstrates the company’s latest edge AI and vision technologies and products at the December 2024 Edge AI and Vision Alliance Forum. Specifically, Fitzgerald demonstrates a thermal imaging camera design based on the company’s ISP. The approach shown enhances night-time detection of pedestrians and animals beyond current visual capabilities, which…
Why Generative AI is the Catalyst That Mixed Reality Needs
This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. From content creation to digital avatars, generative AI is the critical ingredient for building immersive worlds in mixed reality. The promise of mixed reality fundamentally changing the way we interact and live our lives has always been…
What is Interpolation? Understanding Image Perception in Embedded Vision Camera Systems
This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. Interpolation is a mathematical technique used to estimate unknown values that lie between known data points. Interpolation helps transform raw sensor data into stunning, full-color images in embedded vision systems. Read the blog to learn…
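As a minimal sketch of the idea in that excerpt, the function below performs bilinear interpolation, estimating a pixel value at a fractional position from its four known neighbors; real demosaicing and scaling stages in camera ISPs use more elaborate, hardware-tuned variants of the same principle.

```cpp
#include <cstdint>
#include <vector>

// Bilinear interpolation: estimate the value at fractional position (fx, fy)
// from the four surrounding pixels of a grayscale image. Illustrative only;
// the caller must keep fx within [0, width - 2] and fy within the image rows.
float bilinear(const std::vector<uint8_t>& img, int width, float fx, float fy)
{
    int   x0 = static_cast<int>(fx);
    int   y0 = static_cast<int>(fy);
    float dx = fx - x0;
    float dy = fy - y0;

    float p00 = img[y0 * width + x0];
    float p10 = img[y0 * width + x0 + 1];
    float p01 = img[(y0 + 1) * width + x0];
    float p11 = img[(y0 + 1) * width + x0 + 1];

    // Blend horizontally, then vertically, weighted by the fractional offsets.
    float top    = p00 * (1.0f - dx) + p10 * dx;
    float bottom = p01 * (1.0f - dx) + p11 * dx;
    return top * (1.0f - dy) + bottom * dy;
}
```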
From Generative to Agentic AI, Wrapping the Year’s AI Advancements
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Editor’s note: This post is part of the AI Decoded series, which demystifies AI by making the technology more accessible, and showcases new hardware, software, tools and accelerations for GeForce RTX PC and NVIDIA RTX workstation users.
NVIDIA TAO Toolkit: How to Build a Data-centric Pipeline to Improve Model Performance (Part 1 of 3)
This blog post was originally published at Tenyks’ website. It is reprinted here with the permission of Tenyks. In this series, we’ll build a data-centric pipeline using Tenyks to debug and fix a model trained with the NVIDIA TAO Toolkit. Part 1. We demystify the NVIDIA ecosystem and define a data-centric pipeline tailored for a…
Spikes are the Next Digits
This article was originally published at Digica’s website. It is reprinted here with the permission of Digica. Remember the anxiety felt back in the 1990s after the publications of the first quantum algorithms by Deutsch and Jozsa (1992), Shor (1994), and Grover (1996). Most of us expected quantum computers to be of practical use within…
Qualcomm CEO Cristiano Amon at Web Summit: GenAI is the New UI
This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. How generative AI (GenAI)-powered “agents” will change the way you interact with the digital world. The rise of artificial intelligence (AI) opens the door to a vast array of possibilities. AI-powered agents will be the key to…