Vision Algorithms for Embedded Vision

Most computer vision algorithms were developed on general-purpose computer systems with software written in a high-level language. Some of the pixel-processing operations (ex: spatial filtering) have changed very little in the decades since they were first implemented on mainframes. With today’s broader embedded vision implementations, existing high-level algorithms may not fit within the system constraints, requiring new innovation to achieve the desired results.
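To make the idea concrete, here is a minimal plain-Python sketch of the kind of pixel-level spatial filtering mentioned above (a 3×3 box filter). It is illustrative only, not production embedded code; the function name and the border-handling choice are ours, not from any particular library.

```python
# Minimal 3x3 box-filter sketch: a classic pixel-processing operation
# that has changed little since the mainframe era. The image is a list
# of rows of grayscale values; border pixels are left unfiltered.

def box_filter_3x3(img):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]  # copy input; borders stay as-is
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    acc += img[y + dy][x + dx]
            out[y][x] = acc // 9  # average of the 3x3 neighborhood
    return out

if __name__ == "__main__":
    img = [[0, 0, 0, 0],
           [0, 9, 9, 0],
           [0, 9, 9, 0],
           [0, 0, 0, 0]]
    print(box_filter_3x3(img))
```

Even this toy version shows why such kernels map well to hardware: every output pixel depends only on a small, fixed neighborhood, so the work is regular and highly parallel.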

Some of this innovation may involve replacing a general-purpose algorithm with a hardware-optimized equivalent. With such a broad range of processors for embedded vision, algorithm analysis will likely focus on ways to maximize pixel-level processing within system constraints.
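One common example of such a rework is exploiting separability: a 2-D K×K box filter can be replaced by two 1-D passes, cutting the per-pixel work from K×K additions to 2K, which can make the difference in fitting a DSP or FPGA resource budget. The sketch below is our own illustration of that restructuring (function names and integer-rounding choices are ours), not a vendor-optimized implementation.

```python
# Separable box filter: two 1-D passes replace one 2-D pass.
# For a KxK kernel this is 2K adds per pixel instead of K*K.

def box_1d(row, k=3):
    # 1-D running average over a window of k samples; borders untouched.
    r = k // 2
    out = row[:]
    for i in range(r, len(row) - r):
        out[i] = sum(row[i - r:i + r + 1]) // k
    return out

def box_2d_separable(img, k=3):
    # Horizontal pass over each row...
    tmp = [box_1d(row, k) for row in img]
    # ...then a vertical pass over each column.
    cols = [box_1d([tmp[y][x] for y in range(len(tmp))], k)
            for x in range(len(tmp[0]))]
    # Transpose back to row-major order.
    return [[cols[x][y] for x in range(len(cols))]
            for y in range(len(tmp))]
```

Note that repeated integer division can round slightly differently than a single 2-D pass; on real hardware the intermediate precision would be chosen to control exactly this kind of error.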

This section refers to both general-purpose operations (ex: edge detection) and hardware-optimized versions (ex: parallel adaptive filtering in an FPGA). Many sources exist for general-purpose algorithms. The Embedded Vision Alliance is one of the best industry resources for learning about algorithms that map to specific hardware, since Alliance Members share this information directly with the vision community.

General-purpose computer vision algorithms

Figure 1: Introduction to OpenCV

One of the most popular sources of computer vision algorithms is the OpenCV library. OpenCV is open source; originally written in C, it is now implemented primarily in C++, with bindings for languages such as Python. For more information, see the Alliance’s interview with OpenCV Foundation President and CEO Gary Bradski, along with other OpenCV-related materials on the Alliance website.

Hardware-optimized computer vision algorithms

Several programmable device vendors have created optimized versions of off-the-shelf computer vision libraries. NVIDIA, for example, works closely with the OpenCV community and has created GPU-accelerated versions of many algorithms. MathWorks provides MATLAB functions/objects and Simulink blocks for many computer vision algorithms within its Vision System Toolbox, while also allowing vendors to create their own libraries of functions optimized for a specific programmable architecture. National Instruments offers its LabVIEW Vision Development Module. And Xilinx is another example of a vendor with an optimized computer vision library, provided to customers as plug-and-play IP cores for creating hardware-accelerated vision algorithms in an FPGA.

Other vision libraries

  • Halcon
  • Matrox Imaging Library (MIL)
  • Cognex VisionPro
  • VXL
  • CImg
  • Filters

“Omnilert Gun Detect: Harnessing Computer Vision to Tackle Gun Violence,” a Presentation from Omnilert

Chad Green, Director of Artificial Intelligence at Omnilert, presents the “Omnilert Gun Detect: Harnessing Computer Vision to Tackle Gun Violence” tutorial at the May 2024 Embedded Vision Summit. In the United States in 2023, there were 658 mass shootings, and 42,996 people lost their lives to gun violence. Detecting and…

Accelerating LLMs with llama.cpp on NVIDIA RTX Systems

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. The NVIDIA RTX AI for Windows PCs platform offers a thriving ecosystem of thousands of open-source models for application developers to leverage and integrate into Windows applications. Notably, llama.cpp is one popular tool, with over 65K GitHub

“Adventures in Moving a Computer Vision Solution from Cloud to Edge,” a Presentation from MetaConsumer

Nate D’Amico, CTO and Head of Product at MetaConsumer, presents the “Adventures in Moving a Computer Vision Solution from Cloud to Edge” tutorial at the May 2024 Embedded Vision Summit. Optix is a computer vision-based AI system that measures advertising and media exposures on mobile devices for real-time marketing optimization.…

Redefining Hybrid Meetings With AI-powered 360° Videoconferencing

This blog post was originally published at Ambarella’s website. It is reprinted here with the permission of Ambarella. The global pandemic catalyzed a boom in videoconferencing that continues to grow as companies embrace hybrid work models and seek more sustainable approaches to business communication with less travel. Now, with videoconferencing becoming a cornerstone of modern

“Bridging Vision and Language: Designing, Training and Deploying Multimodal Large Language Models,” a Presentation from Meta Reality Labs

Adel Ahmadyan, Staff Engineer at Meta Reality Labs, presents the “Bridging Vision and Language: Designing, Training and Deploying Multimodal Large Language Models” tutorial at the May 2024 Embedded Vision Summit. In this talk, Ahmadyan explores the use of multimodal large language models in real-world edge applications. He begins by explaining…

“Introduction to Depth Sensing,” a Presentation from Meta

Harish Venkataraman, Depth Cameras Architecture and Tech Lead at Meta, presents the “Introduction to Depth Sensing” tutorial at the May 2024 Embedded Vision Summit. We live in a three-dimensional world, and the ability to perceive in three dimensions is essential for many systems. In this talk, Venkataraman introduces the main…

Deploying Accelerated Llama 3.2 from the Edge to the Cloud

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Expanding the open-source Meta Llama collection of models, the Llama 3.2 collection includes vision language models (VLMs), small language models (SLMs), and an updated Llama Guard model with support for vision. When paired with the NVIDIA accelerated

“Advancing Embedded Vision Systems: Harnessing Hardware Acceleration and Open Standards,” a Presentation from the Khronos Group

Neil Trevett, President of the Khronos Group, presents the “Advancing Embedded Vision Systems: Harnessing Hardware Acceleration and Open Standards” tutorial at the May 2024 Embedded Vision Summit. Offloading processing to accelerators enables embedded vision systems to process workloads that exceed the capabilities of CPUs. However, parallel processors add complexity as…

When, Where and How AI Should Be Applied

Phil Koopman dissects the strengths and weaknesses of machine-learning-based AI. AI does amazing stuff. No question about it. But how hard have we really thought about “machine-learning capabilities” for applications? Phil Koopman, professor at Carnegie Mellon University, delivered a keynote on Sept. 11, 2024 at the Business of Semiconductor Summit (BOSS 2024), concentrating on

“Using AI to Enhance the Well-being of the Elderly,” a Presentation from Kepler Vision Technologies

Harro Stokman, CEO of Kepler Vision Technologies, presents the “Using Artificial Intelligence to Enhance the Well-being of the Elderly” tutorial at the May 2024 Embedded Vision Summit. This presentation provides insights into an innovative application of artificial intelligence and advanced computer vision technologies in the healthcare sector, specifically focused on…

AI Model Training Costs Have Skyrocketed by More than 4,300% Since 2020

Over the past five years, AI models have become much more complex and capable, tailored to perform specific tasks across industries and provide better efficiency, accuracy and automation. However, the cost of training these systems has exploded. According to data presented by AltIndex.com, AI model training costs have skyrocketed by more than 4,300% since

BrainChip Demonstration of LLM-RAG with a Custom Trained TENNs Model

Kurt Manninen, Senior Solutions Architect at BrainChip, demonstrates the company’s latest edge AI and vision technologies and products at the September 2024 Edge AI and Vision Alliance Forum. Specifically, Manninen demonstrates his company’s Temporal Event-Based Neural Network (TENN) foundational large language model with 330M parameters, augmented with a Retrieval-Augmented Generation (RAG) output to replace user

Advex AI Demonstration of Accelerating Machine Vision with Synthetic AI Data

Pedro Pachuca, CEO at Advex AI, demonstrates the company’s latest edge AI and vision technologies and products at the September 2024 Edge AI and Vision Alliance Forum. Specifically, Pachuca demonstrates Advex’s ability to ingest just 10 images and produce thousands of labeled, synthetic images in just hours. These synthetic images cover the distribution of variations

“Testing Cloud-to-Edge Deep Learning Pipelines: Ensuring Robustness and Efficiency,” a Presentation from Instrumental

Rustem Feyzkhanov, Staff Machine Learning Engineer at Instrumental, presents the “Testing Cloud-to-Edge Deep Learning Pipelines: Ensuring Robustness and Efficiency” tutorial at the May 2024 Embedded Vision Summit. A cloud-to-edge deep learning pipeline is a fully automated conduit for training and deploying models to the edge. This enables quick model retraining…

AI PCs to Make Up 60% of Total PC Sales in 2027, 3x More than This Year

Although traditional PCs remain the number one choice for most consumers buying a new device, the surging demand for AI-driven applications has boosted the popularity of AI PCs, both in consumer and professional sectors. This trend is expected to continue in the following years, with AI PC gaining a much bigger market share than this

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone: +1 (925) 954-1411