Vision Algorithms

Vision Algorithms for Embedded Vision

Most computer vision algorithms were developed on general-purpose computer systems, with software written in a high-level language. Some of the pixel-processing operations (e.g., spatial filtering) have changed very little in the decades since they were first implemented on mainframes. In today's broader embedded vision deployments, however, existing high-level algorithms often do not fit within the target system's constraints on cost, power, memory, and real-time performance, requiring new innovation to achieve the desired results.

Some of this innovation may involve replacing a general-purpose algorithm with a hardware-optimized equivalent. Given the broad range of processors available for embedded vision (CPUs, GPUs, DSPs, and FPGAs), algorithm analysis will likely focus on ways to maximize pixel-level processing throughput within those constraints.
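
As a minimal sketch of what such a hardware-friendly restructuring can look like (purely illustrative, not taken from any particular vendor library), the following C++ routine implements a 3x3 smoothing filter using only integer arithmetic and a flat row-major buffer, a form that maps more naturally onto DSPs, vector units, and FPGA pipelines than a floating-point reference implementation:

    #include <cstdint>
    #include <vector>

    // 3x3 smoothing filter using integer-only arithmetic.
    // 'src' and 'dst' are flat, row-major 8-bit grayscale buffers of size width*height.
    void smooth3x3(const std::vector<uint8_t>& src, std::vector<uint8_t>& dst,
                   int width, int height) {
        // Gaussian-like kernel whose weights sum to 16, so normalization is a single shift.
        static const int16_t k[3][3] = {{1, 2, 1}, {2, 4, 2}, {1, 2, 1}};
        dst.assign(src.size(), 0);
        for (int y = 1; y < height - 1; ++y) {
            for (int x = 1; x < width - 1; ++x) {
                int32_t acc = 0;
                for (int ky = -1; ky <= 1; ++ky)
                    for (int kx = -1; kx <= 1; ++kx)
                        acc += k[ky + 1][kx + 1] * src[(y + ky) * width + (x + kx)];
                dst[y * width + x] = static_cast<uint8_t>(acc >> 4);  // divide by 16
            }
        }
    }

Keeping the arithmetic in fixed point and the data in a contiguous buffer is the kind of restructuring that lets the same operation be unrolled, vectorized, or pipelined on hardware without efficient floating-point support.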

This section covers both general-purpose operations (e.g., edge detection) and hardware-optimized versions (e.g., parallel adaptive filtering in an FPGA). Many sources exist for general-purpose algorithms. The Embedded Vision Alliance is one of the best industry resources for learning about algorithms that map to specific hardware, since Alliance Member companies share this information directly with the vision community.

General-purpose computer vision algorithms

Figure 1: Introduction to OpenCV

One of the most popular sources of computer vision algorithms is the OpenCV library. OpenCV is open source and written primarily in C++ (the library's original C API has since been superseded), with bindings available for Python, Java, and other languages. For more information, see the Alliance's interview with OpenCV Foundation President and CEO Gary Bradski, along with other OpenCV-related materials on the Alliance website.
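
As a simple illustration of using OpenCV for a general-purpose operation such as the edge detection mentioned above, the following C++ sketch runs a Canny edge detector on a grayscale image; the file names and threshold values are arbitrary placeholders, not recommendations:

    #include <opencv2/imgcodecs.hpp>
    #include <opencv2/imgproc.hpp>

    int main() {
        // Load an image as 8-bit grayscale (file name is a placeholder).
        cv::Mat gray = cv::imread("input.png", cv::IMREAD_GRAYSCALE);
        if (gray.empty()) return 1;

        // Suppress noise before edge detection with a small Gaussian blur.
        cv::Mat blurred;
        cv::GaussianBlur(gray, blurred, cv::Size(5, 5), 1.5);

        // Canny hysteresis thresholds are application-dependent; 50/150 is only a starting point.
        cv::Mat edges;
        cv::Canny(blurred, edges, 50, 150);

        // Write the binary edge map to disk.
        cv::imwrite("edges.png", edges);
        return 0;
    }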

Hardware-optimized computer vision algorithms

Several programmable device vendors have created optimized versions of off-the-shelf computer vision libraries. NVIDIA, for example, works closely with the OpenCV community and has created algorithms that are accelerated by its GPUs. MathWorks provides MATLAB functions/objects and Simulink blocks for many computer vision algorithms within its Vision System Toolbox, while also allowing vendors to create their own libraries of functions optimized for a specific programmable architecture. National Instruments offers its LabVIEW Vision Development Module. And Xilinx is another example of a vendor with an optimized computer vision library, which it provides to customers as plug-and-play IP cores for creating hardware-accelerated vision algorithms in an FPGA.
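
As one example of what a hardware-optimized library call can look like in practice, the sketch below assumes an OpenCV build with its CUDA modules enabled and an NVIDIA GPU present, and offloads the same Canny operation shown earlier to the device; it is a minimal illustration rather than a vendor-endorsed recipe:

    #include <opencv2/core/cuda.hpp>
    #include <opencv2/cudaimgproc.hpp>
    #include <opencv2/imgcodecs.hpp>

    int main() {
        // Load a grayscale image on the host (file name is a placeholder).
        cv::Mat host = cv::imread("input.png", cv::IMREAD_GRAYSCALE);
        if (host.empty()) return 1;

        // Copy the image into GPU memory.
        cv::cuda::GpuMat d_src, d_edges;
        d_src.upload(host);

        // Create a reusable CUDA Canny detector and run it on the device.
        cv::Ptr<cv::cuda::CannyEdgeDetector> canny =
            cv::cuda::createCannyEdgeDetector(50.0, 150.0);
        canny->detect(d_src, d_edges);

        // Copy the edge map back to host memory and save it.
        cv::Mat edges;
        d_edges.download(edges);
        cv::imwrite("edges_gpu.png", edges);
        return 0;
    }

For a single small operation the host/device transfers often dominate, so in practice several GPU operations are chained on the GpuMat before the result is downloaded.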

Other vision libraries

  • Halcon
  • Matrox Imaging Library (MIL)
  • Cognex VisionPro
  • VXL
  • CImg
  • Filters

Nota AI and Advantech Sign Strategic MOU to Pioneer On-Device GenAI Market

Nota AI and Advantech sign MOU for edge AI collaboration. Partnership focuses on generative AI at the edge. Joint marketing and sales activities planned to expand market share. SEOUL, South Korea, June 7, 2024 /PRNewswire/ — AI model optimization technology company Nota AI® (Nota Inc.) has signed a strategic Memorandum of Understanding (MOU) with global industrial AIoT…

BrainChip Introduces TENNs-PLEIADES in New White Paper

Laguna Hills, Calif. – June 5, 2024 – BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of ultra-low power, fully digital, event-based, neuromorphic AI, today released a white paper detailing the company’s TENNs-PLEIADES (PoLynomial Expansion In Adaptive Distributed Event-based Systems), a method of parameterization of temporal kernels that reduces…

“OpenCV for High-performance, Low-power Vision Applications on Snapdragon,” a Presentation from Qualcomm

Xin Zhong, Computer Vision Product Manager at Qualcomm Technologies, presents the “OpenCV for High-performance, Low-power Vision Applications on Snapdragon” tutorial at the May 2024 Embedded Vision Summit. For decades, the OpenCV software library has been popular for developing computer vision applications. However, developers have found it challenging to create efficient…

“Deploying Large Models on the Edge: Success Stories and Challenges,” a Presentation from Qualcomm

Vinesh Sukumar, Senior Director of Product Management at Qualcomm Technologies, presents the “Deploying Large Models on the Edge: Success Stories and Challenges” tutorial at the May 2024 Embedded Vision Summit. In this talk, Dr. Sukumar explains and demonstrates how Qualcomm has been successful in deploying large generative AI and multimodal…

Intel AI Platforms Accelerate Microsoft Phi-3 GenAI Models

Intel, in collaboration with Microsoft, enables support for several Phi-3 models across its data center platforms, AI PCs and edge solutions. What’s New: Intel has validated and optimized its AI product portfolio across client, edge and data center for several of Microsoft’s Phi-3 family of open models. The Phi-3 family of small, open models can run on…

“Scaling Vision-based Edge AI Solutions: From Prototype to Global Deployment,” a Presentation from Network Optix

Maurits Kaptein, Chief Data Scientist at Network Optix and Professor at the University of Eindhoven, presents the “Scaling Vision-based Edge AI Solutions: From Prototype to Global Deployment” tutorial at the May 2024 Embedded Vision Summit. The Embedded Vision Summit brings together innovators in silicon, devices, software and applications and empowers…

“What’s Next in On-device Generative AI,” a Presentation from Qualcomm

Jilei Hou, Vice President of Engineering and Head of AI Research at Qualcomm Technologies, presents the “What’s Next in On-device Generative AI” tutorial at the May 2024 Embedded Vision Summit. The generative AI era has begun! Large multimodal models are bringing the power of language understanding to machine perception, and transformer models are expanding to…

Technologies Driving Enhanced On-device Generative AI Experiences: LoRA

This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. Utilize low-rank adaptation (LoRA) to provide customized experiences across use cases. Enhancing contextualization and customization has always been a driving force in the realm of user experience. While generative artificial intelligence (AI) has already demonstrated its transformative…

NVIDIA DeepStream 7.0 Milestone Release for Next-gen Vision AI Development

This article was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. NVIDIA DeepStream is a powerful SDK that unlocks GPU-accelerated building blocks to build end-to-end vision AI pipelines. With more than 40 plugins available off the shelf, you can deploy fully optimized pipelines with cutting-edge AI inference, object tracking, and seamless…

Technologies Driving Enhanced On-device Generative AI Experiences: Multimodal Generative AI

This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. Leverage additional modalities in generative AI models to enable necessary advancements for contextualization and customization across use cases. A constant desire in user experience is improved contextualization and customization. For example, consumers want devices to automatically use…
