Vision Algorithms for Embedded Vision

Most computer vision algorithms were developed on general-purpose computer systems with software written in a high-level language. Some of the pixel-processing operations (e.g., spatial filtering) have changed very little in the decades since they were first implemented on mainframes. In today’s broader embedded vision implementations, existing high-level algorithms may not fit within system constraints, requiring new innovation to achieve the desired results.

Some of this innovation may involve replacing a general-purpose algorithm with a hardware-optimized equivalent. Given the broad range of processors available for embedded vision, algorithm analysis will likely focus on ways to maximize pixel-level processing within system constraints.
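As a concrete (and deliberately naive) illustration of this kind of pixel-level processing, the sketch below applies a 3x3 box filter to an 8-bit grayscale buffer in plain C++. The function name and buffer layout are illustrative rather than taken from any particular library; inner loops like this are precisely what an embedded implementation re-maps onto SIMD units, GPU kernels, or FPGA pipelines.

    #include <cstdint>
    #include <vector>

    // Minimal 3x3 box filter over an 8-bit grayscale image stored row-major.
    // Border pixels are left unchanged to keep the sketch short.
    std::vector<uint8_t> boxFilter3x3(const std::vector<uint8_t>& src,
                                      int width, int height) {
        std::vector<uint8_t> dst(src);  // copy so borders keep their original values
        for (int y = 1; y < height - 1; ++y) {
            for (int x = 1; x < width - 1; ++x) {
                int sum = 0;
                for (int ky = -1; ky <= 1; ++ky)
                    for (int kx = -1; kx <= 1; ++kx)
                        sum += src[(y + ky) * width + (x + kx)];
                dst[y * width + x] = static_cast<uint8_t>(sum / 9);
            }
        }
        return dst;
    }

Even this simple kernel touches every pixel nine times, which is why a hardware-optimized equivalent (vectorized, tiled, or pipelined in an FPGA) can outperform the straightforward version by a wide margin.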

This section covers both general-purpose operations (e.g., edge detection) and hardware-optimized versions (e.g., parallel adaptive filtering in an FPGA). Many sources exist for general-purpose algorithms. The Embedded Vision Alliance is one of the best industry resources for learning about algorithms that map to specific hardware, since Alliance Members share this information directly with the vision community.

General-purpose computer vision algorithms

Figure 1: Introduction to OpenCV

One of the most popular sources of computer vision algorithms is the OpenCV Library. OpenCV is open source and written primarily in C++ (the original implementation was in C), with bindings available for languages such as Python and Java. For more information, see the Alliance’s interview with OpenCV Foundation President and CEO Gary Bradski, along with other OpenCV-related materials on the Alliance website.
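As a brief, hedged example of typical usage, the sketch below loads an image and runs Canny edge detection with OpenCV’s C++ API; the input and output filenames are placeholders.

    #include <opencv2/opencv.hpp>
    #include <iostream>

    int main() {
        // Load the image as 8-bit grayscale; "input.png" is a placeholder path.
        cv::Mat gray = cv::imread("input.png", cv::IMREAD_GRAYSCALE);
        if (gray.empty()) {
            std::cerr << "Could not read input.png\n";
            return 1;
        }

        // Smooth to suppress noise, then apply Canny with low/high
        // hysteresis thresholds of 50 and 150.
        cv::Mat blurred, edges;
        cv::GaussianBlur(gray, blurred, cv::Size(5, 5), 1.5);
        cv::Canny(blurred, edges, 50, 150);

        cv::imwrite("edges.png", edges);
        return 0;
    }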

Hardware-optimized computer vision algorithms

Several programmable device vendors have created optimized versions of off-the-shelf computer vision libraries. NVIDIA, for example, works closely with the OpenCV community and has created algorithms that are accelerated by GPGPUs. MathWorks provides MATLAB functions/objects and Simulink blocks for many computer vision algorithms within its Vision System Toolbox, while also allowing vendors to create their own libraries of functions optimized for a specific programmable architecture. National Instruments offers its LabVIEW Vision module library. And Xilinx is another example of a vendor with an optimized computer vision library, provided to customers as plug-and-play IP cores for building hardware-accelerated vision algorithms in an FPGA.
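Although none of these vendor libraries is reproduced here, OpenCV’s transparent API gives a simple taste of the hardware-offload idea: substituting cv::UMat for cv::Mat lets the same filtering call dispatch to an OpenCL-capable device (for example, an embedded GPU) when one is present, and fall back to the CPU otherwise. The sketch below assumes a placeholder input file.

    #include <opencv2/opencv.hpp>
    #include <opencv2/core/ocl.hpp>
    #include <iostream>

    int main() {
        // Report whether an OpenCL device is visible to OpenCV and enable its use.
        std::cout << "OpenCL available: "
                  << (cv::ocl::haveOpenCL() ? "yes" : "no") << "\n";
        cv::ocl::setUseOpenCL(true);

        // With cv::UMat, the GaussianBlur call below may run on the OpenCL
        // device; with cv::Mat it would always run on the CPU.
        cv::UMat src, blurred;
        cv::imread("input.png", cv::IMREAD_GRAYSCALE).copyTo(src);
        if (src.empty()) {
            std::cerr << "Could not read input.png\n";
            return 1;
        }
        cv::GaussianBlur(src, blurred, cv::Size(5, 5), 1.5);
        cv::imwrite("blurred.png", blurred);
        return 0;
    }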

Other vision libraries

  • Halcon
  • Matrox Imaging Library (MIL)
  • Cognex VisionPro
  • VXL
  • CImg
  • Filters

Introducing the First AMD 1B Language Models: AMD OLMo

This blog post was originally published at AMD’s website. It is reprinted here with the permission of AMD. In recent years, the rapid development of artificial intelligence technology, especially the progress in large language models (LLMs), has garnered significant attention and discussion. From the emergence of ChatGPT to subsequent models like GPT-4 and Llama, these…

Computer Vision Integration in Robotic Applications: Real-world Insights

This blog post was originally published at Geisel Software’s website. It is reprinted here with the permission of Geisel Software. Imagine a world where computer vision integration allows robots not only to ‘do’ but also to ‘see’ and ‘understand’. Welcome to the Fourth Industrial Revolution (Industry 4.0)! Here, the fusion of artificial intelligence and industrial robotics is sparking…

Introducing Qualcomm IoT Solutions Framework: Making It Easier to Develop and Deploy Solutions

This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. The Qualcomm IoT Solutions Framework represents a comprehensive suite of developer tools, reference blueprints and a robust ecosystem of partners. Qualcomm Technologies, Inc. is known for its vast collection of wireless-related intellectual property and its processor and…

The Global Robotaxi Market Value in 2045 Will be $174B

For more information, visit https://www.idtechex.com/en/research-report/autonomous-vehicles-market-2025-2045-robotaxis-autonomous-cars-sensors/1045. The global robotaxi vehicle market value in 2045 will be US$174 billion, growing with a 20-year CAGR of 37% between 2025 and 2045 and with a market share dominated by leaders from the US and China, such as Google’s Waymo, GM’s Cruise, WeRide, Baidu, and AutoX. IDTechEx’s Autonomous Vehicles Market…

“Improved Data Sampling Techniques for Training Neural Networks,” a Presentation from Karthik Rao Aroor

Independent AI Engineer Karthik Rao Aroor presents the “Improved Data Sampling Techniques for Training Neural Networks” tutorial at the May 2024 Embedded Vision Summit. For classification problems in which there are equal numbers of samples in each class, Aroor proposes and presents a novel mini-batch sampling approach to train neural…

“Embedded Vision Opportunities and Challenges in Retail Checkout,” an Interview with Zebra Technologies

Anatoly Kotlarsky, Distinguished Member of the Technical Staff in R&D at Zebra Technologies, talks with Phil Lapsley, Co-Founder and Vice President of BDTI and Vice President of Business Development at the Edge AI and Vision Alliance, for the “Embedded Vision Opportunities and Challenges in Retail Checkout” interview at the May…

Computer Vision Quarterly Snapshot – Q3 2024

Woodside Capital Partners (WCP) is pleased to share its Computer Vision and Vision AI Market Report Q3 2024, authored by Managing Partner Rudy Burger and Associate Akhilesh Shridar. Hardware startups generally take longer to get a product to market and require more investment than software companies. Hardware has a more complicated distribution and sales structure…

“Cost-efficient, High-quality AI for Consumer-grade Smart Home Cameras,” a Presentation from Wyze

Lin Chen, Chief Scientist at Wyze, presents the “Cost-efficient, High-quality AI for Consumer-grade Smart Home Cameras” tutorial at the May 2024 Embedded Vision Summit. In this talk, Chen explains how Wyze delivers robust visual AI at ultra-low cost for millions of consumer smart cameras, and how his company is rapidly…

“Edge AI Optimization on Rails—Literally,” a Presentation from Wabtec

Matthew Pietrzykowski, Principal Data Scientist at Wabtec, presents the “Edge AI Optimization on Rails—Literally” tutorial at the May 2024 Embedded Vision Summit. In this talk, Pietrzykowski shares highlights from his company’s adventures developing computer vision solutions for the rail transportation industry. He begins with an introduction to the types of…

“Implementing AI/Computer Vision for Corporate Security Surveillance,” a Presentation from VMware

Prasad Saranjame, former Head of Physical Security and Resiliency at VMware, presents the “Implementing AI/Computer Vision for Corporate Security Surveillance” tutorial at the May 2024 Embedded Vision Summit. AI-enabled security cameras offer substantial benefits for corporate security and operational efficiency. However, successful deployment requires thoughtful selection of use cases and…

“Continual Learning thru Sequential, Lightweight Optimization,” a Presentation from Vision Elements

Guy Lavi, Managing Partner at Vision Elements, presents the “Continual, On-the-fly Learning through Sequential, Lightweight Optimization” tutorial at the May 2024 Embedded Vision Summit. In this presentation, Lavi shows how techniques of sequential optimization are applied to enable continual learning during run-time, as new observations flow in. The lightweight nature…

Qualcomm and Mistral AI Partner to Bring New Generative AI Models to Edge Devices

Highlights: Qualcomm announces collaboration with Mistral AI to bring Mistral AI’s models to devices powered by Snapdragon and Qualcomm platforms. Mistral AI’s new state-of-the-art models, Ministral 3B and Ministral 8B, are being optimized to run on devices powered by the new Snapdragon 8 Elite Mobile Platform, Snapdragon Cockpit Elite and Snapdragon Ride Elite, and Snapdragon…
