Vision Algorithms for Embedded Vision
Most computer vision algorithms were developed on general-purpose computer systems, with software written in a high-level language. Some pixel-processing operations (e.g., spatial filtering) have changed very little in the decades since they were first implemented on mainframes. With today’s broader embedded vision deployments, existing high-level algorithms often do not fit within the system constraints, requiring new innovation to achieve the desired results.
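To make the kind of pixel-processing operation mentioned above concrete, here is a minimal sketch of spatial filtering: a 3×3 box (mean) filter written in plain Python. The function name and image representation (a list of rows of grayscale intensities) are illustrative choices, not from any particular library.

```python
def box_filter_3x3(img):
    """Apply a 3x3 box (mean) filter to a grayscale image.

    img: list of rows of pixel intensities (0-255).
    Border pixels are left unchanged for simplicity.
    """
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]  # copy input; borders stay as-is
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            total = 0
            # Sum the 3x3 neighborhood centered on (x, y)
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    total += img[y + dy][x + dx]
            out[y][x] = total // 9  # integer mean of 9 pixels
    return out
```

The per-pixel neighborhood loop is essentially the same computation whether it runs on a mainframe, a desktop CPU, or an embedded device; what changes across platforms is how that inner loop is scheduled and parallelized.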
Some of this innovation may involve replacing a general-purpose algorithm with a hardware-optimized equivalent. Given the broad range of processors available for embedded vision, algorithm analysis will likely focus on ways to maximize pixel-level processing throughput within system constraints.
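As a toy illustration of this kind of substitution, the sketch below shows a floating-point 1-2-1 smoothing kernel alongside a fixed-point version built entirely from shifts and adds, the style of arithmetic that maps efficiently onto FPGAs and DSPs. Both function names are hypothetical; this is a sketch of the general idea, not any vendor's implementation.

```python
def smooth_float(p0, p1, p2):
    # General-purpose form: floating-point 1-2-1 smoothing kernel.
    return 0.25 * p0 + 0.5 * p1 + 0.25 * p2

def smooth_fixed(p0, p1, p2):
    # Hardware-friendly form: the same kernel expressed with
    # integer shifts and adds (no multipliers, no floating point).
    return (p0 + (p1 << 1) + p2) >> 2
```

For integer pixel values the two forms agree up to truncation, but the fixed-point version trades a little precision for a dramatically cheaper hardware footprint.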
This section refers to both general-purpose operations (e.g., edge detection) and hardware-optimized versions (e.g., parallel adaptive filtering in an FPGA). Many sources exist for general-purpose algorithms. The Embedded Vision Alliance is one of the best industry resources for learning about algorithms that map to specific hardware, since Alliance Members share this information directly with the vision community.
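Edge detection, cited above as a representative general-purpose operation, can be sketched in a few lines. The example below computes a Sobel gradient magnitude in plain Python using the common |Gx| + |Gy| approximation; the function name and image layout are illustrative assumptions, not from a specific library.

```python
def sobel_magnitude(img):
    """Approximate edge strength with 3x3 Sobel kernels.

    img: list of rows of grayscale intensities.
    Returns |Gx| + |Gy| per pixel (a common fast approximation
    of the true gradient magnitude). Borders are left at zero.
    """
    gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient
    gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = gy = 0
            for dy in range(3):
                for dx in range(3):
                    p = img[y + dy - 1][x + dx - 1]
                    gx += gx_k[dy][dx] * p
                    gy += gy_k[dy][dx] * p
            out[y][x] = abs(gx) + abs(gy)
    return out
```

A hardware-optimized version of the same operation would typically evaluate many of these neighborhoods in parallel (e.g., one pipeline stage per image row in an FPGA), which is precisely the kind of mapping the Alliance resources mentioned above cover.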
General-purpose computer vision algorithms
![Introduction To OpenCV Figure 1](https://www.edge-ai-vision.com/wp-content/uploads/2012/01/OpenCVIntroductionFigure1-1024x770.jpg)
One of the most popular sources of computer vision algorithms is the OpenCV Library. OpenCV is open source and written primarily in C++ (earlier versions were implemented in C), with bindings for languages such as Python and Java. For more information, see the Alliance’s interview with OpenCV Foundation President and CEO Gary Bradski, along with other OpenCV-related materials on the Alliance website.
Hardware-optimized computer vision algorithms
Several programmable device vendors have created optimized versions of off-the-shelf computer vision libraries. NVIDIA works closely with the OpenCV community, for example, and has created algorithms that are accelerated by GPGPUs. MathWorks provides MATLAB functions/objects and Simulink blocks for many computer vision algorithms within its Vision System Toolbox, while also allowing vendors to create their own libraries of functions optimized for a specific programmable architecture. National Instruments offers its LabVIEW Vision module library. And Xilinx is another example of a vendor with an optimized computer vision library, provided to customers as plug-and-play IP cores for building hardware-accelerated vision algorithms in an FPGA.
Other vision libraries
- Halcon
- Matrox Imaging Library (MIL)
- Cognex VisionPro
- VXL
- CImg