Processors for Embedded Vision
This technology category includes any device that executes vision algorithms or vision system control software. The following diagram shows a typical computer vision pipeline; processors are often optimized for the compute-intensive portions of the software workload.
![ev pipeline](https://www.edge-ai-vision.com/wp-content/uploads/2011/02/ev-pipeline-1024x99.jpg)
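As a minimal illustration of such a pipeline, the sketch below chains a preprocessing stage, a feature-extraction stage, and a decision stage. The stage contents and the threshold are hypothetical placeholders; in a real system, the compute-intensive early stages are the ones mapped onto specialized processors.

```python
# A minimal sketch of a typical vision pipeline: each stage transforms
# the frame and passes its result downstream. Stage internals are
# placeholders; real systems offload the heavy stages to accelerators.

def preprocess(frame):
    # e.g. conditioning: clamp pixel values to the 8-bit range
    return [[min(max(p, 0), 255) for p in row] for row in frame]

def extract_features(frame):
    # e.g. a crude edge measure: horizontal differences along each row
    return [[abs(row[i + 1] - row[i]) for i in range(len(row) - 1)]
            for row in frame]

def classify(features, threshold=50):
    # decision stage: does any feature exceed the threshold?
    return any(f > threshold for row in features for f in row)

def run_pipeline(frame):
    return classify(extract_features(preprocess(frame)))

frame = [[0, 0, 200, 0],
         [0, 0, 0, 0]]
print(run_pipeline(frame))  # a sharp transition is present -> True
```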
The following examples represent distinctly different types of processor architectures for embedded vision, and each has advantages and trade-offs that depend on the workload. For this reason, many devices combine multiple processor types into a heterogeneous computing environment, often integrated into a single semiconductor component. In addition, a processor can be accelerated by dedicated hardware that improves performance on computer vision algorithms.
General-purpose CPUs
While computer vision algorithms can run on most general-purpose CPUs, desktop processors may not meet the size, power, and cost constraints of some embedded systems. However, x86 processors and system boards can leverage the PC infrastructure for low-cost hardware and broadly supported software development tools. Several Alliance Member companies also offer devices that integrate a RISC CPU core. A general-purpose CPU is best suited for heuristics, complex decision-making, network access, user interfaces, storage management, and overall control, and may be paired with a vision-specialized device for better performance on pixel-level processing.
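That division of labor can be sketched as follows. The accelerator here is mocked as a plain function, and the brightness heuristic and threshold are hypothetical; the point is only that bulk per-pixel work is offloaded while the CPU keeps the control flow and decision-making.

```python
# Division-of-labor sketch: pixel-level work goes to a specialized
# device (mocked here as a plain function), while the general-purpose
# CPU retains control flow, heuristics, and mode decisions.

def accelerator_mean_brightness(frame):
    # stand-in for a vision coprocessor: a bulk per-pixel reduction
    pixels = [p for row in frame for p in row]
    return sum(pixels) / len(pixels)

def cpu_control_loop(frames, night_threshold=40):
    # CPU-side heuristic: decide, per frame, whether to switch modes
    decisions = []
    for frame in frames:
        brightness = accelerator_mean_brightness(frame)  # offloaded work
        decisions.append("night-mode" if brightness < night_threshold
                         else "day-mode")
    return decisions

frames = [[[10, 20], [30, 20]], [[200, 180], [220, 240]]]
print(cpu_control_loop(frames))  # ['night-mode', 'day-mode']
```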
Graphics Processing Units
High-performance GPUs deliver massive parallel computing throughput, and graphics processors can be used to accelerate the portions of the computer vision pipeline that perform parallel processing on pixel data. While general-purpose GPU (GPGPU) computing has primarily been used for high-performance computing (HPC), even mobile graphics processors and integrated graphics cores are gaining GPGPU capability, meeting the power constraints of a wider range of vision applications. In designs that require 3D processing in addition to embedded vision, a GPU will already be part of the system and can assist a general-purpose CPU with many computer vision algorithms. Many examples exist of x86-based embedded systems with discrete GPGPUs.
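The kind of work GPUs excel at can be sketched with a per-pixel "kernel" applied independently to every pixel. The sequential loop below is only a stand-in: on a GPU the same kernel body would execute simultaneously across thousands of pixels. The luma weights are the standard ITU-R BT.601 coefficients.

```python
# GPUs accelerate per-pixel operations by running one small "kernel"
# across all pixels in parallel. This sketch shows such a kernel
# (RGB -> luma conversion); the Python loop stands in for the
# massively parallel execution a GPU would provide.

def luma_kernel(rgb):
    # per-pixel work item: ITU-R BT.601 luma weights
    r, g, b = rgb
    return int(0.299 * r + 0.587 * g + 0.114 * b)

def to_grayscale(image):
    # embarrassingly parallel map over independent pixels
    return [[luma_kernel(px) for px in row] for row in image]

image = [[(255, 0, 0), (0, 255, 0)],
         [(0, 0, 255), (255, 255, 255)]]
print(to_grayscale(image))
```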
Digital Signal Processors
DSPs are very efficient at processing streaming data, since their bus and memory architectures are optimized to handle high-speed data as it traverses the system. This makes DSPs an excellent fit for processing image pixel data as it streams from a sensor. Many DSPs for vision have been enhanced with coprocessors optimized for handling video inputs and accelerating computer vision algorithms. The specialized nature of DSPs makes them inefficient at general-purpose software workloads, so DSPs are usually paired with a RISC processor to create a heterogeneous computing environment that offers the best of both worlds.
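The streaming workload DSPs are built around is the multiply-accumulate (MAC) loop. As a hedged sketch (the tap values are an arbitrary smoothing example, not drawn from any particular device), a 3-tap FIR filter run over one scanline of pixels looks like this; the inner `sum()` is exactly the MAC chain that DSP datapaths execute in hardware:

```python
# DSPs excel at multiply-accumulate (MAC) loops over streaming data.
# This sketch runs a short FIR filter over one scanline of pixels, the
# kind of operation a vision DSP applies as data streams from a sensor.

def fir_scanline(samples, taps):
    n = len(taps)
    # each output is one MAC chain: sum of tap * sample products
    return [sum(taps[k] * samples[i + k] for k in range(n))
            for i in range(len(samples) - n + 1)]

# a simple 3-tap smoothing (box) filter over one scanline
scanline = [10, 10, 40, 10, 10]
taps = [1 / 3, 1 / 3, 1 / 3]
print(fir_scanline(scanline, taps))  # ~[20.0, 20.0, 20.0]
```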
Field Programmable Gate Arrays (FPGAs)
Instead of incurring the high cost and long lead times of a custom ASIC to accelerate computer vision systems, designers can use an FPGA as a reprogrammable hardware-acceleration solution. With millions of programmable gates, hundreds of I/O pins, and compute performance in the trillions of multiply-accumulates per second (tera-MACs), high-end FPGAs offer the potential for the highest performance in a vision system. Unlike a CPU, which has to time-slice or multi-thread tasks as they compete for compute resources, an FPGA can simultaneously accelerate multiple portions of a computer vision pipeline. Because the parallel nature of FPGAs offers so much advantage for accelerating computer vision, many of the algorithms are available as optimized libraries from semiconductor vendors. These computer vision libraries also include preconfigured interface blocks for connecting to other vision devices, such as IP cameras.
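Back-of-the-envelope arithmetic shows why tera-MAC-class throughput matters. The resolution, frame rate, and filter count below are illustrative assumptions, not figures for any specific design:

```python
# Even one small convolution at HD video rates consumes billions of
# MACs per second, which is why FPGA fabrics with massive parallel MAC
# capacity are attractive for vision pipelines.

width, height, fps = 1920, 1080, 60      # assumed HD video stream
kernel_macs = 3 * 3                      # MACs per pixel for one 3x3 filter

per_filter = width * height * kernel_macs * fps
print(f"one filter:  {per_filter / 1e9:.2f} GMAC/s")   # ~1.12 GMAC/s

filters = 64                             # a modest CNN layer, for scale
layer = per_filter * filters
print(f"64 filters: {layer / 1e12:.3f} TMAC/s")        # ~0.072 TMAC/s
```

A handful of such layers, or higher resolutions, already pushes the budget into tera-MAC territory.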
Vision-Specific Processors and Cores
Application-specific standard products (ASSPs) are specialized, highly integrated chips tailored for specific applications or application sets. ASSPs may incorporate a CPU, or use a separate CPU chip. By virtue of their specialization, ASSPs for vision processing typically deliver superior cost- and energy-efficiency compared with other types of processing solutions. Among other techniques, ASSPs deliver this efficiency through the use of specialized coprocessors and accelerators. And, because ASSPs are by definition focused on a specific application, they are usually provided with extensive associated software. This same specialization, however, means that an ASSP designed for vision is typically not suitable for other applications. ASSPs’ unique architectures can also make programming them more difficult than with other kinds of processors; some ASSPs are not user-programmable.
![](https://www.edge-ai-vision.com/wp-content/uploads/2024/06/ARM_CSS_KV_silicon-wafer_v1-1400x787-1-300x169.jpg)
Arm CSS for Client: The Compute Platform for AI-powered Consumer Experiences
This blog post was originally published at Arm’s website. It is reprinted here with the permission of Arm. New Arm Compute Subsystems for Client deliver a step-change in performance, efficiency, and scalability, with production-ready physical implementations on the 3nm process. AI is transforming consumer devices, and revolutionizing productivity, creativity and entertainment-based experiences. This is leading
![](https://www.edge-ai-vision.com/wp-content/uploads/2024/05/Hou_2024_GeneralSessionSpeakerCard_Hou-300x158.jpg)
“What’s Next in On-device Generative AI,” a Presentation from Qualcomm
Jilei Hou, Vice President of Engineering and Head of AI Research at Qualcomm Technologies, presents the “What’s Next in On-device Generative AI” tutorial at the May 2024 Embedded Vision Summit. The generative AI era has begun! Large multimodal models are bringing the power of language understanding to machine perception, and transformer models are expanding to
![](https://www.edge-ai-vision.com/wp-content/uploads/2024/05/V2-2024-04-16_Omdia-Webinar-Presentation_V4.2_CWedits-300x169.png)
Generative AI at the Edge – Key Takeaways from Omdia’s White Paper and Our Joint Webinar
This blog post was originally published at Ambarella’s website. It is reprinted here with the permission of Ambarella. Ambarella recently partnered with Omdia to commission an independent white paper on the future of Generative AI at the edge, by their Principal Analyst for Advanced AI Computing, Alexander Harrowell. It combines his insights and Omdia’s data
![](https://www.edge-ai-vision.com/wp-content/uploads/2021/04/logoheader_eys3d_-300x169.png)
eYs3D Microelectronics Unveils the Multi-camera Sensing System, SenseLink, Providing a Versatile Machine Vision Sensing Solution for Smart Applications
May 30, 2024 – eYs3D Microelectronics, a subsidiary of Etron Tech (TPEx: 5351) with years of expertise in the fields of 3D sensing and computer vision, has recently launched SenseLink, a multi-camera sensing system chip designed to enhance visual AI sensing capabilities. This technology utilizes advanced sensor fusion techniques to integrate data from
![](https://www.edge-ai-vision.com/wp-content/uploads/2024/05/Computer_vision_cover-300x135.jpg)
Computer Vision Market to Grow by 81% and Hit a $47 Billion Value by 2030
After a massive 30% drop in 2022, the computer vision market has picked up the pace of growth, driven by continuous improvements in AI and machine learning and the increasing integration of computer vision across sectors. According to data presented by AltIndex.com, the global computer vision market is expected to grow by 17% and hit
![](https://www.edge-ai-vision.com/wp-content/uploads/2024/05/LoRA-technologies-driving-enhanced-on-device-generative-ai-experiences-300x200.jpg)
Technologies Driving Enhanced On-device Generative AI Experiences: LoRA
This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. Utilize low-rank adaptation (LoRA) to provide customized experiences across use cases Enhancing contextualization and customization has always been a driving force in the realm of user experience. While generative artificial intelligence (AI) has already demonstrated its transformative
![](https://www.edge-ai-vision.com/wp-content/uploads/2024/05/embedded-deepstream-kv-featured-300x169.png)
NVIDIA DeepStream 7.0 Milestone Release for Next-gen Vision AI Development
This article was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. NVIDIA DeepStream is a powerful SDK that unlocks GPU-accelerated building blocks to build end-to-end vision AI pipelines. With more than 40 plugins available off the shelf, you can deploy fully optimized pipelines with cutting-edge AI inference, object tracking, and seamless
![](https://www.edge-ai-vision.com/wp-content/uploads/2024/05/Multimodal-technologies-driving-enhanced-on-device-generative-ai-experiences-300x200.jpg)
Technologies Driving Enhanced On-device Generative AI Experiences: Multimodal Generative AI
This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. Leverage additional modalities in generative AI models to enable necessary advancements for contextualization and customization across use cases A constant desire in user experience is improved contextualization and customization. For example, consumers want devices to automatically use
![](https://www.edge-ai-vision.com/wp-content/uploads/2024/05/400x268_2024vispydark-300x201.png)
Edge AI and Vision Alliance™ Announces 2024 Edge AI and Vision Product of the Year™ and AI Innovation Award™ Winners
Awards Celebrate Innovation and Achievement in Computer Vision and Edge AI SANTA CLARA, CALIFORNIA, UNITED STATES OF AMERICA, May 23, 2024 /EINPresswire.com/ — The Edge AI and Vision Alliance today announced the 2024 winners of the Edge AI and Vision Product of the Year Awards and the AI Innovation Awards. The Edge AI and Vision
![](https://www.edge-ai-vision.com/wp-content/uploads/2024/05/SnapdragonDevKitforWindows-300x169.png)
Qualcomm Accelerates Development for Copilot+ PCs with Snapdragon Dev Kit for Windows
Highlights: The Snapdragon Dev Kit for Windows, powered by Snapdragon X Elite, is a compact-form-factor PC designed for Windows developers to take advantage of the next-gen AI PC capabilities of Snapdragon. It is purpose-built with the configurability and programmability developers need to create, debug, and test apps and experiences for the many upcoming laptops based
![](https://www.edge-ai-vision.com/wp-content/uploads/2024/05/QualcommAIHubforSnapdragonXSeriesPlatformsImage-300x169.png)
Qualcomm AI Hub Expands to On-device AI Apps for Snapdragon-powered PCs
Highlights: Qualcomm AI Hub expands to support Snapdragon X Series Platforms, empowering developers to easily take advantage of the best-in-class CPU and the world’s fastest NPU for laptops, and create responsive and power-efficient on-device generative AI applications for next-gen Windows PCs. Developers can now optimize their own models using the Qualcomm AI Hub—adding flexibility and
![](https://www.edge-ai-vision.com/wp-content/uploads/2024/05/Copilot600-300x190.png)
Snapdragon X Series is the Exclusive Platform to Power the Next Generation of Windows PCs with Copilot+ Today
Highlights: Snapdragon X Elite and Snapdragon X Plus are powering the launch of a new category of devices delivering Microsoft Copilot+ PC experiences. PCs with Snapdragon X Elite and X Plus deliver multiple days of battery life and unparalleled performance plus efficiency to accelerate productivity and creativity with unique AI experiences powered by the world’s fastest
![](https://www.edge-ai-vision.com/wp-content/uploads/2024/02/logoheader_sima-technologies-300x169.png)
SiMa.ai, Lanner, and AWL Collaborate to Accelerate Smart Retail at the Edge
Edge AI Platform Combines Hardware with Machine Learning Software and Video Analytics May 22, 2024 09:00 AM Eastern Daylight Time – SAN JOSE, Calif.–(BUSINESS WIRE)–SiMa.ai, the software-centric embedded edge machine learning system-on-chip company, today announced a collaboration with Lanner, a leading provider of industrial computing appliances, and AWL, an AI software company specializing in
![](https://www.edge-ai-vision.com/wp-content/uploads/2024/05/logoheader_ambarella_2024-300x169.png)
2024 Edge AI and Vision Product of the Year Award Winner Showcase: Ambarella (Edge AI Software and Algorithms)
Ambarella’s central 4D imaging radar architecture is the 2024 Edge AI and Vision Product of the Year Award Winner in the Edge AI Software and Algorithms category. It is the first centralized 4D imaging radar architecture that allows both central processing of raw radar data and deep low-level fusion with other sensor inputs—including cameras, lidar
![](https://www.edge-ai-vision.com/wp-content/uploads/2020/01/logoheader_qualcomm-300x168.jpg)
2024 Edge AI and Vision Product of the Year Award Winner Showcase: Qualcomm (Edge AI Processors)
Qualcomm’s Snapdragon X Elite Platform is the 2024 Edge AI and Vision Product of the Year Award Winner in the Edge AI Processors category. The Snapdragon X Elite is the first Snapdragon based on the new Qualcomm Oryon CPU architecture, which outperforms every other laptop CPU in its class. The Snapdragon X Elite’s heterogeneous AI
![](https://www.edge-ai-vision.com/wp-content/uploads/2020/01/logoheader_lattice-300x168.jpg)
Lattice Introduces Advanced 3D Sensor Fusion Reference Design for Autonomous Applications
HILLSBORO, Ore. – May 22, 2024 – Lattice Semiconductor (NASDAQ: LSCC), the low power programmable leader, today announced a new 3D sensor fusion reference design to accelerate advanced autonomous application development. Combining a low power, low latency, deterministic Lattice Avant™-E FPGA with Lumotive’s Light Control Metasurface (LCM™) programmable optical beamforming technology, the reference design enables
![](https://www.edge-ai-vision.com/wp-content/uploads/2024/05/firefox-rtx-video-nv-blog-1280x680-1-300x159.jpg)
Fire It Up: Mozilla Firefox Adds Support for AI-powered NVIDIA RTX Video
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. The popular open-source browser is the latest to incorporate AI upscaling and high-dynamic range for NVIDIA RTX GPUs. Editor’s note: This post is part of the AI Decoded series, which demystifies AI by making the technology more
![](https://www.edge-ai-vision.com/wp-content/uploads/2024/05/Ambarella_CV72AX-CV75AX_Press-Image_Final-300x214.png)
Ambarella’s Next-Gen AI SoCs for Fleet Dash Cams and Vehicle Gateways Enable Vision Language Models and Transformer Networks Without Fan Cooling
Two New 5nm SoCs Provide Industry-Leading AI Performance Per Watt, Uniquely Allowing Small Form Factor, Single Boxes With Vision Transformers and VLM Visual Analysis SANTA CLARA, Calif., May 21, 2024 — Ambarella, Inc. (NASDAQ: AMBA), an edge AI semiconductor company, today announced during AutoSens USA, the latest generation of its AI systems-on-chip (SoCs) for in-vehicle
![](https://www.edge-ai-vision.com/wp-content/uploads/2024/05/techinsights-on-automotive-300x225.jpg)
Free Webinar Explores Processing Solutions for ADAS and Autonomous Vehicles
On July 24, 2024 at 9 am PT (noon ET), Ian Riches, Vice President of the Global Automotive Practice at TechInsights, will present the free one-hour webinar “Who is Winning the Battle for ADAS and Autonomous Vehicle Processing, and How Large is the Prize?,” organized by the Edge AI and Vision Alliance. Here’s the description,
![](https://www.edge-ai-vision.com/wp-content/uploads/2024/05/concept-nuage-ai-cerveau-300x225.jpg)
Neuromorphic Computing, Memory and Sensing: Towards Exponential Growth
This market research report was originally published at the Yole Group’s website. It is reprinted here with the permission of the Yole Group. The neuromorphic market is poised for expansion from smartphones to encompass opportunities in data centers, entertainment, and automotive sectors. The combined neuromorphic sensing and computing markets are anticipated to generate from US$28