Processors for Embedded Vision

This technology category includes any device that executes vision algorithms or vision system control software. The following diagram shows a typical computer vision pipeline; processors are often optimized for the compute-intensive portions of the software workload.

[Figure: typical embedded vision processing pipeline]
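As a rough illustration of such a pipeline, the stages can be modeled as a chain of functions. The stage names and the toy 2x2 "frame" below are illustrative only, not a fixed standard; real pipelines vary by application.

```python
# A minimal sketch of a typical embedded vision pipeline.
# Stage names and the toy 2x2 "frame" are illustrative only.

def preprocess(frame):
    # e.g., noise reduction / normalization (here: scale 0..255 to 0..1)
    return [[px / 255.0 for px in row] for row in frame]

def extract_features(frame):
    # e.g., edges or gradients (here: mean intensity as a stand-in)
    pixels = [px for row in frame for px in row]
    return {"mean": sum(pixels) / len(pixels)}

def classify(features):
    # e.g., a detector or neural network (here: a toy threshold)
    return "bright" if features["mean"] > 0.5 else "dark"

def run_pipeline(frame):
    return classify(extract_features(preprocess(frame)))

result = run_pipeline([[200, 220], [180, 240]])  # "bright"
```

The compute-intensive portions a specialized processor would accelerate are the per-pixel stages (preprocessing and feature extraction); the final decision logic is typically light enough for a general-purpose CPU.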

The following examples represent distinctly different types of processor architectures for embedded vision, and each has advantages and trade-offs that depend on the workload. For this reason, many devices combine multiple processor types into a heterogeneous computing environment, often integrated into a single semiconductor component. In addition, a processor can be accelerated by dedicated hardware that improves performance on computer vision algorithms.

General-purpose CPUs

While computer vision algorithms can run on most general-purpose CPUs, desktop processors may not meet the design constraints of some systems. However, x86 processors and system boards can leverage the PC infrastructure for low-cost hardware and broadly supported software development tools. Several Alliance Member companies also offer devices that integrate a RISC CPU core. A general-purpose CPU is best suited for heuristics, complex decision-making, network access, user interfaces, storage management, and overall control. A general-purpose CPU may be paired with a vision-specialized device for better performance on pixel-level processing.
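The division of labor described above can be sketched as follows. Here, `accelerator_detect` is a hypothetical stand-in for pixel-level work that would be offloaded to a vision-specialized device; the CPU keeps the heuristic, decision-making role.

```python
# Sketch: the general-purpose CPU runs control logic and heuristics,
# while pixel-level processing is delegated to a (here, simulated)
# vision-specialized device. `accelerator_detect` is a hypothetical
# stand-in, not a real device API.

def accelerator_detect(frame):
    # Pixel-level work that a DSP/GPU/FPGA would accelerate:
    # report pixels brighter than a fixed threshold as "detections".
    return [px for px in frame if px > 128]

def control_loop(frames, alarm_threshold=3):
    events = []
    for i, frame in enumerate(frames):
        detections = accelerator_detect(frame)
        # Heuristic decision-making stays on the CPU.
        if len(detections) >= alarm_threshold:
            events.append(("alarm", i))
    return events

events = control_loop([[10, 200, 220, 130], [5, 6, 7, 8]])
# First frame has three bright pixels, so it raises an alarm.
```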

Graphics Processing Units

High-performance GPUs deliver massive parallel computing potential, and graphics processors can be used to accelerate the portions of the computer vision pipeline that perform parallel processing on pixel data. While general-purpose GPU (GPGPU) computing has primarily been used for high-performance computing (HPC), even mobile graphics processors and integrated graphics cores are gaining GPGPU capability, meeting the power constraints of a wider range of vision applications. In designs that require 3D processing in addition to embedded vision, a GPU will already be part of the system and can assist a general-purpose CPU with many computer vision algorithms. Many examples exist of x86-based embedded systems with discrete GPGPUs.
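The programming model behind GPGPU acceleration is a small "kernel" function applied independently to every pixel. The CPU-side sketch below mimics that model with `map()`; on a real GPU (e.g., via CUDA or OpenCL), each per-pixel call would instead run on one of thousands of parallel threads.

```python
# GPUs apply one small "kernel" function independently to every pixel;
# this CPU-side sketch mimics that data-parallel model with map().
# On a real GPGPU, each kernel invocation would be a parallel thread.

def brighten_kernel(px, gain=2):
    # Per-pixel work with no dependence on neighboring pixels --
    # this independence is what makes the operation massively parallel.
    return min(255, px * gain)

frame = [10, 100, 150, 200]
out = list(map(brighten_kernel, frame))  # [20, 200, 255, 255]
```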

Digital Signal Processors

DSPs are very efficient at processing streaming data, since their bus and memory architectures are optimized to handle high-speed data as it traverses the system. This architecture makes DSPs an excellent solution for processing image pixel data as it streams from a sensor source. Many DSPs for vision have been enhanced with coprocessors that are optimized for processing video inputs and accelerating computer vision algorithms. The specialized nature of DSPs makes these devices inefficient for general-purpose software workloads, so DSPs are usually paired with a RISC processor to create a heterogeneous computing environment that offers the best of both worlds.
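The core operation a DSP accelerates on streaming data is the multiply-accumulate (MAC). As a sketch, the FIR filter below applies a simple (unnormalized) [1, 2, 1] smoothing kernel to a sample stream; the inner MAC loop is exactly what a DSP executes in dedicated hardware, one sample per cycle.

```python
# DSPs excel at multiply-accumulate (MAC) operations on streaming data.
# This sketch applies a 3-tap FIR filter to a sample stream -- the kind
# of inner loop a DSP runs in hardware MAC units.

def fir_filter(stream, taps):
    out = []
    history = [0] * len(taps)  # delay line (shift register)
    for sample in stream:
        history = [sample] + history[:-1]  # shift in the new sample
        # The MAC loop: one multiply-accumulate per tap, per sample.
        acc = sum(h * t for h, t in zip(history, taps))
        out.append(acc)
    return out

# [1, 2, 1] is a simple (unnormalized) smoothing kernel.
smoothed = fir_filter([1, 2, 3, 4], taps=[1, 2, 1])  # [1, 4, 8, 12]
```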

Field Programmable Gate Arrays (FPGAs)

Instead of incurring the high cost and long lead times of a custom ASIC to accelerate computer vision systems, designers can use an FPGA as a reprogrammable hardware-acceleration solution. With millions of programmable gates, hundreds of I/O pins, and compute performance in the trillions of multiply-accumulates per second (tera-MACs), high-end FPGAs offer the potential for the highest performance in a vision system. Unlike a CPU, which has to time-slice or multi-thread tasks as they compete for compute resources, an FPGA can simultaneously accelerate multiple portions of a computer vision pipeline. Because the parallel nature of FPGAs is such an advantage for accelerating computer vision, many of the algorithms are available as optimized libraries from semiconductor vendors. These computer vision libraries also include preconfigured interface blocks for connecting to other vision devices, such as IP cameras.
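The contrast with a time-slicing CPU can be made concrete. In the sketch below, one "clock tick" advances a three-stage hardware pipeline in which all stages compute in the same cycle, each on a different datum; the stage operations (offset, gain, clamp) are illustrative placeholders.

```python
# Unlike a CPU time-slicing tasks, an FPGA runs every pipeline stage
# at once, each on a different datum. This models one clock tick in
# which three hardware stages all do work simultaneously.

def clock_tick(registers, sample):
    """Advance a 3-stage pipeline by one clock cycle."""
    s1, s2, _ = registers
    # In hardware these three stages compute in the SAME cycle:
    new = (
        sample + 1,                            # stage 1: offset correction
        None if s1 is None else s1 * 2,        # stage 2: gain
        None if s2 is None else min(255, s2),  # stage 3: clamp to 8 bits
    )
    return new, new[2]  # output of the final stage (None while filling)

regs, outputs = (None, None, None), []
for sample in [5, 10, 300]:
    regs, out = clock_tick(regs, sample)
    outputs.append(out)
# After the 2-cycle fill latency, one result emerges per tick.
```

The key property is throughput: once the pipeline is full, every stage stays busy every cycle, so a new result appears per clock regardless of how many stages the pipeline has.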

Vision-Specific Processors and Cores

Application-specific standard products (ASSPs) are specialized, highly integrated chips tailored for specific applications or application sets. ASSPs may incorporate a CPU, or use a separate CPU chip. By virtue of their specialization, ASSPs for vision processing typically deliver superior cost- and energy-efficiency compared with other types of processing solutions. Among other techniques, ASSPs deliver this efficiency through the use of specialized coprocessors and accelerators. And, because ASSPs are by definition focused on a specific application, they are usually provided with extensive associated software. This same specialization, however, means that an ASSP designed for vision is typically not suitable for other applications. ASSPs’ unique architectures can also make programming them more difficult than with other kinds of processors; some ASSPs are not user-programmable.

BrainChip Demonstration of Neuromorphic AI in a Compact Form Factor

Todd Vierra, Vice President of Customer Engagement at BrainChip, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Vierra demonstrates inference on the edge using visual wake word and Yolo models using the Akida Edge AI Box to detect and identify people. The Akida Edge AI


Axelera AI Demonstration of Fast and Efficient Workplace Safety with the Metis AIPU

Bram Verhoef, Co-founder of Axelera AI, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Verhoef demonstrates how his company’s Metis AIPU can accelerate computer vision applications. Axelera AI, together with its partner FogSphere, has developed a computer vision system that detects if people are wearing


Himax Demonstration of On-device AI with Innovative End-point Computer Vision

Alex Chang, Senior Product Manager for AIOT CIS Sensors and Processors at Himax, demonstrates the company’s latest edge AI and vision technologies and products in Arm’s booth at the 2024 Embedded Vision Summit. Specifically, Chang demonstrates his company’s WiseEye2 HX6538 deploying advanced CV models to endpoints. The WiseEye2 HX6538 integrates the Arm Cortex-M55 and Ethos-U55


Renesas Demonstration of the RZ/V2H MPU for High Performance, Low Power AI Vision Applications

Brian Witzen, Principal Business Development Manager at Renesas Electronics, demonstrates the company’s latest edge AI and vision technologies and products in Arm’s booth at the 2024 Embedded Vision Summit. Specifically, Witzen demonstrates his company’s new RZ/V2H AI MPU. The RZ/V2H features multiple Arm Cortex cores, such as a quad-core Cortex-A55, dual-core Cortex-R8 and single Cortex-M33,


High-end Packaging: Breaking Performance Barriers?

This market research report was originally published at the Yole Group’s website. It is reprinted here with the permission of the Yole Group. The current AI/HPC demand has brought high-end performance packaging into the spotlight. The high-end packaging market is projected to exceed US$28 billion by 2029, with a 37% CAGR from 2023 to 2029. TSMC


AMD to Acquire Silo AI to Expand Enterprise AI Solutions Globally

Europe’s largest private AI lab to accelerate the development and deployment of AMD-powered AI models and software solutions Enhances open-source AI software capabilities for efficient training and inference on AMD compute platforms SANTA CLARA, Calif. — July 10, 2024 — AMD (NASDAQ: AMD) today announced the signing of a definitive agreement to acquire Silo AI,


Arm Demonstration of the Ethos-U85 AI Accelerator

Stephen Su, Senior Segment Marketing Manager at Arm, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Su demonstrates the company’s latest Ethos-U85 AI accelerator, which scales from 128 to 2048 MAC units and delivers up to 4 TOPS of performance at a 1 GHz clock speed. The
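The quoted throughput figure can be sanity-checked: a MAC conventionally counts as two operations (one multiply, one add), so peak TOPS is MAC units x 2 x clock rate.

```python
# Checking the quoted peak throughput: one MAC = two operations
# (a multiply and an add), so peak TOPS = MACs * 2 * clock rate.
mac_units = 2048
clock_hz = 1e9  # 1 GHz
tops = mac_units * 2 * clock_hz / 1e12
# tops == 4.096, consistent with the "up to 4 TOPS" figure above.
```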


Microchip Technology Expands Processing Portfolio to Include Multi-core 64-bit Microprocessors

PIC64GX MPU is the first of several product lines planned for Microchip’s PIC64 portfolio CHANDLER, Ariz., July 9, 2024—Real-time, compute intensive applications such as smart embedded vision and Machine Learning (ML) are pushing the boundaries of embedded processing requirements, demanding more power-efficiency, hardware-level security and high reliability at the edge. With the launch of its


Decoding How the Generative AI Revolution BeGAN

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. NVIDIA Research’s GauGAN demo set the scene for a new wave of generative AI apps supercharging creative workflows. Editor’s note: This post is part of the AI Decoded series, which demystifies AI by making the technology more


Upcoming Webinar Explores the Role of GMSL2 Cameras in Autonomous Systems

On Thursday, July 11, 2024 at 10:00 am PT (1:00 pm ET), Alliance Member companies e-con Systems and Analog Devices will co-deliver the free webinar “The Role of GMSL2 Cameras in Autonomous Systems – How It Enables Functional Safety.” From the event page: Get expert insights on GMSL2 technology, its functional safety benefits, applications in


Broad Industry Recognition for Our Centrally Processed 4D Imaging Radar Architecture and Corporate Culture

This blog post was originally published at Ambarella’s website. It is reprinted here with the permission of Ambarella. Winning an award is always exciting, but winning eight is truly exhilarating! We’re honored that our groundbreaking architecture—which includes both Ambarella’s Oculii™ adaptive AI radar software and our CV3-AD family of highly efficient 5nm AI central domain


“How to Run Audio and Vision AI Algorithms at Ultra-low Power,” a Presentation from Synaptics

Deepak Mital, Senior Director of Architectures at Synaptics, presents the “How to Run Audio and Vision AI Algorithms at Ultra-low Power” tutorial at the May 2024 Embedded Vision Summit. Running AI algorithms on battery-powered, low-cost devices requires a different approach to designing hardware and software. The power requirements are stringent… “How to Run Audio and


“Meeting the Critical Needs of Accuracy, Performance and Adaptability in Embedded Neural Networks,” a Presentation from Quadric

Aman Sikka, Chief Architect at Quadric, presents the “Meeting the Critical Needs of Accuracy, Performance and Adaptability in Embedded Neural Networks” tutorial at the May 2024 Embedded Vision Summit. In this presentation, Sikka explores the challenges of accuracy and performance when implementing quantized machine learning inference algorithms on embedded systems.… “Meeting the Critical Needs of


Generate Traffic Insights Using YOLOv8 and NVIDIA JetPack 6.0

This article was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Intelligent Transportation Systems (ITS) applications are becoming increasingly valuable and prevalent in modern urban environments. The benefits of using ITS applications include: Increasing traffic efficiency: By analyzing real-time traffic data, ITS can optimize traffic flow, reducing congestion and

