Processors

Processors for Embedded Vision


This technology category includes any device that executes vision algorithms or vision system control software. The following diagram shows a typical computer vision pipeline; processors are often optimized for the compute-intensive portions of the software workload.

[Diagram: typical embedded vision processing pipeline]
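
As a rough, hypothetical illustration of those stages, the short Python/OpenCV sketch below runs a camera feed through acquisition, pixel-level preprocessing, a compute-intensive detection step (a stock OpenCV face cascade), and simple decision and annotation logic. The camera index and the choice of classifier are assumptions made for the example, not part of the diagram.

import cv2

def run_pipeline(source=0):
    # Image acquisition: camera index 0 is an assumption; a video file path also works.
    cap = cv2.VideoCapture(source)
    # Stock OpenCV detector used as a stand-in for the compute-intensive stage.
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    while True:
        ok, frame = cap.read()                               # capture
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)       # pixel-level preprocessing
        gray = cv2.equalizeHist(gray)
        objects = detector.detectMultiScale(gray, 1.1, 4)    # detection (compute-intensive)
        for (x, y, w, h) in objects:                         # decision / control logic
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("pipeline", frame)
        if cv2.waitKey(1) == 27:                             # press Esc to exit
            break
    cap.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    run_pipeline()

In an embedded design, the detection step in the middle of this loop is the portion most likely to be mapped onto one of the specialized processors described below.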

The following examples represent distinctly different types of processor architectures for embedded vision, and each has advantages and trade-offs that depend on the workload. For this reason, many devices combine multiple processor types into a heterogeneous computing environment, often integrated into a single semiconductor component. In addition, a processor can be accelerated by dedicated hardware that improves performance on computer vision algorithms.

General-purpose CPUs

While computer vision algorithms can run on most general-purpose CPUs, desktop processors may not meet the design constraints of some embedded systems. However, x86 processors and system boards can leverage the PC infrastructure for low-cost hardware and broadly supported software development tools. Several Alliance Member companies also offer devices that integrate a RISC CPU core. A general-purpose CPU is best suited for heuristics, complex decision-making, network access, user interfaces, storage management, and overall control; for better performance on pixel-level processing, it may be paired with a vision-specialized device.
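
A minimal sketch of that division of labor, assuming a simple motion-detection task in Python/OpenCV: the pixel-level routine is isolated so it could be swapped for a call into an optimized library or a vision-specialized device, while the general-purpose CPU keeps the thresholds, decisions and overall control. The frame source and the alarm threshold are placeholders.

import cv2

def count_changed_pixels(prev_gray, gray):
    # Pixel-level work: in a real system this is the part most likely to be
    # delegated to a GPU, DSP, FPGA or vision ASSP rather than run on the CPU.
    diff = cv2.absdiff(prev_gray, gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    return cv2.countNonZero(mask)

def control_loop(frames, alarm_pixels=5000):
    # Heuristics, decision-making and overall control stay on the general-purpose CPU.
    prev = None
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None and count_changed_pixels(prev, gray) > alarm_pixels:
            print("motion detected: trigger storage, network or UI logic here")
        prev = gray

The frames iterable could be fed from cv2.VideoCapture, as in the pipeline sketch above.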

Graphics Processing Units

High-performance GPUs deliver massive parallel computing potential, and graphics processors can be used to accelerate the portions of the computer vision pipeline that perform parallel processing on pixel data. While general-purpose GPU (GPGPU) computing has primarily been used for high-performance computing (HPC), even mobile graphics processors and integrated graphics cores are gaining GPGPU capability, bringing this approach within the power constraints of a wider range of vision applications. In designs that require 3D processing in addition to embedded vision, a GPU will already be part of the system and can be used to assist a general-purpose CPU with many computer vision algorithms. Many examples exist of x86-based embedded systems with discrete GPGPUs.
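
One low-effort way to experiment with this on hardware that has an OpenCL-capable GPU is OpenCV's transparent API: wrapping images in cv2.UMat lets supported operations run on the GPU when OpenCL is available and fall back to the CPU otherwise. A minimal sketch, using a synthetic 1080p frame rather than real camera data:

import cv2
import numpy as np

cv2.ocl.setUseOpenCL(True)
print("OpenCL available:", cv2.ocl.haveOpenCL())

# Synthetic 1080p frame standing in for camera input.
frame = (np.random.rand(1080, 1920, 3) * 255).astype(np.uint8)

u_frame = cv2.UMat(frame)                        # upload; later operations may run on the GPU
u_gray = cv2.cvtColor(u_frame, cv2.COLOR_BGR2GRAY)
u_blur = cv2.GaussianBlur(u_gray, (7, 7), 1.5)
u_edges = cv2.Canny(u_blur, 50, 150)

edges = u_edges.get()                            # download the result to host memory
print("edge pixels:", int(cv2.countNonZero(edges)))

The same code runs unmodified on the CPU when no OpenCL device is present, which makes it a convenient way to compare throughput with and without GPU offload.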

Digital Signal Processors

DSPs are very efficient at processing streaming data, since their bus and memory architectures are optimized to handle high-speed data as it traverses the system. This architecture makes DSPs an excellent fit for processing image pixel data as it streams from a sensor. Many DSPs for vision have been enhanced with coprocessors optimized for handling video inputs and accelerating computer vision algorithms. The specialized nature of DSPs makes them inefficient at general-purpose software workloads, however, so DSPs are usually paired with a RISC processor to create a heterogeneous computing environment that offers the best of both worlds.
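
The NumPy sketch below only mimics that streaming style in software: pixel rows are consumed as they "arrive" and filtered with an integer, fixed-point-like 3-tap kernel, echoing the block-at-a-time inner loops of DSP firmware. On a real DSP this would be hand-optimized C or intrinsics fed by DMA, not Python.

import numpy as np

def stream_rows(frame):
    # Stand-in for a sensor interface that delivers one row at a time (e.g. via DMA).
    for row in frame:
        yield row.astype(np.int32)

def process_stream(frame):
    # Integer 3-tap smoothing filter [1, 2, 1] / 4 applied per row as data streams in.
    out_rows = []
    for row in stream_rows(frame):
        filtered = (np.convolve(row, [1, 2, 1], mode="same") >> 2).astype(np.uint8)
        out_rows.append(filtered)
    return np.vstack(out_rows)

frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)   # synthetic frame
smoothed = process_stream(frame)
print(smoothed.shape)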

Field Programmable Gate Arrays (FPGAs)

Instead of incurring the high cost and long lead times of a custom ASIC to accelerate computer vision systems, designers can use an FPGA as a reprogrammable hardware-acceleration solution. With millions of programmable gates, hundreds of I/O pins, and compute performance in the trillions of multiply-accumulates per second (tera-MACs), high-end FPGAs offer the potential for the highest performance in a vision system. Unlike a CPU, which must time-slice or multi-thread tasks as they compete for compute resources, an FPGA can accelerate multiple portions of a computer vision pipeline simultaneously, as the sketch below illustrates. Because this parallelism is such an advantage for accelerating computer vision, many common vision algorithms are available as optimized libraries from semiconductor vendors. These libraries also include preconfigured interface blocks for connecting to other vision devices, such as IP cameras.
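
The thread-and-queue sketch below is only a software analogy for that pipeline parallelism: each stage runs concurrently and passes frames downstream, much as dedicated logic blocks on an FPGA process different frames at the same time. The stage functions are placeholders, not vendor library code.

import queue
import threading
import numpy as np

def stage(name, fn, q_in, q_out):
    # Each stage models a dedicated logic block running concurrently with the others.
    while True:
        item = q_in.get()
        if item is None:                      # shutdown token propagates downstream
            if q_out is not None:
                q_out.put(None)
            break
        result = fn(item)
        if q_out is not None:
            q_out.put(result)
        else:
            print(name, "produced", result)

q1, q2 = queue.Queue(maxsize=2), queue.Queue(maxsize=2)
threads = [
    threading.Thread(target=stage, args=("filter", lambda f: f // 2, q1, q2)),
    threading.Thread(target=stage, args=("measure", lambda f: int(f.mean()), q2, None)),
]
for t in threads:
    t.start()
for _ in range(4):                            # feed four synthetic frames into the pipeline
    q1.put(np.random.randint(0, 256, (480, 640), dtype=np.uint8))
q1.put(None)
for t in threads:
    t.join()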

Vision-Specific Processors and Cores

Application-specific standard products (ASSPs) are specialized, highly integrated chips tailored for specific applications or application sets. ASSPs may incorporate a CPU, or use a separate CPU chip. By virtue of their specialization, ASSPs for vision processing typically deliver superior cost- and energy-efficiency compared with other types of processing solutions. Among other techniques, ASSPs deliver this efficiency through the use of specialized coprocessors and accelerators. And because ASSPs are, by definition, focused on a specific application, they are usually provided with extensive associated software. This same specialization, however, means that an ASSP designed for vision is typically not suitable for other applications. ASSPs’ unique architectures can also make programming them more difficult than programming other kinds of processors, and some ASSPs are not user-programmable.

“Addressing Tomorrow’s Sensor Fusion and Processing Needs with Cadence’s Newest Processors,” a Presentation from Cadence

Amol Borkar, Product Marketing Director at Cadence, presents the “Addressing Tomorrow’s Sensor Fusion and Processing Needs with Cadence’s Newest Processors” tutorial at the May 2024 Embedded Vision Summit. From ADAS to autonomous vehicles to smartphones, the number and variety of sensors used in edge devices is increasing: radar, LiDAR, time-of-flight…

“Temporal Event Neural Networks: A More Efficient Alternative to the Transformer,” a Presentation from BrainChip

Chris Jones, Director of Product Management at BrainChip, presents the “Temporal Event Neural Networks: A More Efficient Alternative to the Transformer” tutorial at the May 2024 Embedded Vision Summit. The expansion of AI services necessitates enhanced computational capabilities on edge devices. Temporal Event Neural Networks (TENNs), developed by BrainChip, represent…

How Edge Devices Can Help Mitigate the Global Environmental Cost of Generative AI

This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. Exploring the role of edge devices in reducing energy consumption and promoting sustainability in AI systems The economic value of generative artificial intelligence (AI) to the world is immense. Research from McKinsey estimates that generative AI could add the

“Silicon Slip-ups: The Ten Most Common Errors Processor Suppliers Make (Number Four Will Amaze You!),” a Presentation from BDTI

Phil Lapsley, Co-founder and Vice President of BDTI, presents the “Silicon Slip-ups: The Ten Most Common Errors Processor Suppliers Make (Number Four Will Amaze You!)” tutorial at the May 2024 Embedded Vision Summit. For over 30 years, BDTI has provided engineering, evaluation and advisory services to processor suppliers and companies…

“How Axelera AI Uses Digital Compute-in-memory to Deliver Fast and Energy-efficient Computer Vision,” a Presentation from Axelera AI

Bram Verhoef, Head of Machine Learning at Axelera AI, presents the “How Axelera AI Uses Digital Compute-in-memory to Deliver Fast and Energy-efficient Computer Vision” tutorial at the May 2024 Embedded Vision Summit. As artificial intelligence inference transitions from cloud environments to edge locations, computer vision applications achieve heightened responsiveness, reliability…

Leapmotor and Ambarella Announce Strategic Cooperation Agreement for Powerful Advanced Intelligent Driving Development

HANGZHOU, China and SANTA CLARA, Calif., June 11, 2024 — Leapmotor (HKEX: 09863), a technology-driven intelligent electric vehicle company with a full suite of R&D capabilities, and Ambarella, Inc. (NASDAQ: AMBA), an edge AI semiconductor company, recently signed a strategic cooperation agreement. The two companies will focus on creating a first-class intelligent driving experience for

Power Cloud-native Microservices at the Edge with NVIDIA JetPack 6.0, Now GA

This article was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. NVIDIA JetPack SDK powers NVIDIA Jetson modules, offering a comprehensive solution for building end-to-end accelerated AI applications. JetPack 6 expands the Jetson platform’s flexibility and scalability with microservices and a host of new features. It’s the most downloaded

“How Arm’s Machine Learning Solution Enables Vision Transformers at the Edge,” a Presentation from Arm

Stephen Su, Senior Segment Marketing Manager at Arm, presents the “How Arm’s Machine Learning Solution Enables Vision Transformers at the Edge” tutorial at the May 2024 Embedded Vision Summit. AI at the edge has been transforming over the last few years, with newer use cases running more efficiently and securely…

Nota AI and Advantech Sign Strategic MOU to Pioneer On-Device GenAI Market

Nota AI and Advantech sign MOU for edge AI collaboration. Partnership focuses on generative AI at the edge. Joint marketing and sales activities planned to expand market share. SEOUL, South Korea, June 7, 2024 /PRNewswire/ — AI model optimization technology company Nota AI® (Nota Inc.) has signed a strategic Memorandum of Understanding (MOU) with global industrial AIoT

BrainChip Introduces TENNs-PLEIADES in New White Paper

Laguna Hills, Calif. – June 5, 2024 – BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of ultra-low power, fully digital, event-based, neuromorphic AI, today released a white paper detailing the company’s TENNs-PLEIADES (PoLynomial Expansion In Adaptive Distributed Event-based Systems), a method of parameterization of temporal kernels that reduces

A Guide to AI TOPS and NPU Performance Metrics

This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. In today’s swiftly evolving technological landscape, where artificial intelligence (AI) is reshaping industries and driving innovation, understanding the intricacies of AI performance metrics is paramount.  Previously, many of the AI models were required to run in the

“OpenCV for High-performance, Low-power Vision Applications on Snapdragon,” a Presentation from Qualcomm

Xin Zhong, Computer Vision Product Manager at Qualcomm Technologies, presents the “OpenCV for High-performance, Low-power Vision Applications on Snapdragon” tutorial at the May 2024 Embedded Vision Summit. For decades, the OpenCV software library has been popular for developing computer vision applications. However, developers have found it challenging to create efficient…

“Deploying Large Models on the Edge: Success Stories and Challenges,” a Presentation from Qualcomm

Vinesh Sukumar, Senior Director of Product Management at Qualcomm Technologies, presents the “Deploying Large Models on the Edge: Success Stories and Challenges” tutorial at the May 2024 Embedded Vision Summit. In this talk, Dr. Sukumar explains and demonstrates how Qualcomm has been successful in deploying large generative AI and multimodal…

Intel AI Platforms Accelerate Microsoft Phi-3 GenAI Models

Intel, in collaboration with Microsoft, enables support for several Phi-3 models across its data center platforms, AI PCs and edge solutions. What’s New: Intel has validated and optimized its AI product portfolio across client, edge and data center for several of Microsoft’s Phi-3 family of open models. The Phi-3 family of small, open models can run on

AMD Unveils Next-gen “Zen 5” Ryzen Processors to Power Advanced AI Experiences

AMD Ryzen™ AI 300 Series Processors Unlock Transformational AI Experiences for Windows Copilot+ PCs AMD Ryzen™ 9000 Series Processors Set New Standards in Efficiency, Performance, and Content Creation TAIPEI, Taiwan, June 02, 2024 (GLOBE NEWSWIRE) — Today, during Computex 2024, AMD (NASDAQ: AMD) announced a groundbreaking series of next-generation architecture and products aimed at ushering

AMD Extends AI and High-Performance Leadership with New AMD Instinct, Ryzen and EPYC Processors at Computex 2024

Expanded AMD Instinct accelerator roadmap brings annual cadence of leadership AI accelerators; next generation AMD EPYC processors to extend data center CPU leadership New AMD Ryzen AI 300 Series laptop and AMD Ryzen 9000 Series desktop processors deliver leading performance for Copilot+ PCs, gaming, content creation and productivity TAIPEI, Taiwan, June 02, 2024 (GLOBE NEWSWIRE)
