Processors for Embedded Vision

This technology category includes any device that executes vision algorithms or vision system control software. The following diagram shows a typical computer vision pipeline; processors are often optimized for the compute-intensive portions of the software workload.
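The division of labor in such a pipeline can be sketched in a few lines of Python. This is a toy illustration, not any vendor's actual pipeline: the stage names and thresholds are invented, and NumPy stands in for whatever processor runs each stage.

```python
import numpy as np

def preprocess(frame):
    # Pixel-level, compute-intensive stage: a simple 3x3 box blur.
    k = np.ones((3, 3)) / 9.0
    padded = np.pad(frame, 1, mode="edge")
    return sum(padded[i:i + frame.shape[0], j:j + frame.shape[1]] * k[i, j]
               for i in range(3) for j in range(3))

def detect_features(frame):
    # Toy "feature" extraction: flag pixels brighter than the frame mean.
    return frame > frame.mean()

def decide(features):
    # Heuristic decision logic, the kind of code typically left to a CPU.
    return "object present" if features.mean() > 0.5 else "no object"

frame = np.random.rand(64, 64)        # stand-in for a camera frame
result = decide(detect_features(preprocess(frame)))
```

In a real system, the early pixel-level stages are the ones most often moved onto the specialized processors described below, while the final decision stage stays on a general-purpose CPU.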

[Figure: typical embedded vision pipeline]

The following examples represent distinctly different types of processor architectures for embedded vision, and each has advantages and trade-offs that depend on the workload. For this reason, many devices combine multiple processor types into a heterogeneous computing environment, often integrated into a single semiconductor component. In addition, a processor can be accelerated by dedicated hardware that improves performance on computer vision algorithms.

General-purpose CPUs

While computer vision algorithms can run on most general-purpose CPUs, desktop processors may not meet the design constraints of some systems. However, x86 processors and system boards can leverage the PC infrastructure for low-cost hardware and broadly supported software development tools. Several Alliance Member companies also offer devices that integrate a RISC CPU core. A general-purpose CPU is best suited for heuristics, complex decision-making, network access, user interfaces, storage management, and overall control. A general-purpose CPU may be paired with a vision-specialized device for better performance on pixel-level processing.
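This CPU-plus-accelerator pairing can be sketched as follows. The example is a minimal illustration, with invented function names and thresholds; NumPy's vectorized kernel stands in for whatever accelerator handles the pixel-level work.

```python
import numpy as np

def threshold_accelerated(frame, level):
    # Pixel-level work: in a real design this might run on a GPU, DSP, or
    # FPGA; here a vectorized NumPy kernel stands in for the accelerator.
    return (frame > level).astype(np.uint8)

def control_loop(frames):
    # Heuristics, decision-making, and overall control stay on the CPU.
    alerts = 0
    for frame in frames:
        mask = threshold_accelerated(frame, 0.8)
        if mask.mean() > 0.01:        # simple heuristic: enough bright pixels?
            alerts += 1
    return alerts

frames = [np.random.rand(32, 32) for _ in range(10)]
alerts = control_loop(frames)
```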

Graphics Processing Units

High-performance GPUs deliver massive parallel computing throughput, and graphics processors can be used to accelerate the portions of the computer vision pipeline that perform parallel processing on pixel data. While general-purpose GPUs (GPGPUs) have primarily been used for high-performance computing (HPC), even mobile graphics processors and integrated graphics cores are gaining GPGPU capability, meeting the power constraints for a wider range of vision applications. In designs that require 3D processing in addition to embedded vision, a GPU will already be part of the system and can be used to assist a general-purpose CPU with many computer vision algorithms. Many examples exist of x86-based embedded systems with discrete GPGPUs.
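The kind of workload that maps well onto a GPU is one where the same arithmetic is applied to every pixel independently. RGB-to-grayscale conversion is a classic example; the sketch below expresses it as one fused array operation (NumPy on the CPU here, but the same data-parallel pattern is what a GPGPU kernel would exploit):

```python
import numpy as np

def rgb_to_gray(image):
    # The same weighted sum is applied to every pixel independently --
    # exactly the data-parallel pattern GPUs are built for.
    weights = np.array([0.299, 0.587, 0.114])   # ITU-R BT.601 luma weights
    return image @ weights                      # one operation over all pixels

image = np.random.rand(480, 640, 3)             # stand-in for a camera frame
gray = rgb_to_gray(image)
```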

Digital Signal Processors

DSPs are very efficient for processing streaming data, since the bus and memory architecture are optimized to process high-speed data as it traverses the system. This architecture makes DSPs an excellent solution for processing image pixel data as it streams from a sensor source. Many DSPs for vision have been enhanced with coprocessors that are optimized for processing video inputs and accelerating computer vision algorithms. The specialized nature of DSPs makes these devices inefficient for processing general-purpose software workloads, so DSPs are usually paired with a RISC processor to create a heterogeneous computing environment that offers the best of both worlds.
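The streaming access pattern that DSP architectures optimize for can be modeled with a generator that consumes samples as they arrive and keeps only a small window in memory. This is a conceptual sketch, not DSP code; the moving-average filter and window width are arbitrary choices for illustration.

```python
from collections import deque

def moving_average(samples, width=4):
    # Stream-oriented processing: consume each sample as it arrives and
    # retain only a small window -- no full-frame buffering required.
    window = deque(maxlen=width)
    for s in samples:
        window.append(s)
        if len(window) == width:
            yield sum(window) / width

stream = iter([10, 20, 30, 40, 50, 60])      # stand-in for sensor samples
filtered = list(moving_average(stream))       # -> [25.0, 35.0, 45.0]
```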

Field Programmable Gate Arrays (FPGAs)

Instead of incurring the high cost and long lead times of a custom ASIC to accelerate computer vision systems, designers can use an FPGA as a reprogrammable solution for hardware acceleration. With millions of programmable gates, hundreds of I/O pins, and compute performance in the trillions of multiply-accumulates per second (tera-MACs), high-end FPGAs offer the potential for the highest performance in a vision system. Unlike a CPU, which has to time-slice or multi-thread tasks as they compete for compute resources, an FPGA can simultaneously accelerate multiple portions of a computer vision pipeline. Since the parallel nature of FPGAs offers so much advantage for accelerating computer vision, many of the algorithms are available as optimized libraries from semiconductor vendors. These computer vision libraries also include preconfigured interface blocks for connecting to other vision devices, such as IP cameras.
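This simultaneous, stage-per-frame parallelism is what hardware pipelining means in practice: on every clock cycle, each stage operates on a different frame. The sketch below is a toy software model of that behavior (the three stages and their arithmetic are invented placeholders, not real vision operations):

```python
# Toy software model of hardware pipelining: each "clock cycle" shifts data
# through pipeline registers, so every stage works on a different frame.
def stage_a(x): return x + 1      # placeholder for e.g. debayering
def stage_b(x): return x * 2      # placeholder for e.g. filtering
def stage_c(x): return x - 3      # placeholder for e.g. thresholding

def pipelined(frames):
    stages = [stage_a, stage_b, stage_c]
    regs = [None] * len(stages)   # one pipeline register per stage
    out = []
    for cycle in range(len(frames) + len(stages)):
        if regs[-1] is not None:              # retire the finished frame
            out.append(regs[-1])
        for i in range(len(stages) - 1, 0, -1):   # shift data downstream
            regs[i] = stages[i](regs[i - 1]) if regs[i - 1] is not None else None
        nxt = frames[cycle] if cycle < len(frames) else None
        regs[0] = stages[0](nxt) if nxt is not None else None
    return out

results = pipelined([0, 1, 2])    # each frame passes through +1, *2, -3
```

After the pipeline fills, one finished frame emerges per cycle even though each frame needs three stages of work, which is where the throughput advantage comes from.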

Vision-Specific Processors and Cores

Application-specific standard products (ASSPs) are specialized, highly integrated chips tailored for specific applications or application sets. ASSPs may incorporate a CPU, or use a separate CPU chip. By virtue of their specialization, ASSPs for vision processing typically deliver superior cost- and energy-efficiency compared with other types of processing solutions. Among other techniques, ASSPs deliver this efficiency through the use of specialized coprocessors and accelerators. And, because ASSPs are by definition focused on a specific application, they are usually provided with extensive associated software. This same specialization, however, means that an ASSP designed for vision is typically not suitable for other applications. ASSPs’ unique architectures can also make programming them more difficult than with other kinds of processors; some ASSPs are not user-programmable.

Arm CSS for Client: The Compute Platform for AI-powered Consumer Experiences

This blog post was originally published at Arm’s website. It is reprinted here with the permission of Arm. New Arm Compute Subsystems for Client deliver a step-change in performance, efficiency, and scalability, with production-ready physical implementations on the 3nm process. AI is transforming consumer devices, and revolutionizing productivity, creativity and entertainment-based experiences. This is leading


“What’s Next in On-device Generative AI,” a Presentation from Qualcomm

Jilei Hou, Vice President of Engineering and Head of AI Research at Qualcomm Technologies, presents the “What’s Next in On-device Generative AI” tutorial at the May 2024 Embedded Vision Summit. The generative AI era has begun! Large multimodal models are bringing the power of language understanding to machine perception, and transformer models are expanding to


eYs3D Microelectronics Unveils the Multi-camera Sensing System, SenseLink, Providing a Versatile Machine Vision Sensing Solution for Smart Applications

May 30, 2024 – eYs3D Microelectronics, with years of expertise in the fields of 3D sensing and computer vision, is a subsidiary of Etron Tech (TPEx: 5351). It has recently launched SenseLink, a multi-camera sensing system chip designed to enhance visual AI sensing capabilities. This technology utilizes advanced sensor fusion techniques to integrate data from


Technologies Driving Enhanced On-device Generative AI Experiences: LoRA

This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. Utilize low-rank adaptation (LoRA) to provide customized experiences across use cases Enhancing contextualization and customization has always been a driving force in the realm of user experience. While generative artificial intelligence (AI) has already demonstrated its transformative


NVIDIA DeepStream 7.0 Milestone Release for Next-gen Vision AI Development

This article was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. NVIDIA DeepStream is a powerful SDK that unlocks GPU-accelerated building blocks to build end-to-end vision AI pipelines. With more than 40 plugins available off-the-shelf, you can deploy fully optimized pipelines with cutting-edge AI inference, object tracking, and seamless


Technologies Driving Enhanced On-device Generative AI Experiences: Multimodal Generative AI

This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. Leverage additional modalities in generative AI models to enable necessary advancements for contextualization and customization across use cases A constant desire in user experience is improved contextualization and customization. For example, consumers want devices to automatically use


Qualcomm AI Hub Expands to On-device AI Apps for Snapdragon-powered PCs

Highlights: Qualcomm AI Hub expands to support Snapdragon X Series Platforms, empowering developers to easily take advantage of the best-in-class CPU and the world’s fastest NPU for laptops, and create responsive and power-efficient on-device generative AI applications for next-gen Windows PCs. Developers can now optimize their own models using the Qualcomm AI Hub—adding flexibility and


Snapdragon X Series is the Exclusive Platform to Power the Next Generation of Windows PCs with Copilot+ Today

Highlights: Snapdragon X Elite and Snapdragon X Plus are powering the launch of a new category of devices delivering Microsoft Copilot+ PC experiences. PCs with Snapdragon X Elite and X Plus deliver multiple days of battery life and unparalleled performance plus efficiency to accelerate productivity and creativity with unique AI experiences powered by the world’s fastest


SiMa.ai, Lanner, and AWL Collaborate to Accelerate Smart Retail at the Edge

Edge AI Platform Combines Hardware with Machine Learning Software and Video Analytics May 22, 2024 09:00 AM Eastern Daylight Time – SAN JOSE, Calif.–(BUSINESS WIRE)–SiMa.ai, the software-centric, embedded edge machine learning system-on-chip company, today announced a collaboration with Lanner, a leading provider of industrial computing appliances, and AWL, an AI software company specializing in


Lattice Introduces Advanced 3D Sensor Fusion Reference Design for Autonomous Applications

HILLSBORO, Ore. – May 22, 2024 – Lattice Semiconductor (NASDAQ: LSCC), the low power programmable leader, today announced a new 3D sensor fusion reference design to accelerate advanced autonomous application development. Combining a low power, low latency, deterministic Lattice Avant™-E FPGA with Lumotive’s Light Control Metasurface (LCM™) programmable optical beamforming technology, the reference design enables


Fire It Up: Mozilla Firefox Adds Support for AI-powered NVIDIA RTX Video

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. The popular open-source browser is the latest to incorporate AI upscaling and high-dynamic range for NVIDIA RTX GPUs. Editor’s note: This post is part of the AI Decoded series, which demystifies AI by making the technology more


Ambarella’s Next-Gen AI SoCs for Fleet Dash Cams and Vehicle Gateways Enable Vision Language Models and Transformer Networks Without Fan Cooling

Two New 5nm SoCs Provide Industry-Leading AI Performance Per Watt, Uniquely Allowing Small Form Factor, Single Boxes With Vision Transformers and VLM Visual Analysis SANTA CLARA, Calif., May 21, 2024 — Ambarella, Inc. (NASDAQ: AMBA), an edge AI semiconductor company, today announced during AutoSens USA, the latest generation of its AI systems-on-chip (SoCs) for in-vehicle


Neuromorphic Computing, Memory and Sensing: Towards Exponential Growth

This market research report was originally published at the Yole Group’s website. It is reprinted here with the permission of the Yole Group. The neuromorphic market is poised for expansion from smartphones to encompass opportunities in data centers, entertainment, and automotive sectors. The combined neuromorphic sensing and computing markets are anticipated to generate from US$28

