Processors for Embedded Vision
This technology category includes any device that executes vision algorithms or vision system control software. The following diagram shows a typical computer vision pipeline; processors are often optimized for the compute-intensive portions of the software workload.
The following examples represent distinctly different types of processor architectures for embedded vision, and each has advantages and trade-offs that depend on the workload. For this reason, many devices combine multiple processor types into a heterogeneous computing environment, often integrated into a single semiconductor component. In addition, a processor can be accelerated by dedicated hardware that improves performance on computer vision algorithms.
General-purpose CPUs
While computer vision algorithms can run on most general-purpose CPUs, desktop processors may not meet the design constraints of some systems. However, x86 processors and system boards can leverage the PC infrastructure for low-cost hardware and broadly supported software development tools. Several Alliance Member companies also offer devices that integrate a RISC CPU core. A general-purpose CPU is best suited for heuristics, complex decision-making, network access, user interfaces, storage management, and overall control. A general-purpose CPU may be paired with a vision-specialized device for better performance on pixel-level processing.
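This division of labor can be sketched in a few lines of Python. The example below is purely illustrative (the functions, thresholds, and frame sizes are invented for this sketch, not taken from any vendor SDK): the pixel-level stage stands in for work that would be offloaded to a vision-specialized device, while the decision logic is the kind of heuristic control a general-purpose CPU handles well.

```python
import numpy as np

def pixel_stage(frame):
    """Pixel-level processing -- the part typically offloaded to a
    vision-specialized device. Here: an 8-bit brightness histogram."""
    return np.bincount(frame.ravel(), minlength=256)

def control_stage(histogram, dark_threshold=0.5):
    """Heuristics and decision-making -- a natural fit for the CPU.
    Flags a frame as underexposed if most pixels are dark."""
    dark_fraction = histogram[:64].sum() / histogram.sum()
    return "underexposed" if dark_fraction > dark_threshold else "ok"

# A synthetic 8-bit grayscale frame, uniformly dark.
frame = np.full((480, 640), 20, dtype=np.uint8)
print(control_stage(pixel_stage(frame)))  # prints "underexposed"
```

In a real system the histogram might come back from a DSP, GPU, or dedicated accelerator; the CPU's job is the branchy, stateful logic that specialized hardware handles poorly.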
Graphics Processing Units
High-performance GPUs deliver massive parallel computing throughput, and graphics processors can be used to accelerate the portions of the computer vision pipeline that perform parallel processing on pixel data. While general-purpose GPUs (GPGPUs) have primarily been used for high-performance computing (HPC), even mobile graphics processors and integrated graphics cores are gaining GPGPU capability, meeting the power constraints of a wider range of vision applications. In designs that require 3D processing in addition to embedded vision, a GPU will already be part of the system and can assist a general-purpose CPU with many computer vision algorithms. Many examples exist of x86-based embedded systems with discrete GPGPUs.
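The operations that map well to a GPU are the data-parallel ones, where every output pixel depends only on its own input pixel. As a minimal sketch (using NumPy on the CPU to stand in for what a GPU kernel would do), consider RGB-to-grayscale conversion: on a GPGPU, each output pixel could be computed by its own thread.

```python
import numpy as np

def rgb_to_gray(image):
    """RGB-to-luma conversion: each output pixel depends only on its own
    input pixel, so on a GPU every pixel can map to one thread.
    Weights are the standard ITU-R BT.601 luma coefficients."""
    weights = np.array([0.299, 0.587, 0.114])
    return (image @ weights).astype(np.uint8)

# Synthetic 4x4 RGB image: pure red everywhere.
image = np.zeros((4, 4, 3), dtype=np.uint8)
image[..., 0] = 255
gray = rgb_to_gray(image)
print(gray[0, 0])  # 255 * 0.299 -> prints 76
```

The same per-pixel independence is what lets a GPGPU framework such as CUDA or OpenCL launch thousands of threads over an image with no synchronization between them.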
Digital Signal Processors
DSPs are very efficient for processing streaming data, since the bus and memory architecture are optimized to process high-speed data as it traverses the system. This architecture makes DSPs an excellent solution for processing image pixel data as it streams from a sensor source. Many DSPs for vision have been enhanced with coprocessors that are optimized for processing video inputs and accelerating computer vision algorithms. The specialized nature of DSPs makes these devices inefficient for processing general-purpose software workloads, so DSPs are usually paired with a RISC processor to create a heterogeneous computing environment that offers the best of both worlds.
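The primitive a DSP datapath is built around is the multiply-accumulate (MAC). A finite impulse response (FIR) filter over a sample stream, shown below as a plain-Python sketch, is the canonical workload: one MAC per filter tap per incoming sample, with data consumed as it arrives rather than stored in bulk.

```python
from collections import deque

def fir_stream(samples, taps):
    """FIR filtering as a stream of multiply-accumulate (MAC)
    operations -- the inner loop a DSP's datapath is optimized for."""
    # Fixed-length history of recent samples, newest first.
    history = deque([0.0] * len(taps), maxlen=len(taps))
    for x in samples:
        history.appendleft(x)
        acc = 0.0
        for coeff, sample in zip(taps, history):
            acc += coeff * sample  # one MAC per tap
        yield acc

# A 3-tap moving-average filter applied to a step input.
taps = [1/3, 1/3, 1/3]
print(list(fir_stream([0, 0, 3, 3, 3, 3], taps)))
# prints [0.0, 0.0, 1.0, 2.0, 3.0, 3.0]
```

A vision DSP runs the two-dimensional analogue of this loop (convolutions over pixel neighborhoods) in hardware, often computing many MACs per clock cycle as pixels stream in from the sensor.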
Field Programmable Gate Arrays (FPGAs)
Instead of incurring the high cost and long lead times of a custom ASIC to accelerate computer vision systems, designers can use an FPGA as a reprogrammable solution for hardware acceleration. With millions of programmable gates, hundreds of I/O pins, and compute performance in the trillions of multiply-accumulates per second (tera-MACs), high-end FPGAs offer some of the highest performance available in a vision system. Unlike a CPU, which has to time-slice or multi-thread tasks as they compete for compute resources, an FPGA can simultaneously accelerate multiple portions of a computer vision pipeline. Because the parallel nature of FPGAs is such an advantage for accelerating computer vision, many algorithms are available as optimized libraries from semiconductor vendors. These computer vision libraries also include preconfigured interface blocks for connecting to other vision devices, such as IP cameras.
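The pipeline parallelism described above can be modeled, very loosely, as a chain of generators: each stage is a separate processing block, and frames flow through the chain one hop at a time. The stage names below are invented for illustration; on an FPGA these stages would be independent circuits that all operate every clock cycle, each on a different frame, rather than time-slicing one processor.

```python
def capture(n_frames):
    """Stage 1: frames enter the pipeline from the sensor."""
    for i in range(n_frames):
        yield {"frame": i, "stages": ["capture"]}

def filter_stage(frames):
    """Stage 2: e.g. noise reduction, as its own hardware block."""
    for f in frames:
        f["stages"].append("filter")
        yield f

def detect_stage(frames):
    """Stage 3: e.g. feature detection, another independent block."""
    for f in frames:
        f["stages"].append("detect")
        yield f

# Compose the stages into one dataflow pipeline and drain it.
for result in detect_stage(filter_stage(capture(3))):
    print(result["frame"], result["stages"])
```

The sketch captures only the dataflow, not the timing: the key FPGA advantage is that while frame N is in detection, frame N+1 is already being filtered and frame N+2 captured, so throughput is set by the slowest stage rather than the sum of all stages.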
Vision-Specific Processors and Cores
Application-specific standard products (ASSPs) are specialized, highly integrated chips tailored for specific applications or application sets. ASSPs may incorporate a CPU, or use a separate CPU chip. By virtue of their specialization, ASSPs for vision processing typically deliver superior cost- and energy-efficiency compared with other types of processing solutions. Among other techniques, ASSPs deliver this efficiency through the use of specialized coprocessors and accelerators. And, because ASSPs are by definition focused on a specific application, they are usually provided with extensive associated software. This same specialization, however, means that an ASSP designed for vision is typically not suitable for other applications. ASSPs’ unique architectures can also make them more difficult to program than other kinds of processors; some ASSPs are not user-programmable at all.
Qualcomm CEO Cristiano Amon at Web Summit: GenAI is the New UI
This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. How generative AI (GenAI)-powered “agents” will change the way you interact with the digital world The rise of artificial intelligence (AI) opens the door to a vast array of possibilities. AI-powered agents will be the key to
An Easy Introduction to Multimodal Retrieval-augmented Generation for Video and Audio
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Building a multimodal retrieval augmented generation (RAG) system is challenging. The difficulty comes from capturing and indexing information from across multiple modalities, including text, images, tables, audio, video, and more. In our previous post, An Easy Introduction
Virtual and Augmented Reality: The Rise and Drawbacks of AR
While being closed off from the real world is an experience achievable with virtual reality (VR) headsets, augmented reality (AR) offers images and data combined with real-time views to create an enriched and computing-enhanced experience. IDTechEx‘s portfolio of reports, including “Optics for Virtual, Augmented and Mixed Reality 2024-2034: Technologies, Players and Markets“, explores the latest
Why Smaller, More Accurate GenAI Models Put Safety Front and Center
This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. In the rapidly evolving world of generative artificial intelligence (GenAI), the focus has traditionally been on large, complex models that require significant computational resources. However, a new trend is emerging: the development and deployment of small, efficient
Revolutionizing Embedded Vision: Macnica Americas at CES 2025
At CES® 2025, Macnica Americas is proud to spotlight the transformative potential of generative AI in embedded vision solutions, showcasing its role as a leader in imaging and vision solutions. Collaborating with innovative partners such as iENSO and Ambarella, Macnica is enabling next-generation advancements in IoT, surveillance, consumer electronics, and beyond. iENSO, a trusted solution
Axelera AI Partners with Arduino to Extend the Full Power of AI to the Edge
Axelera AI – a leading edge-inference company – and Arduino, the global leader in open-source hardware and software, today announced a strategic partnership to make high-performance AI at the edge more accessible than ever, building advanced technology solutions based on inference and an open ecosystem. This furthers Axelera AI’s strategy to democratize artificial intelligence everywhere.
Hardware for HPC and AI 2025-2035: Technologies, Markets and Forecasts
For more information, visit https://www.idtechex.com/en/research-report/hardware-for-hpc-and-ai-2025-2035-technologies-markets-forecasts/1058. HPC hardware market to reach US$581 billion by 2035 at a CAGR of 13.6%. This report examines the high-performance computing (HPC) and AI hardware markets. It provides an overview of the overall HPC market in the exascale era, including analysis of major supercomputers like El Capitan. Detailed overview of technologies
NVIDIA Unveils Its Most Affordable Generative AI Supercomputer
The Jetson Orin Nano Super delivers up to a 1.7x gain in generative AI performance, supporting popular models for hobbyists, developers and students. NVIDIA is taking the wraps off a new compact generative AI supercomputer, offering increased performance at a lower price with a software upgrade. The new NVIDIA Jetson Orin Nano Super Developer Kit,
An Easy Introduction to Multimodal Retrieval-augmented Generation
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. A retrieval-augmented generation (RAG) application has exponentially higher utility if it can work with a wide variety of data types—tables, graphs, charts, and diagrams—and not just text. This requires a framework that can understand and generate responses
Sony Semiconductor Demonstration of Its AITRIOS Edge AI Platform for Computer Vision
Armaghan Ebrahimi, Senior Technical Product Manager at Sony Semiconductor, demonstrates the company’s latest edge AI and vision technologies and products at the December 2024 Edge AI and Vision Alliance Forum. Specifically, Ebrahimi demonstrates the company’s AITRIOS edge AI platform for computer vision. AITRIOS integrates powerful AI development tools, training services, and edge devices, including the
“Using Computer Vision-powered Robots to Improve Retail Operations,” a Presentation from Simbe Robotics
Durgesh Tiwari, VP of Hardware Systems, R&D at Simbe Robotics, presents the “Using Computer Vision-powered Robots to Improve Retail Operations” tutorial at the December 2024 Edge AI and Vision Innovation Forum. In this presentation, you’ll learn how Simbe Robotics’ AI- and CV-enabled robot, Tally, provides store operators with real-time intelligence to improve inventory management, streamline
Frontgrade Gaisler Licenses BrainChip’s Akida IP to Deploy AI Chips into Space
Laguna Hills, Calif. – December 15, 2024 – BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of ultra-low power, fully digital, event-based, neuromorphic AI, today announced that Frontgrade Gaisler, a leading provider of radiation-hardened microprocessors for space applications, has licensed its Akida™ IP for incorporation into space-grade, fault-tolerant system-on-chip
The Impact of AI On the Automotive Industry
This blog post was originally published at Digica’s website. It is reprinted here with the permission of Digica. The automotive industry has undergone a profound transformation over the years, with technological advancements driving innovation at an unprecedented pace. One of the most influential technologies shaping the future of vehicles is Artificial Intelligence. AI’s integration into
Microchip Expands PolarFire FPGA and SoC Solution Stacks with New Offerings for Medical Imaging and Smart Robotics
Application-specific, integrated hardware and software technology stacks lower the barrier of entry and speed time to market CHANDLER, Ariz., December 12, 2024 — The rise of IoT, industrial automation and smart robotics, along with the proliferation of medical imaging solutions to the intelligent edge, has made designing these types of power and thermally constrained applications more
Software-defined Vehicles: AI Assistants and Biometrics
Software-defined vehicles (SDVs) represent a combination of automotive features that provide new possibilities for passengers to engage with vehicles. In the report, “Software-Defined Vehicles, Connected Cars, and AI in Cars 2024-2034: Markets, Trends, and Forecasts“, IDTechEx depicts how the cellular connectivity within SDVs can provide access to IoT (Internet of Things) features including OTA (over-the-air)
Qualcomm at NeurIPS 2024: Our Groundbreaking Innovations and Cutting-edge Advancements in AI
This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. See how Qualcomm AI Research continues to innovate, from platform enhancements to AI fundamentals Neural Information Processing Systems (NeurIPS), the premier machine learning conference, returns in person this year with an impressive 25% acceptance rate, maintaining its
Lattice Advances Low Power FPGA Leadership with New Small and Mid-range FPGA Offerings
Introduces Lattice Nexus 2 next-gen small FPGA platform, extends mid-range portfolio with Lattice Avant 30 and Avant 50 devices, and enhances capabilities of application-specific solution stacks and design software tools HILLSBORO, Ore. – Dec. 10, 2024 – Today, at Lattice Developers Conference 2024, Lattice Semiconductor (NASDAQ: LSCC) expanded its edge to cloud FPGA innovation leadership
STMicroelectronics to Boost AI at the Edge with New NPU-accelerated STM32 Microcontrollers
New machine-learning capabilities make it possible to run computer vision, audio processing, sound analysis and more consumer and industrial applications at the edge STM32N6 MCU series is the most powerful in STM32 family, and first to feature proprietary Neural-ART Accelerator™ NPU, architected for embedded inference Combination of software and tools ecosystem continues to lower the
BrainChip Awarded Air Force Research Laboratory Radar Development Contract
Laguna Hills, Calif. – DECEMBER 9, 2024 – BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of ultra-low power, fully digital, event-based, neuromorphic AI, today announced that it was awarded a development contract for $1.8M from the Air Force Research Laboratory (AFRL) on neuromorphic radar signal processing technologies. The AFRL contract
How RTX AI PCs Unlock AI Agents That Solve Complex Problems Autonomously With Generative AI
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. NVIDIA RTX-accelerated AnythingLLM launches Community Hub for sharing prompts, slash commands and AI agent skills. Editor’s note: This post is part of the AI Decoded series, which demystifies AI by making the technology more accessible, and showcases