Imagination Launches Multi-core IMG Series4 NNA – The Ultimate AI Accelerator Delivering Industry-disruptive Performance for ADAS and Autonomous Driving

Offering the most responsive and power-efficient neural network acceleration for next-generation automotive use cases

London, England – 12th November 2020 – Imagination Technologies announces IMG Series4, its next-generation neural network accelerator (NNA) for advanced driver-assistance systems (ADAS) and autonomous driving. Series4 targets leading automotive disruptors as well as Tier 1s, original equipment manufacturers (OEMs) and automotive semiconductor system-on-chip (SoC) manufacturers.

Featuring a new multi-core architecture, Series4 delivers ultra-high performance of 600 tera operations per second (TOPS) and beyond, offering low bandwidth and exceptionally low latency for large neural network workloads.

The automotive industry is on the cusp of a revolution, with new use cases such as self-driving cars and robotaxis demanding new levels of artificial intelligence (AI) performance. To that end, Imagination is already working with leading players and innovators in automotive, and other industries where functional safety is valued. Series4 has already been licensed and will be available on the market in December 2020.

Imagination’s low-power NNA architecture is designed to run full network inferencing while meeting functional safety requirements. It executes multiple operations in a single pass to maximise performance per watt and deliver its industry-leading energy efficiency.

Series4 includes:

  • Multi-core scalability and flexibility: The multi-core architecture allows flexible allocation and synchronisation of workloads across the cores. Imagination’s software, which provides fine-grained control and increases flexibility through the batching, splitting and scheduling of multiple workloads, can now be exploited across any number of cores. Series4 is available in configurations of 2, 4, 6 or 8 cores per cluster.
  • Ultra-high performance: Series4 offers 12.5 TOPS per core at less than one watt. An 8-core cluster therefore delivers 100 TOPS, and a solution built from six such clusters offers 600 TOPS (see the worked example after this list). For AI inference, a Series4 NNA is over 20x faster than an embedded GPU and 1000x faster than an embedded CPU.
  • Ultra-low latency: Combining the cores into a 2, 4, 6 or 8-core cluster allows all of them to be dedicated to executing a single task, reducing latency, and therefore response time, by a corresponding factor: by a factor of eight for an 8-core cluster, for example.
  • Major bandwidth savings: Imagination’s Tensor Tiling (ITT), new to Series4, is a patent-pending technology that improves bandwidth efficiency by splitting input data tensors into multiple tiles for efficient data processing. ITT exploits local data dependencies to keep intermediate data in on-chip memory, minimising transfers to external memory and reducing bandwidth by up to 90% (a simplified sketch of the idea follows this list). ITT is a scalable algorithm that delivers the greatest benefit on networks with large input data sizes.
  • Automotive safety: Series4 includes IP-level safety features and a design process that conforms to ISO 26262 to help customers to achieve certification. ISO 26262 is the industry safety standard that addresses risk in automotive electronics. Series4 enables the safe inference of a neural network without impacting performance. Hardware safety mechanisms protect the compiled network, the execution of the network and the data-processing pipeline.
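The headline performance and latency figures above follow from simple scaling arithmetic. The Python sketch below is purely illustrative and uses only the per-core figure quoted in this release (12.5 TOPS per core); the function names are ours and are not part of any Imagination API, and the latency factor assumes ideal scaling across cores.

    # Illustrative scaling arithmetic only; not an Imagination API.
    TOPS_PER_CORE = 12.5  # quoted Series4 per-core throughput

    def cluster_tops(cores_per_cluster: int) -> float:
        """Peak TOPS for a single cluster of Series4 cores."""
        return TOPS_PER_CORE * cores_per_cluster

    def solution_tops(clusters: int, cores_per_cluster: int = 8) -> float:
        """Peak TOPS for a multi-cluster solution."""
        return clusters * cluster_tops(cores_per_cluster)

    def latency_factor(cores_per_cluster: int) -> float:
        """Idealised latency reduction when every core in a cluster
        works on the same task (assumes perfect scaling)."""
        return 1.0 / cores_per_cluster

    print(cluster_tops(8))    # 100.0 TOPS for an 8-core cluster
    print(solution_tops(6))   # 600.0 TOPS for six 8-core clusters
    print(latency_factor(8))  # 0.125, i.e. latency reduced by a factor of eight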
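To give a flavour of the idea behind Tensor Tiling, the sketch below pushes a tensor through two chained element-wise stages tile by tile, so the intermediate result never leaves a notional on-chip buffer; only the input read and the final output write touch external memory. This is a simplified, hypothetical model written for illustration, not Imagination’s ITT implementation; the tile size, the stages and the byte counting are our assumptions.

    import numpy as np

    # Toy model of tile-based processing: intermediate data stays "on chip",
    # so only input reads and output writes count as external-memory traffic.
    # Hypothetical illustration; not Imagination's ITT implementation.
    def process_tiled(x: np.ndarray, tile: int = 64) -> tuple[np.ndarray, int]:
        """Apply two chained element-wise stages tile by tile and return
        the result plus the bytes moved to/from external memory."""
        out = np.empty_like(x)
        external_bytes = 0
        for r in range(0, x.shape[0], tile):
            for c in range(0, x.shape[1], tile):
                block = x[r:r + tile, c:c + tile]      # read tile from external memory
                external_bytes += block.nbytes
                stage1 = np.maximum(block, 0)          # intermediate stays on chip
                stage2 = stage1 * 2.0                  # second stage reuses on-chip data
                out[r:r + tile, c:c + tile] = stage2   # write final tile to external memory
                external_bytes += stage2.nbytes
        return out, external_bytes

    x = np.random.rand(512, 512).astype(np.float32)
    _, tiled_bytes = process_tiled(x)
    # Untiled, the intermediate tensor would also be written out and read back
    # between the two stages: read x, write stage1, read stage1, write stage2.
    untiled_bytes = 4 * x.nbytes
    print(tiled_bytes, untiled_bytes)  # tiling halves external traffic here

In this toy two-stage case tiling roughly halves external traffic; the longer the chain of layers whose intermediate tensors can be kept on chip, the larger the saving, which is how savings approaching the quoted 90% become possible on networks with large intermediate data.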

James Hodgson, Principal Analyst, Smart Mobility and Automotive, ABI Research, says: “While we expect the demand for ADAS to triple by around 2027, the automotive industry is already looking beyond this to full self-driving cars and robotaxis. Wider adoption of neural networks will be an essential factor in the evolution from Level 2 and 3 ADAS to full self-driving at Level 4 and Level 5. These systems will have to cope with hundreds of complex scenarios, absorbing data from numerous sensors, such as multiple cameras and LiDAR, for solutions such as automated valet parking and intersection management, and for safely navigating complex urban environments. A combination of high performance, low latency and energy efficiency will be key to scaling highly automated driving.”

Andrew Grant, Senior Director, Artificial Intelligence, Imagination Technologies, says: “We believe the Series4 NNA to be the industry-standard platform for the development of advanced driver assistance and self-driving cars. Innovators are already tackling the task of creating the silicon that will support the next generation of ADAS features and autonomous vehicles. Any company or R&D team looking to be a serious player in automotive needs to be integrating this technology into their platforms now.”

To find out more about the Series4 NNA, watch our keynote session.

About Imagination Technologies

Imagination is a UK-based company that creates silicon and software intellectual property (IP) designed to give its customers an edge in a competitive global technology market. Its graphics, compute, vision & AI, and connectivity technologies enable outstanding power, performance and area (PPA), robust security, fast time-to-market and lower total cost of ownership. Products based on Imagination IP are used by billions of people across the globe in their phones, cars, homes, and workplaces. Imagination Technologies was acquired in 2017 by Canyon Bridge, a global private equity investment fund. See www.imgtec.com.

Follow Imagination on Twitter, YouTube, LinkedIn, RSS, Facebook and Blog.
