Latest NPU Adds to Arm’s AI Platform Performance, Applicability, and Efficiency

News Highlights:

  • Adding to the success of Arm’s microNPU product line, Arm Ethos-U65 maintains the power efficiency of the Ethos-U55, while extending its applicability to Arm Cortex-A- and Arm Neoverse-based systems

  • Continued support for existing software and toolchains to provide a single, unified developer experience

  • Expanded technology partnership with NXP to deliver energy-efficient, cost-effective AI solutions for the fast-growing Industrial and IoT markets

With an ever-increasing range of devices adding artificial intelligence (AI) functionality, Arm today announced the latest member of its Ethos product line, the Arm Ethos-U65 microNPU (Neural Processing Unit). Focused on AI/machine learning (ML) processing, and offering a new performance point and new capabilities, the Ethos-U65 maintains the power efficiency of the Arm Ethos-U55 while extending its applicability from Arm Cortex-M to Arm Cortex-A and Arm Neoverse-based systems and delivering twice the on-device ML performance.

The rapid adoption of AI and ML into edge and endpoint devices is driving not only increased functionality but also increased device and system requirements. This in turn means providers must deliver systems with more performance and on-device ML capabilities, while maintaining or improving power efficiency.

Enabling AI everywhere: No device left behind

Arm’s unparalleled AI platform is designed to deliver flexible, scalable technologies to the broadest range of devices. Earlier this year we announced new ML IP, the Arm Cortex-M55 processor and Arm Ethos-U55, the world’s first microNPU, a combined solution delivering a 480x leap in ML performance for microcontrollers. This solution is enabling ML compute to happen everywhere, from the tiniest endpoint devices to smart home solutions and beyond.

We’re already seeing great success with the Ethos-U55, as licensees including NXP Semiconductors deliver solutions that address the increasing demand for on-device ML processing across billions of small, power-constrained IoT and embedded devices. NXP will now also add the Ethos-U65 to its SoC offerings, providing its customers with more on-device intelligence for smarter, more powerful devices that benefit from high efficiency, greater privacy, and reliability.

Ethos-U65: Powering a new wave of edge AI

5G network rollouts are enabling and enhancing connectivity for billions of diverse devices, which is why we added another performance point with the Ethos-U65. This will enable an additional range of solutions for complex AI workloads across the Cortex-M, Cortex-A, and Neoverse processor families, while increasing throughput and efficiency on a consistent software stack and with familiar tools. By enabling greater system-level performance in rich embedded devices, as well as subsystems in applications like data and control planes of Neoverse-based systems, developers can unlock better throughput and power efficiency, more intelligence, and more natural ways to interact with our connected world.

For more technical details, read this blog from my colleague Tanuj Arora.

Solving the ML software challenge: Common toolchains so it just works

Critical to enabling the AI revolution is the need to support the millions of developers working to make this technology innovative and accessible to the masses. The Ethos-U65 leverages the extensive Arm AI ecosystem that is creating and optimizing algorithms to broadly enable on-device AI. The Arm AI Platform delivers a unified software and tools interface for implementing the most popular neural networks across the heterogeneous Arm-based processors typically found in modern integrated circuits (ICs). In addition, there is rich support for open-source software to bring the flexibility required in implementing these complex use cases.
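
As a rough illustration of that workflow (not part of the announcement), a common first step for targeting an Ethos-U class NPU is post-training full-integer quantization with the TensorFlow Lite converter; the model file name, input shape, and calibration data below are placeholders, not anything specified by Arm.

```python
# Sketch: quantize a trained Keras model to an int8 TensorFlow Lite flatbuffer,
# the format typically used as input to Ethos-U deployment tooling.
import numpy as np
import tensorflow as tf

def representative_data_gen():
    # Yield a handful of calibration samples so the converter can choose
    # quantization ranges; the shape must match the model's input tensor.
    for _ in range(100):
        yield [np.random.rand(1, 96, 96, 1).astype(np.float32)]

model = tf.keras.models.load_model("keyword_spotting.h5")  # placeholder model

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
# Restrict the graph to int8 built-in ops so it can be offloaded to the NPU.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("model_int8.tflite", "wb") as f:
    f.write(converter.convert())
```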

We recognize that one of the greatest benefits for our developer ecosystem is alignment around common platforms, protecting the scale of investments through portability, standards, and common APIs. Our guiding principle is to let developers write code once and deploy it anywhere, by providing an extensible platform with common software and toolchains and a simple goal: it should just work. A brief sketch of what that retargeting can look like in practice follows below.
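
As a minimal sketch of "write once, deploy anywhere", and assuming Arm's Vela compiler is installed (for example via pip as ethos-u-vela) and available on the PATH, the same quantized model can be compiled for different Ethos-U configurations simply by changing a target option; the exact option values should be checked against the installed Vela release.

```python
# Sketch: compile one quantized .tflite model for two different Ethos-U targets
# by invoking the Vela command-line compiler with different accelerator configs.
import subprocess

MODEL = "model_int8.tflite"  # output of the quantization step sketched earlier

for accel in ("ethos-u55-128", "ethos-u65-512"):
    subprocess.run(
        ["vela", MODEL,
         "--accelerator-config", accel,   # target NPU configuration (assumed values)
         "--output-dir", f"out_{accel}"], # keep per-target outputs separate
        check=True,
    )
```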

An unprecedented opportunity

AI and ML represent a once-in-a-generation paradigm shift in capability using data, compute, and software. Arm continues to support this transformation with an unparalleled roadmap of dedicated ML processors and technologies, enabling everyone from cloud providers to tiny IoT device manufacturers.

Through the momentum of our Ethos architecture, and the massive success we’ve already seen for AI devices, we’re helping the ecosystem unlock significant value by delivering the foundations for many new possibilities, maybe even ones we’ve yet to imagine.

Dennis Laudick
Vice President of Marketing, Machine Learning Group, Arm
