“How Arm’s Machine Learning Solution Enables Vision Transformers at the Edge,” a Presentation from Arm

Stephen Su, Senior Segment Marketing Manager at Arm, presents the “How Arm’s Machine Learning Solution Enables Vision Transformers at the Edge” tutorial at the May 2024 Embedded Vision Summit.

AI at the edge has evolved rapidly over the last few years, with newer use cases running more efficiently and securely. Most edge AI workloads were initially run on CPUs, but machine learning accelerators have gradually been integrated into SoCs, providing more efficient solutions. At the same time, ChatGPT has driven a sudden surge in interest in transformer-based models, which are primarily deployed using cloud resources. Soon, many transformer-based models will be adapted to run effectively on edge devices.

In this presentation, Su explains the role of transformer-based models in vision applications and the challenges of implementing transformer models at the edge. Next, he introduces the latest Arm machine learning solution and how it enables the deployment of transformer-based vision networks at the edge. Finally, he shares an example implementation of a transformer-based embedded vision use case and uses it to contrast such solutions with those based on traditional CNNs.

See here for a PDF of the slides.

