“Challenges in Architecting Vision Inference Systems for Transformer Models,” a Presentation from Flex Logix

Cheng Wang, Co-founder and CTO of Flex Logix, presents the “Challenges in Architecting Vision Inference Systems for Transformer Models” tutorial at the May 2023 Embedded Vision Summit.

When used correctly, transformer neural networks can deliver greater accuracy for less computation. But transformers are challenging for existing AI engine architectures because they rely on many compute functions not required by the previously dominant convolutional neural networks (CNNs).
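To make the difference in operator mix concrete, here is a minimal sketch (not Flex Logix's implementation, just a generic illustration using standard PyTorch modules) contrasting a typical CNN block with a transformer encoder block. The transformer path adds operators such as softmax inside attention, LayerNorm, and GELU on top of its matrix multiplies:

```python
# Generic illustration of the operator mix in CNN vs. transformer blocks.
# This is a sketch for comparison only, not the InferX X1 architecture.
import torch
import torch.nn as nn

# CNN block: convolution + batch norm + ReLU dominate the compute.
cnn_block = nn.Sequential(
    nn.Conv2d(64, 64, kernel_size=3, padding=1),
    nn.BatchNorm2d(64),
    nn.ReLU(),
)

# Transformer encoder block: self-attention (activation-by-activation matmuls
# plus softmax), LayerNorm, and a GELU feed-forward network -- functions a
# CNN-oriented engine may not support natively.
class EncoderBlock(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, x):
        attn_out, _ = self.attn(x, x, x)      # attention: matmuls + softmax
        x = self.norm1(x + attn_out)          # residual + LayerNorm
        return self.norm2(x + self.ffn(x))    # residual + LayerNorm

if __name__ == "__main__":
    print(cnn_block(torch.randn(1, 64, 32, 32)).shape)   # CNN path
    print(EncoderBlock()(torch.randn(1, 196, 64)).shape)  # transformer path
```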

In this talk, Wang explores key transformer compute requirements and highlights how they differ from CNN compute requirements. He then introduces Flex Logix's InferX X1 AI accelerator silicon IP. Wang shows how the dynamic TPU array architecture used by InferX efficiently executes transformer neural networks. He also explains how InferX integrates into your system and how it scales to meet varying cost and performance requirements.

See here for a PDF of the slides.

