“How to Run Audio and Vision AI Algorithms at Ultra-low Power,” a Presentation from Synaptics

Deepak Mital, Senior Director of Architectures at Synaptics, presents the “How to Run Audio and Vision AI Algorithms at Ultra-low Power” tutorial at the May 2024 Embedded Vision Summit.

Running AI algorithms on battery-powered, low-cost devices requires a different approach to hardware and software design. Standby power requirements are stringent, yet the device must awaken quickly when an event is detected. The device first performs a “pseudo” wake-up to determine whether the event needs attention, then either returns to standby or becomes fully active to classify the event.

This multistage wake-up process and the associated intelligence require tight orchestration of hardware and software. Apart from runtime software, the AI models must be highly optimized to fit and run on the constrained device. To show how this can be done, Mital presents a solution that combines hardware, software and AI models to enable running audio and video AI algorithms at ultra-low power.
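The multistage wake-up flow described above can be sketched as a simple state machine. This is a hypothetical illustration only, not Synaptics’ implementation: the state names, `event_score` detector output, and thresholds are all assumptions made for the sketch.

```python
from enum import Enum, auto

class State(Enum):
    STANDBY = auto()      # ultra-low power; only a coarse always-on detector runs
    PSEUDO_WAKE = auto()  # partially awake; lightweight check of the event
    ACTIVE = auto()       # fully awake; full AI model classifies the event

def step(state, event_score, attention_threshold=0.3, classify_threshold=0.7):
    """Advance the wake-up state machine one step.

    event_score is a hypothetical confidence value from the detector
    stage currently running; the thresholds are illustrative.
    """
    if state is State.STANDBY:
        # Coarse detector fires: partially wake to inspect the event.
        if event_score >= attention_threshold:
            return State.PSEUDO_WAKE
        return State.STANDBY
    if state is State.PSEUDO_WAKE:
        # Lightweight check decides: escalate to full classification,
        # or go back to standby to save power.
        if event_score >= classify_threshold:
            return State.ACTIVE
        return State.STANDBY
    # ACTIVE: after classification completes, return to standby.
    return State.STANDBY
```

In a real device each state would map to different hardware power domains and progressively larger models, which is the orchestration the talk addresses.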

See here for a PDF of the slides.

