Deepak Mital, Senior Director of Architectures at Synaptics, presents the “How to Run Audio and Vision AI Algorithms at Ultra-low Power” tutorial at the May 2024 Embedded Vision Summit.
Running AI algorithms on battery-powered, low-cost devices requires a different approach to hardware and software design. Standby power budgets are stringent, yet the device must be able to wake quickly when an event is detected. Rather than waking fully, the device first "pseudo" wakes, determines whether the event needs attention, and then either returns to standby or becomes fully active to classify the event.
This multistage wake-up process and the associated intelligence require tight orchestration of hardware and software. Apart from runtime software, the AI models must be highly optimized to fit and run on the constrained device. To show how this can be done, Mital presents a solution that combines hardware, software and AI models to enable running audio and vision AI algorithms at ultra-low power.
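The multistage wake-up flow described above can be sketched as a small state machine. This is an illustrative sketch only: the state names and the detector, screening, and classification callbacks are hypothetical placeholders, not APIs from the talk.

```python
from enum import Enum, auto

class State(Enum):
    STANDBY = auto()      # ultra-low-power: only a simple event detector runs
    PSEUDO_WAKE = auto()  # lightweight check: does the event need attention?
    ACTIVE = auto()       # full power: run the optimized classification model

def step(state, event_detected, worth_attention, classify):
    """Advance the wake-up state machine by one tick.

    `event_detected`, `worth_attention`, and `classify` are placeholder
    callables standing in for the always-on detector, the lightweight
    screening stage, and the full AI model, respectively.
    """
    if state is State.STANDBY:
        # Stay asleep until the detector fires, then pseudo wake.
        return State.PSEUDO_WAKE if event_detected() else State.STANDBY
    if state is State.PSEUDO_WAKE:
        # Either escalate to full wake-up or drop back to standby.
        return State.ACTIVE if worth_attention() else State.STANDBY
    # ACTIVE: classify the event, then return to low-power standby.
    classify()
    return State.STANDBY
```

The point of the tiered structure is that the expensive classification stage runs only after two cheaper stages have agreed the event is worth it, keeping average power close to the standby floor.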
See here for a PDF of the slides.