Patrick Worfolk, Senior Vice President and CTO of Synaptics, presents the “Enabling Visual AI at the Edge: From Surveillance Cameras to People Counters” tutorial at the May 2021 Embedded Vision Summit.
New AI-at-the-edge processors with improved efficiency and flexibility are unleashing a huge opportunity to democratize computer vision across all markets, enabling edge AI devices with small, low-cost, low-power cameras. Synaptics has embarked on a roadmap of edge-AI DNN processors targeted at a range of real-time computer vision and multimedia applications. These span from enhancing the image quality of a high-resolution camera’s output using Synaptics’ VS680 multi-TOPS processor to performing computer vision at lower resolutions in battery-powered devices using the company’s Katana Edge-AI SoC.
In this talk, Worfolk shows how these edge AI SoCs can be used to:
- Achieve exceptional color video in very low-light conditions
- De-noise and distortion-correct both 2D and 3D imagery from a time-of-flight depth camera imaging through a smartphone OLED display
- Perform super-resolution enhancement of high-resolution video imagery, and
- Recognize objects using lower-resolution sensors under battery power.
See here for a PDF of the slides.