Ryad Benosman, Professor at the University of Pittsburgh and Adjunct Professor at the CMU Robotics Institute, presents the “Event-Based Neuromorphic Perception and Computation: The Future of Sensing and AI” tutorial at the May 2022 Embedded Vision Summit.
We say that today’s mainstream computer vision technologies enable machines to “see,” much as humans do. We refer to today’s image sensors as the “eyes” of these machines. And we call our most powerful algorithms deep “neural” networks. In reality, the principles underlying current mainstream computer vision are completely different from those underlying biological vision. Conventional image sensors operate very differently from eyes found in nature, and there’s virtually nothing “neural” about deep neural networks. Can we gain important advantages by implementing computer vision using principles of biological vision? Professor Ryad Benosman thinks so.
Mainstream image sensors and processors acquire and process visual information as a series of snapshots recorded at a fixed frame rate, resulting in limited temporal resolution, low dynamic range and a high degree of redundancy in data and computation. Nature suggests a different approach: Biological vision systems are driven and controlled by events occurring within the scene in view, not, as in conventional techniques, by artificially created timing and control signals that have no relation to the source of the visual information.
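The talk itself contains no code, but the contrast with frame-based capture is easy to illustrate. The sketch below converts a stack of grayscale frames into DVS-style events using the standard idealized contrast-threshold pixel model: a pixel emits an ON or OFF event whenever its log intensity has changed by more than a fixed threshold since that pixel last fired. The function name `frames_to_events` and the `threshold` value are illustrative choices, not parameters from the tutorial.

```python
import numpy as np

def frames_to_events(frames, timestamps, threshold=0.2):
    """Emit DVS-style (x, y, t, polarity) events from grayscale frames.

    A pixel fires whenever its log intensity has changed by at least
    `threshold` since the last time that pixel fired, so static regions
    of the scene produce no output at all. (Illustrative model only.)
    """
    log_frames = np.log(frames.astype(np.float64) + 1e-6)
    reference = log_frames[0].copy()       # log intensity at each pixel's last event
    events = []
    for frame, t in zip(log_frames[1:], timestamps[1:]):
        diff = frame - reference
        ys, xs = np.nonzero(np.abs(diff) >= threshold)
        for x, y in zip(xs, ys):
            polarity = 1 if diff[y, x] > 0 else -1
            events.append((x, y, t, polarity))
            reference[y, x] = frame[y, x]  # reset the reference for this pixel
    return events
```

Because only pixels that change produce output, the data rate scales with scene activity rather than with a fixed frame rate, which is where the redundancy and temporal-resolution advantages described above come from.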
The term “neuromorphic” refers to systems that mimic biological processes. In this talk, Professor Benosman — a pioneer of neuromorphic sensing and computing — introduces the fundamentals of bio-inspired, event-based image sensing and processing approaches, and explores their strengths and weaknesses. He shows that bio-inspired vision systems have the potential to outperform conventional, frame-based systems and to enable new capabilities in terms of data compression, dynamic range, temporal resolution and power efficiency in applications such as 3D vision, object tracking, motor control and visual feedback loops.
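As a rough illustration of why event-driven processing suits tracking and tight visual feedback loops, an estimator can update on every incoming event instead of waiting for the next frame. The sketch below is a toy exponential-moving-average centroid follower, not one of the tracking methods presented in the talk; it consumes the events produced by the sketch above, and the smoothing factor `alpha` is an arbitrary illustrative value.

```python
def track_centroid(events, alpha=0.05, init=(0.0, 0.0)):
    """Toy event-driven tracker: the position estimate is nudged toward
    every incoming event, so update latency is governed by the event
    rate rather than by a fixed frame period."""
    cx, cy = init
    trajectory = []
    for x, y, t, _polarity in events:
        cx += alpha * (x - cx)   # exponential moving average in x
        cy += alpha * (y - cy)   # exponential moving average in y
        trajectory.append((t, cx, cy))
    return trajectory
```

Each update is a handful of arithmetic operations on a single event, which hints at how event-based pipelines can trade redundant full-frame computation for sparse, low-latency, low-power processing.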
See here for a PDF of the slides.