Lina Karam, Professor and Computer Engineering Director at Arizona State University, presents the “Generative Sensing: Reliable Recognition from Unreliable Sensor Data” tutorial at the May 2018 Embedded Vision Summit.
While deep neural networks (DNNs) perform on par with – or better than – humans on pristine high-resolution images, DNN performance is significantly worse than human performance on images with quality degradations, which are frequently encountered in real-world applications. This talk introduces a new generative sensing framework that integrates low-end sensors with computational intelligence to achieve recognition accuracy on par with that of high-end sensors.
This generative sensing framework aims to transform low-quality sensor data into higher-quality data in terms of classification accuracy. In contrast to existing image generation methods, this framework is built on discriminative models and aims to maximize recognition accuracy rather than a similarity measure. This is achieved through the introduction of selective feature regeneration in a deep neural network.
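To make the key distinction concrete – optimizing a regeneration step for the classifier's loss rather than for similarity to the clean data – here is a toy NumPy sketch. It is an illustration of the idea only, not the talk's actual method: the "features", the linear degradation, the frozen linear classifier, and the linear regeneration map `R` are all simplifying assumptions. The regeneration map is trained with the classification (logistic) loss of the frozen classifier, not with a reconstruction loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian classes in a 2-D "feature" space (clean, high-end sensor).
n = 500
X0 = rng.normal([-2.0, 0.0], 1.0, size=(n, 2))
X1 = rng.normal([+2.0, 0.0], 1.0, size=(n, 2))
X = np.vstack([X0, X1])
y = np.array([0] * n + [1] * n)

# A frozen linear classifier that works well on clean features:
# the sign of the first coordinate separates the classes.
w = np.array([1.0, 0.0])

def accuracy(feats):
    return np.mean((feats @ w > 0).astype(int) == y)

# "Low-end sensor": degraded features = clean features passed through a
# lossy mixing transform plus noise (a stand-in for blur / low resolution).
D = np.array([[0.3, 0.8], [0.8, 0.3]])   # hypothetical degradation
F_deg = X @ D + rng.normal(0.0, 0.3, size=X.shape)

# Feature regeneration (toy version): learn a map R applied to the degraded
# features, trained by gradient descent on the *classification* loss of the
# frozen classifier rather than on similarity to the clean features.
R = np.eye(2)
lr = 0.1
for _ in range(300):
    z = (F_deg @ R) @ w                   # classifier logits
    p = 1.0 / (1.0 + np.exp(-z))          # sigmoid
    grad_z = (p - y) / len(y)             # d(log-loss)/dz
    # z_i = F_deg[i] @ R @ w, so dz_i/dR = outer(F_deg[i], w)
    grad_R = F_deg.T @ (grad_z[:, None] * w[None, :])
    R -= lr * grad_R

print(f"degraded accuracy:    {accuracy(F_deg):.3f}")
print(f"regenerated accuracy: {accuracy(F_deg @ R):.3f}")
```

In this sketch the regenerated features need not resemble the clean ones at all; they only need to land on the right side of the frozen classifier's decision boundary, which is the essence of optimizing for recognition accuracy instead of a similarity measure.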