"Tailoring Convolutional Neural Networks for Low-Cost, Low-Power Implementation," a Presentation from Synopsys
Deep learning-based object detection using convolutional neural networks (CNNs) has recently emerged as one of the leading approaches for achieving state-of-the-art detection accuracy across a wide range of object classes. Most current CNN-based detection implementations run on high-performance computing platforms built around high-end general-purpose processors and GP-GPUs, and they have significant compute and memory requirements. Bruno Lavigueur, Project Leader for Embedded Vision at Synopsys, presents the company's experience in reducing the complexity of the CNN graph to make the resulting algorithm amenable to low-cost, low-power computing platforms. This involves reducing the compute requirements, shrinking the memory needed to store convolution coefficients, and moving from floating-point to 8- and 16-bit fixed-point data widths. Lavigueur demonstrates results for a face detection application running on a dedicated low-cost, low-power multi-core platform optimized for CNN-based applications.
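To make the floating-point to fixed-point step more concrete, the sketch below quantizes a tensor of convolution coefficients to 8-bit signed integers with a single per-tensor scale. This is a minimal, hypothetical illustration of symmetric quantization, not Synopsys's actual toolflow; the function names, tensor shape, and scaling scheme are assumptions.

```python
import numpy as np

def quantize_symmetric(weights, bits=8):
    """Map float32 convolution coefficients to signed fixed-point integers.

    Minimal symmetric scheme: one scale per tensor, chosen so the largest
    absolute coefficient lands at the top of the signed integer range.
    """
    qmax = 2 ** (bits - 1) - 1                 # 127 for 8-bit, 32767 for 16-bit
    scale = float(np.max(np.abs(weights))) / qmax
    dtype = np.int8 if bits == 8 else np.int16
    q = np.clip(np.round(weights / scale), -qmax, qmax).astype(dtype)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float coefficients to check quantization error."""
    return q.astype(np.float32) * scale

# Example: quantize a hypothetical 3x3 convolution layer and report the error.
coeffs = np.random.randn(16, 8, 3, 3).astype(np.float32)
q8, s8 = quantize_symmetric(coeffs, bits=8)
print("max abs error (8-bit):", np.max(np.abs(coeffs - dequantize(q8, s8))))
```

Storing the coefficients as 8-bit integers plus one scale cuts coefficient memory to roughly a quarter of the float32 footprint, which is the kind of saving the presentation targets for low-power platforms.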
"An Augmented Navigation Platform: The Convergence of ADAS and Navigation," a Presentation from Harman
Until recently, advanced driver assistance systems (ADAS) and in-car navigation systems evolved as separate, standalone systems. Today, however, the combination of available embedded computing power and modern computer vision algorithms enables the merger of these functions into an immersive driver information system. In this presentation, Alon Atsmon, Vice President of Technology Strategy at Harman International, discusses the company's activities in ADAS and navigation. He explores how computer vision enables more intelligent systems with more natural user interfaces, and highlights some of the challenges associated with using computer vision to deliver a seamless, reliable experience for the driver. Atsmon also demonstrates Harman’s latest unified Augmented Navigation platform, which overlays ADAS warnings and navigation instructions on a camera feed or on the driver’s actual road view.
More Videos
A Design Approach for Real Time Classifiers
Object detection and classification is a supervised learning process used in machine vision to recognize patterns or objects in images or other data, according to Sudheesh TV and Anshuman S Gauriar, Technical Leads at PathPartner Technology. It is a major component of advanced driver assistance systems (ADAS), for example, where it is commonly used to detect pedestrians, vehicles, traffic signs and other objects. The offline classifier training process fetches sets of selected images and other data containing objects of interest, extracts features from this input, and maps them to corresponding labelled classes in order to generate a classification model. Real-time inputs are then categorized against this pre-trained model in an online process, which ultimately decides whether or not the object is present. More
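The sketch below illustrates the offline-training / online-classification split described in the article: a linear SVM is trained on HOG features extracted from labelled image patches, then applied to a new patch to decide whether the object is present. It is a generic example assuming scikit-image and scikit-learn, not PathPartner's implementation; the window size, parameters, and placeholder data are assumptions.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

WINDOW = (64, 128)   # assumed detection window (width, height)

def extract_features(patch):
    """Shared offline/online step: compute a HOG descriptor for one patch."""
    return hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

def train_classifier(patches, labels):
    """Offline training: map feature vectors to labelled classes."""
    X = np.array([extract_features(p) for p in patches])
    clf = LinearSVC(C=0.01)
    clf.fit(X, labels)
    return clf

def classify(clf, patch):
    """Online step: decide whether the object is present in a new patch."""
    return clf.predict([extract_features(patch)])[0]

# Placeholder data: random grayscale patches standing in for a real dataset.
rng = np.random.default_rng(0)
patches = [rng.random((WINDOW[1], WINDOW[0])) for _ in range(20)]
labels = [i % 2 for i in range(20)]          # 1 = object, 0 = background
clf = train_classifier(patches, labels)
print("object present:", bool(classify(clf, patches[0])))
```

In a real ADAS deployment the online step would run this classifier over a sliding window or candidate regions of each video frame, with the model trained once offline on a labelled dataset.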
Pokemon Go-es to Show the Power of AR
Pokemon Go is an awesome concept, says Freddi Jeffries, Content Marketer at ARM. While she's a strong believer in VR (virtual reality) as a driving force in how we will handle much of our lives in the future, she can now see that apps like this have the potential to take AR (augmented reality) mainstream much faster than VR. More
More Articles
ARC Processor Summit: September 13, 2016, Santa Clara, California
Deep Learning for Vision Using CNNs and Caffe: A Hands-on Tutorial: September 22, 2016, Cambridge, Massachusetts
IEEE International Conference on Image Processing (ICIP): September 25-28, 2016, Phoenix, Arizona
SoftKinetic DepthSense Workshop: September 26-27, 2016, San Jose, California
Sensors Midwest (use code EVA for a free Expo pass): September 27-28, 2016, Rosemont, Illinois
Embedded Vision Summit: May 1-3, 2017, Santa Clara, California
More Events