CDNN software framework, in conjunction with the CEVA-XM4 imaging and vision DSP, enables:
- Real-time object recognition and vision analytics
- Lowest power deep learning solution for embedded systems: 30x lower power and 3x faster processing when compared to leading GPU-based systems
- 15x average memory bandwidth reduction compared to typical neural network implementations
- Automatic conversion from offline pre-trained networks to real-time embedded-ready networks
- Flexibility to support various neural network structures, including any number and type of layers
SANTA CLARA, Calif., Oct. 7, 2015 /PRNewswire/ — Linley Processor Conference 2015 — CEVA, Inc. (NASDAQ: CEVA), the leading licensor of DSP IP platforms for cellular, multimedia and connectivity, today introduced the CEVA Deep Neural Network (CDNN), a real-time neural network software framework that streamlines machine learning deployment in low-power embedded systems. Harnessing the processing power of the CEVA-XM4 imaging and vision DSP, CDNN enables embedded systems to perform deep learning tasks 3x faster than leading GPU-based systems while consuming 30x less power and requiring 15x less memory bandwidth*. For example, running a deep neural network (DNN)-based pedestrian detection algorithm at 28nm requires less than 30mW for a 1080p, 30 frames per second video stream.
Key to the performance, low power and low memory bandwidth capabilities of CDNN is the CEVA Network Generator, a proprietary automated technology that converts a customer's network structure and weights to a slim, customized network model used in real time. The result is a faster network model that consumes significantly less power and memory bandwidth, with less than 1% degradation in accuracy compared to the original network. Once the customized embedded-ready network is generated, it runs on the CEVA-XM4 imaging and vision DSP using fully optimized Convolutional Neural Network (CNN) layers, software libraries and APIs.
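For illustration only, the short C sketch below shows the general class of offline-to-embedded conversion such a generator performs: float weights are reduced to 8-bit fixed point offline, and the runtime works on the smaller integers. The quantization scheme, function names and data are assumptions made for this example; they are not CEVA's algorithm or the CDNN API.

```c
/*
 * Illustration only: a generic offline-to-embedded conversion of the kind
 * a network generator performs, shrinking float weights to 8-bit fixed
 * point so the runtime needs less memory bandwidth. This is NOT CEVA's
 * algorithm or the CDNN API; all names and data here are made up.
 */
#include <stdio.h>
#include <stdint.h>
#include <math.h>

#define N 8

/* Offline step: map float values to int8 with a single symmetric scale. */
static float quantize(const float *v, int8_t *q, int n)
{
    float max = 0.0f;
    for (int i = 0; i < n; i++)
        if (fabsf(v[i]) > max) max = fabsf(v[i]);
    if (max == 0.0f) max = 1.0f;        /* avoid dividing by zero */
    float scale = max / 127.0f;
    for (int i = 0; i < n; i++)
        q[i] = (int8_t)lrintf(v[i] / scale);
    return scale;                       /* stored with the embedded model */
}

/* Real-time step: integer dot product, rescaled once at the end. */
static float dot_q(const int8_t *qw, const int8_t *qx, int n,
                   float sw, float sx)
{
    int32_t acc = 0;
    for (int i = 0; i < n; i++)
        acc += (int32_t)qw[i] * qx[i];
    return (float)acc * sw * sx;
}

int main(void)
{
    float w[N] = { 0.12f, -0.50f, 0.33f, 0.91f, -0.08f, 0.47f, -0.71f, 0.25f };
    float x[N] = { 1.00f,  0.50f, -0.25f, 0.75f, 0.10f, -0.60f, 0.30f, 0.90f };

    int8_t qw[N], qx[N];
    float sw = quantize(w, qw, N);      /* the offline "generator" pass */
    float sx = quantize(x, qx, N);      /* input quantized the same way */

    float ref = 0.0f;                   /* original float result        */
    for (int i = 0; i < N; i++)
        ref += w[i] * x[i];

    printf("float: %.4f  fixed-point: %.4f\n", ref, dot_q(qw, qx, N, sw, sx));
    return 0;
}
```

In this kind of scheme, keeping the fixed-point result within a small tolerance of the float result is what allows the slimmer model to stay close to the original network's accuracy.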
Phi Algorithm Solutions, a member of CEVA's CEVAnet partner program, has used CDNN to implement a CNN-based Universal Object Detector algorithm for the CEVA-XM4 DSP. It is now available for application developers and OEMs to run a variety of applications, including pedestrian detection and face detection, for security, ADAS and other low-power, camera-enabled embedded devices.
"The CEVA Deep Neural Network framework provided a quick and smooth path from offline training to real-time detection for our convolutional neural network based algorithms," said Steven Hanna, president and co-founder at Phi Algorithm Solutions. "In a matter of days we were able to get an optimized implementation of our unique object detection network, while significantly reducing power consumption compared to other platforms. The CEVA-XM4 imaging & vision DSP together with the CDNN framework is ideal for embedded vision devices and paves the way to advances in artificial intelligence devices in the coming years using deep learning techniques."
"With more than 20 design wins to-date, we continue to lead the industry in the embedded vision processor domain and are constantly enhancing our portfolio of vision IP offerings to help our customers get to market quicker with minimal risk," said Eran Briman, vice president of marketing at CEVA. "Our new Deep Neural Network framework for the CEVA-XM4 is the first of its kind in the embedded industry, providing a significant step forward for developers looking to implement viable deep learning algorithms within power-constrained embedded systems."
The CDNN software framework is supplied as source code, extending the CEVA-XM4's existing Application Developer Kit (ADK). It is flexible and modular, capable of supporting either a complete CNN implementation or specific layers, and it works with a wide range of network structures, including networks developed with the Caffe, Torch or Theano training frameworks as well as proprietary networks. CDNN includes real-time example models for image classification, localization and object recognition. It is intended for object and scene recognition, advanced driver assistance systems (ADAS), artificial intelligence (AI), video analytics, augmented reality (AR), virtual reality (VR) and similar computer vision applications. For more information on CDNN, go to http://launch.ceva-dsp.com/cdnn/.
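As a rough, hypothetical sketch of the "complete CNN implementation or specific layers" idea, the example below chains simple layer functions into a pipeline that can also be invoked one layer at a time. The types and function names are illustrative assumptions and are not taken from the CDNN source code or the ADK.

```c
/*
 * Illustration only: the "complete network or individual layers" idea as a
 * tiny layer pipeline. The types and functions are hypothetical and are not
 * taken from the CDNN source code or the CEVA-XM4 ADK.
 */
#include <stdio.h>

typedef struct {
    const char *name;
    void (*run)(float *data, int len);  /* transforms activations in place */
} layer;

static void relu(float *d, int len)
{
    for (int i = 0; i < len; i++)
        if (d[i] < 0.0f) d[i] = 0.0f;
}

static void halve(float *d, int len)    /* stand-in for a pooling layer */
{
    for (int i = 0; i < len; i++)
        d[i] *= 0.5f;
}

int main(void)
{
    float act[4] = { -1.0f, 2.0f, -3.0f, 4.0f };

    /* Full-network use: run every layer in order... */
    layer net[] = { { "relu", relu }, { "pool", halve } };
    for (unsigned i = 0; i < sizeof net / sizeof net[0]; i++)
        net[i].run(act, 4);

    /* ...or single-layer use: call one optimized layer on its own. */
    relu(act, 4);

    printf("%.2f %.2f %.2f %.2f\n", act[0], act[1], act[2], act[3]);
    return 0;
}
```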
On November 12th, CEVA will host a live webinar on implementing machine vision in embedded systems, including a deep dive into CDNN. For more information and to register for the webinar, visit http://bit.ly/1hoam9p.
CEVA will also present CDNN at the Linley Processor Conference 2015, taking place today in Santa Clara, California. For more information, visit the Linley Processor Conference website at: http://www.linleygroup.com/events/event.php?num=35.
*Comparison run on the AlexNet network, the most popular deep neural network.
About CEVA, Inc.
CEVA is the leading licensor of cellular, multimedia and connectivity technologies to semiconductor companies and OEMs serving the mobile, consumer, automotive and IoT markets. Our DSP IP portfolio includes comprehensive platforms for multimode 2G/3G/LTE/LTE-A baseband processing in terminals and infrastructure, computer vision and computational photography for any camera-enabled device, audio/voice/speech and ultra-low power always-on/sensing applications for multiple IoT markets. For connectivity, we offer the industry's most widely adopted IPs for Bluetooth (Smart and Smart Ready), Wi-Fi (802.11 b/g/n/ac up to 4×4) and serial storage (SATA and SAS). One in every three phones sold worldwide is powered by CEVA, including devices from many of the world's leading OEMs such as Samsung, Huawei, Xiaomi, Lenovo, HTC, LG, Coolpad, ZTE, Micromax and Meizu. Visit us at www.ceva-dsp.com and follow us on Twitter, YouTube and LinkedIn.