Khronos Launches Dual Neural Network Standard Initiatives

Industry Call for Participation in new Neural Network Exchange Format working group; OpenVX standard for vision processing releases Neural Network extension

October 4th 2016 – San Francisco, CA – The Khronos™ Group, an open consortium of leading hardware and software companies, today announced the creation of two standardization initiatives to address the growing industry interest in the deployment and acceleration of neural network technology. Firstly, Khronos has formed a new working group to create an API-independent standard file format for exchanging deep learning data between training systems and inference engines. Work on generating requirements and detailed design proposals for the Neural Network Exchange Format (NNEF™) is already underway, and companies interested in participating are welcome to join Khronos for a voice and a vote in the development process. Secondly, the OpenVX™ working group has released an extension to enable Convolutional Neural Network topologies to be represented as OpenVX graphs and mixed with traditional vision functions.

Neural network technology has seen recent explosive progress in solving pattern-matching tasks in computer vision such as object recognition, face identification, image search, and image-to-text, and is also playing a key part in enabling driver assistance and autonomous driving systems. Convolutional Neural Networks (CNNs) are computationally intensive, so many companies are actively developing mobile and embedded processor architectures to accelerate neural network-based inferencing at high speed and low power. As a result of such rapid progress, the market for embedded neural network processing is in danger of fragmenting, creating barriers for developers seeking to configure and accelerate inferencing engines across multiple platforms.

About the Neural Network Exchange Format (NNEF)

Today, most neural network toolkits and inference engines use proprietary formats to describe the trained network parameters, making it necessary to construct many proprietary importers and exporters to enable a trained network to be executed across multiple inference engines. The Khronos Neural Network Exchange Format (NNEF) is designed to simplify the process of using a tool to create a network, and running that trained network on other toolkits or inference engines. This can reduce deployment friction and encourage a richer mix of cross-platform deep learning tools, engines and applications.

The NNEF standard encapsulates neural network structure, data formats, commonly used operations (such as convolution, pooling, normalization, etc.) and formal network semantics. This enables the essentials of a trained network to be reliably exported and imported across tools and engines. NNEF is purely a data interchange format and deliberately does not prescribe how an exported network has been trained, or how an imported network is to be executed. This ensures that the data format does not hinder innovation and competition in this rapidly evolving domain. More information on the NNEF initiative is available at the NNEF Home Page.

About the OpenVX Neural Network Extension

The OpenVX Neural Network extension specifies an architecture for executing CNN-based inference in OpenVX graphs. The extension defines a multi-dimensional tensor object data structure which can be used to connect neural network layers, represented as OpenVX nodes, to create flexible CNN topologies. OpenVX neural network layer types include convolution, pooling, fully connected, normalization, soft-max and activation – with nine different activation functions. The extension enables neural network inferencing to be mixed with traditional vision processing operations in the same OpenVX graph.
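To illustrate the programming model, below is a minimal sketch in C of a small classifier head expressed as an OpenVX graph. It assumes the provisional vx_khr_nn header; the layer-node entry points and parameter lists shown (vxCreateTensor, vxFullyConnectedLayer, vxSoftmaxLayer) are illustrative, and the provisional specification remains the authoritative reference.

    #include <VX/vx.h>
    #include <VX/vx_khr_nn.h>   /* provisional neural network extension */

    /* Sketch: a fully connected layer followed by soft-max, built as OpenVX
     * nodes connected by tensor objects. Traditional vision nodes could be
     * added to the same graph. Signatures are illustrative. */
    vx_status run_classifier_head(vx_context ctx,
                                  vx_tensor features,  /* output of earlier layers */
                                  vx_tensor weights,
                                  vx_tensor biases)
    {
        vx_graph graph = vxCreateGraph(ctx);

        /* Multi-dimensional tensor objects connect the layer nodes;
           16-bit fixed-point data with 8 fractional bits is assumed here. */
        vx_size dims[2] = {1000, 1};  /* classes x batch */
        vx_tensor fc_out = vxCreateTensor(ctx, 2, dims, VX_TYPE_INT16, 8);
        vx_tensor scores = vxCreateTensor(ctx, 2, dims, VX_TYPE_INT16, 8);

        /* Two of the layer types listed above, each an ordinary OpenVX node. */
        vxFullyConnectedLayer(graph, features, weights, biases,
                              VX_CONVERT_POLICY_SATURATE,
                              VX_ROUND_POLICY_TO_NEAREST_EVEN, fc_out);
        vxSoftmaxLayer(graph, fc_out, scores);

        /* Verified and executed like any other OpenVX graph. */
        vx_status status = vxVerifyGraph(graph);
        if (status == VX_SUCCESS)
            status = vxProcessGraph(graph);

        vxReleaseTensor(&fc_out);
        vxReleaseTensor(&scores);
        vxReleaseGraph(&graph);
        return status;
    }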

Today, OpenVX has also released an Import/Export extension that complements the Neural Network extension by defining an API to import and export OpenVX objects, such as traditional computer vision nodes, data objects of a graph or partial graph, and CNN objects including network weights and biases or complete networks.
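As a rough sketch of how the two extensions could combine, the fragment below serializes a graph (including its CNN weight and bias tensors) on a development host and recreates it on a deployment target. The header name, entry points and "uses" enums (vxExportObjectsToMemory, vxImportObjectsFromMemory, VX_IX_USE_EXPORT_VALUES) are assumptions based on the Import/Export extension and may differ from the provisional specification.

    #include <VX/vx.h>
    #include <VX/vx_khr_ix.h>   /* provisional import/export extension (assumed header name) */

    /* Export: serialize a verified graph and the data objects it references
       (weights, biases, etc.) into a binary blob owned by the implementation. */
    vx_status export_graph(vx_context ctx, vx_graph graph,
                           const vx_uint8 **blob, vx_size *blob_size)
    {
        vx_reference refs[1] = { (vx_reference)graph };
        vx_enum      uses[1] = { VX_IX_USE_EXPORT_VALUES };  /* include object values */
        return vxExportObjectsToMemory(ctx, 1, refs, uses, blob, blob_size);
    }

    /* Import: recreate the exported objects in another context, e.g. on the
       target device; the implementation fills in refs[] on success. */
    vx_status import_graph(vx_context ctx, const vx_uint8 *blob, vx_size blob_size,
                           vx_graph *graph_out)
    {
        vx_reference refs[1] = { NULL };
        vx_enum      uses[1] = { VX_IX_USE_EXPORT_VALUES };
        vx_import imported = vxImportObjectsFromMemory(ctx, 1, refs, uses,
                                                       blob, blob_size);
        if (vxGetStatus((vx_reference)imported) != VX_SUCCESS)
            return VX_FAILURE;
        *graph_out = (vx_graph)refs[0];
        return VX_SUCCESS;
    }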

The high-level abstraction of OpenVX enables implementers to accelerate a dataflow graph of vision functions across a diverse array of hardware and software acceleration platforms. The inclusion of neural network inferencing functionality in OpenVX enables the same portable, processor-independent expression of functionality, with significant freedom and flexibility in how that inferencing is actually accelerated. The OpenVX Neural Network extension is released in provisional form to enable developers and implementers to provide feedback before finalization; industry feedback is welcomed at the OpenVX Forums. More details on OpenVX and the new extensions can be found at the OpenVX Home Page.

Khronos is coordinating its neural network activities and expects that NNEF files will be able to represent all aspects of an OpenVX neural network graph, and that OpenVX will enable import of network topologies via NNEF files through the Import/Export extension once the NNEF format definition is complete.

Industry Support

“AdasWorks initiated the creation of the NNEF working group as we saw the growing need for platform-independent neural network-based software solutions in the autonomous driving space. We cooperate closely with chip companies to help them build low-power, high-performance neural network hardware and believe firmly that an industry standard, which works across multiple platforms, will be beneficial for the whole market. We are happy to see numerous companies joining the initiative,” said Laszlo Kishonti, founder and CEO of AdasWorks.

“AMD fully supports the development of open standards, currently being the only company with an open source version of OpenVX. We support the creation of OpenVX extensions and data formats related to Neural Networks such as CNN in computer vision and related applications,” said Mike Mantor, corporate fellow and CTO, Radeon Technologies Group, AMD.

“Cadence has been investing heavily in tools for OpenVX and CNN programming to accelerate adoption of our market-leading Tensilica Vision DSPs,” said Dino Bekis, vice president of product marketing for the IP Group at Cadence. “Khronos’ efforts to standardize a universal CNN description exchange format will speed the availability of universal tools for converting trained CNNs to the inference domain. The extensions to OpenVX graph descriptions will enable more seamless deployment of both imaging and vision algorithms in deeply embedded devices.”

"Intel supports and welcomes the adoption of OpenVX and the OpenVX Neural Network Extension as an important element in proliferating computer vision deep learning usage models," said Ron Friedman, Intel Corporate vice president and general manager of IP Blocks and Technologies. "Khronos OpenVX Neural Network Extension brings algorithms tuned for deep learning to the embedded computer vision and machine intelligence hardware devices."

“As CNNs are becoming key to vision processing, Imagination is delighted to participate in Khronos’ neural net initiatives. Our PowerVR GPUs have supported OpenVX since its inception and we’ve already demonstrated CNNs running on PowerVR GPUs. The extension of OpenVX to support CNNs will provide a framework to make it easy for our customers to deploy vision applications using CNNs on new and existing PowerVR-based SoCs,” said Chris Longstaff, Senior Director of Product and Technology Marketing, PowerVR, Imagination Technologies.

“We see more and more real-life problems being solved with neural network technologies,” said Victor Erukhimov, CEO of Itseez3D, Inc. and chair of the OpenVX working group. “Efficient implementation of neural network inference on embedded devices will enable a wide variety of applications for mobile phones, AR/VR and automotive safety.”

“As an active working group member and one of the earliest OpenVX adopters, VeriSilicon is excited to see Khronos extend its support to deep learning and neural networks,” said Shanghung Lin, Vice President for Vision and Image Product Development at VeriSilicon. “Programmability and interoperability between vision functions and the Neural Net extension make OpenVX a perfect programming interface for VeriSilicon’s VIP8000 ultra-low-power, scalable vision processor solution, which combines neural network engines, OpenVX-optimized shader programming engines, and a special interconnect logic called tensor processing fabric to allow collaborative computing for vision and neural net technology. VeriSilicon looks forward to participating in the Khronos NNEF working group to bridge the disparate market of deep learning frameworks and toolkits. A simple and standard neural net format is imperative to facilitate users choosing their favorite training tools and deploying the trained network to different inference engines in different applications.”

About The Khronos Group

The Khronos Group is an industry consortium creating open standards to enable the authoring and acceleration of parallel computing, graphics, vision and neural nets on a wide variety of platforms and devices. Khronos standards include Vulkan™, OpenGL®, OpenGL® ES, OpenGL® SC, WebGL™, OpenCL™, SPIR™, SPIR-V™, SYCL™, WebCL™, OpenVX™, EGL™, COLLADA™, and glTF™. Khronos members are enabled to contribute to the development of Khronos specifications, are empowered to vote at various stages before public deployment, and are able to accelerate the delivery of their cutting-edge accelerated platforms and applications through early access to specification drafts and conformance tests.

