Neil Trevett, President of the Khronos Group and Vice President at NVIDIA, presents the "APIs for Accelerating Vision and Inferencing: An Industry Overview of Options and Trade-offs" tutorial at the May 2019 Embedded Vision Summit.
The landscape of SDKs, APIs and file formats for accelerating inferencing and vision applications continues to evolve rapidly. Low-level compute APIs such as OpenCL, Vulkan and CUDA are being used to accelerate inferencing engines such as OpenVX, CoreML, NNAPI and TensorRT, which in turn are fed by neural network file formats such as NNEF and ONNX.
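To make the layering concrete, the sketch below shows how an application might sit on top of this stack: a model exchanged in the ONNX file format is loaded into an inferencing engine (ONNX Runtime here), which dispatches work to a lower-level compute API such as CUDA where available. The model file name, input tensor name and shape are assumptions for illustration, not details from the presentation.

```python
# Minimal sketch, assuming a model exported to ONNX ("model.onnx") with a
# single image input of shape 1x3x224x224. The inferencing engine (ONNX
# Runtime) selects a backend built on a low-level compute API via its
# execution providers, falling back to the CPU if CUDA is unavailable.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "model.onnx",  # assumed file name of an ONNX-format network
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)

input_name = session.get_inputs()[0].name                 # query the model's first input name
image = np.random.rand(1, 3, 224, 224).astype(np.float32)  # assumed NCHW input shape
outputs = session.run(None, {input_name: image})           # None = return all model outputs
print(outputs[0].shape)
```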
Some of these APIs, like OpenVX, are vision-specific, while others, like OpenCL, are general-purpose. Some engines, like CoreML and TensorRT, are supplier-specific, while others, such as OpenVX, are open standards that any supplier can adopt. Which ones should you use for your project? Trevett answers these and other questions in this presentation.