Edge AI and Vision Insights: February 4, 2020 Edition


LETTER FROM THE EDITOR
Dear Colleague,

We’re announcing a significant expansion of the Embedded Vision Alliance to encompass the full range of edge AI technology and applications, including our traditional domain of computer vision and visual AI. To better reflect our new scope, we’re changing our name to the Edge AI and Vision Alliance. We’ve even got a shiny new website. And, as you may have already noticed, the name of this newsletter has also been updated to reflect our expansion.

We are seeing the same challenges in edge AI today that we saw in computer vision almost a decade ago. Then, as now, a powerful new technology had become ready for widespread use, but the companies and developers creating systems and applications were struggling to figure out how to best incorporate it into their products. And then, as now, technology suppliers needed data and insights to help them find their best opportunities, as well as connections to customers and partners to enable them to grow their businesses.

Rest assured that our fundamental purpose remains the same: inspiring and empowering the individuals and companies creating intelligent systems and applications; building a vibrant ecosystem by bringing together technology suppliers, end-product creators, and partners; and delivering timely insights into relevant markets, technology trends, standards and application requirements. The difference is that our mission now includes not just vision but the full range of edge AI technologies and applications.

We’re proud of the work we’ve done since 2011 to bring news, trends, and resources in computer vision and visual AI to engineers, innovators, and business leaders. Now we’re looking forward to expanding that to include the full range of edge AI technologies and applications. For more information, check out our press release; stay tuned for more developments in the months ahead. In the meantime, I’d be delighted to hear any thoughts or answer any questions you might have. And please be sure to add edge-ai-vision.com to your safe sender list to get important updates and information, including our weekly newsletters.

Brian Dipert
Editor-In-Chief, Edge AI and Vision Alliance

AUTONOMOUS VEHICLES

Can We Have Both Safety and Performance in AI for Autonomous Vehicles? (Codeplay Software)
The need to ensure safety in AI subsystems within autonomous vehicles is obvious. How to achieve it is not. Standard safety engineering tools are designed for software that runs on general-purpose CPUs. But AI algorithms require more performance than CPUs provide, and the specialized processors employed to achieve this performance are very difficult to qualify for safety. How can we deliver the redundancy and rigorous testing that safety requires while still relying on specialized processors for AI performance? How can ISO 26262 be applied to AI accelerators? How can standard automotive practices such as coverage checking and the MISRA coding guidelines be applied? Codeplay Software believes that safe autonomous vehicle AI subsystems are achievable, but only with cross-industry collaboration. In this presentation, Andrew Richards, the company’s CEO and co-founder, examines the challenges of implementing safe autonomous vehicle AI subsystems and explains the most promising approaches for overcoming them, including leveraging standards bodies such as Khronos, MISRA and AUTOSAR.

DNN Challenges and Approaches for L4/L5 Autonomous Vehicles (Graphcore)
The industry has made great strides in the development of L4/L5 autonomous vehicles, but what’s available today falls far short of the expectations set as recently as two to three years ago. To some extent, the industry is in a “we don’t know what we don’t know” state regarding the sensors and AI processing required for a reliable and practical L4/L5 solution. Research on new types of DNNs for perception is advancing rapidly, while solutions for planning are in their infancy. In this talk, Tom Wilson, Vice President of Automotive at Graphcore, reviews important areas of uncertainty and surveys some of the DNN approaches under consideration for perception and planning. He also explores the compute challenges associated with these DNNs.

OPEN SOURCE-BASED DEVELOPMENT

OpenCV: Current Status and Future Plans (OpenCV.org)
With over two million downloads per week, OpenCV is the most popular open source computer vision library in the world. It implements over 2500 optimized algorithms, works on all major operating systems, is available in multiple languages and is free for commercial use. This talk from Satya Mallick, Interim CEO of OpenCV.org, provides a technical update on OpenCV, answering questions such as:

  • What’s new in OpenCV 4.0?
  • What is the Graph API?
  • Why are we so excited about the Deep Neural Network (DNN) module in OpenCV? (Short answer: It is one of the fastest inference engines on the CPU; see the sketch below.)

Mallick also shares plans for the future of OpenCV, including new algorithms that the organization plans to add through the Google Summer of Code. He also briefly discusses the new Open Source Vision Foundation (OSVF), OpenCV’s sister organizations CARLA and Open3D, and some of the initiatives these groups have planned.
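
To make the DNN bullet above concrete, here is a minimal Python sketch of CPU inference with OpenCV’s dnn module. The model file name, input size and scaling are placeholder assumptions for illustration, not details from the talk:

    # Minimal sketch: CPU inference with OpenCV's DNN module.
    # "model.onnx" and the 224x224 input size are placeholder assumptions.
    import cv2
    import numpy as np

    net = cv2.dnn.readNetFromONNX("model.onnx")           # load a trained network
    net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)  # use the built-in backend
    net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)       # run on the CPU

    image = cv2.imread("input.jpg")
    # Convert to a 4D blob: rescaled, resized and with channels reordered to RGB.
    blob = cv2.dnn.blobFromImage(image, scalefactor=1.0 / 255,
                                 size=(224, 224), swapRB=True)
    net.setInput(blob)
    scores = net.forward()                                # one forward pass
    print("Top class:", int(np.argmax(scores)))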

Building Complete Embedded Vision Systems on Linux — From Camera to Display (Montgomery One)
A wealth of open-source software components is available today for embedded vision on the latest SoCs from numerous suppliers, at lower power and cost points than ever before. Testing vision algorithms is the first step, but what about the rest of your system? In this talk, Clay D. Montgomery, Freelance Embedded Multimedia Developer at Montgomery One, surveys the best open-source components available today and explains how to select and integrate them to build complete video pipelines on Linux, from camera to display, while maximizing performance. Montgomery examines and compares popular open-source projects used in vision systems, including Yocto, FFmpeg, GStreamer, V4L2, OpenCV, OpenVX, OpenCL and OpenGL. Which components do you need and why? He also summarizes the steps required to build and test complete video pipelines, common integration problems to avoid, and how to work around issues to get the best performance possible on embedded systems.
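
As a rough illustration of the camera-to-display flow Montgomery describes, below is a minimal Python sketch combining several of the components he covers: V4L2 capture through a GStreamer pipeline, with OpenCV for processing and display. The device path, resolution and frame rate are assumptions, and it requires an OpenCV build with GStreamer support:

    # Minimal camera-to-display sketch on Linux: V4L2 capture via GStreamer,
    # processed and displayed with OpenCV. The device path, resolution and
    # frame rate below are assumptions for illustration.
    import cv2

    pipeline = (
        "v4l2src device=/dev/video0 ! "
        "video/x-raw,width=640,height=480,framerate=30/1 ! "
        "videoconvert ! appsink"
    )
    cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)  # needs GStreamer-enabled OpenCV

    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        edges = cv2.Canny(frame, 100, 200)  # stand-in for a real vision algorithm
        cv2.imshow("camera", edges)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    cap.release()
    cv2.destroyAllWindows()

In a production system, the appsink/imshow pair would typically give way to a zero-copy path to the display (for example, a GStreamer video sink), since copying frames through application memory is a common source of the integration and performance problems the talk covers.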

UPCOMING INDUSTRY EVENTS

FRAMOS Tech Days Event: February 6, 2020, San Francisco, California

Yole Développement Webinar – 3D Imaging and Sensing: From Enhanced Photography to an Enabling Technology for AR and VR: February 19, 2020, 8:00 am PT

Edge AI and Vision Alliance Webinar – Algorithms, Processors and Tools for Visual AI: Analysis, Insights and Forecasts: February 27, 2020, 9:00 am and 6:00 pm PT (two sessions)

Embedded Vision Summit: May 18-21, 2020, Santa Clara, California

More Events

FEATURED NEWS

Ambarella Announces the CV22FS and CV2FS Automotive Camera SoCs for Advanced Driver Assistance Systems (ADAS)

Synaptics Announces Its First Edge Computing Video SoCs with a Secure AI Framework

Qualcomm Launches Three New Snapdragon Mobile Platforms to Address Ongoing Demand for 4G Smartphones

Allied Vision’s Alvium 1800 USB Camera is Now Available with New CMOS Global Shutter Sensors

Basler’s New Products in the ace 2 Camera Series Have 5 MPixel and 8 MPixel Resolutions

More News


Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address
Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone: +1 (925) 954-1411