Edge AI and Vision Insights: December 2, 2020 Edition

LETTER FROM THE EDITOR
Dear Colleague,

On Tuesday, January 12, 2021 at 9 am PT, Imagination Technologies will deliver the free webinar “The Role of Neural Network Acceleration in Automotive” in partnership with the Edge AI and Vision Alliance. Industry research suggests that demand for ADAS will roughly triple by 2027, and the automotive industry is already looking beyond ADAS to fully self-driving cars and robotaxis. Neural networks will underpin this evolution from Level 2 and 3 ADAS to full self-driving at Levels 4 and 5. These systems will have to cope with hundreds of complex scenarios, absorbing data from numerous sensors such as multiple cameras and LiDAR, to enable capabilities such as automated valet parking, intersection management and safe driving through complex urban environments. In this interview session, Jamie Broome, Head of Automotive Business, and Andrew Grant, Senior Director of Artificial Intelligence, both of Imagination Technologies, will share their thoughts and observations on the role of neural network acceleration in the future of automotive, along with insights on the company’s product line and roadmap. For more information and to register, please see the event page.

And on Thursday, January 21, 2021 at 9 am PT, Horizon Robotics will deliver the free webinar “Advancing the AI Processing Architecture for the Software-Defined Car,” also in partnership with the Alliance. Edge AI applications for automotive, such as the intelligent cockpit, ADAS and autonomous driving, represent one of the biggest technology challenges of our time. As the race toward the fully driverless vehicle accelerates, AI applications are finding their way into today’s vehicles at an increasing rate, demanding computing performance with high accuracy, high reliability, energy efficiency and cost effectiveness. The Horizon Robotics Journey optimized processor series and Matrix computing system, paired with state-of-the-art AI and computer vision software and an efficient AI toolkit, have emerged as one of the most efficient AI-optimized solutions for smart mobility. This webinar will cover the company’s hardware architecture and roadmap, various vision application examples, selected partnership projects, and a proposal for metrics that best represent production price, power and performance when comparing AI processing platforms. For more information and to register, please see the event page.

Brian Dipert
Editor-In-Chief, Edge AI and Vision Alliance

INFERENCE PROCESSOR BENCHMARKING

Getting Efficient DNN Inference Performance: Is It Really About the TOPS? (Intel)
This presentation looks at how performance is measured among deep learning inference platforms, starting with the simple peak TOPS metric, why it’s used and why it might be misleading. Gary Brown, Director of AI Marketing at Intel, looks at compute efficiency as measured by real benchmark workload performance and how it relates to peak TOPS, comparing performance across Intel’s inference platforms. He also discusses how developers can use Intel’s DevCloud for the Edge to quickly access Intel’s inference platforms.
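To illustrate why peak TOPS alone can mislead, it helps to compare a chip’s rated peak TOPS against the effective TOPS it actually sustains on a real workload (operations per inference times inferences per second). The sketch below is a hypothetical back-of-the-envelope calculation, not an Intel benchmark result: the ResNet-50 MAC count (~4.1 billion per inference) is a commonly cited figure, while the peak-TOPS rating and throughput numbers are invented for the sake of the arithmetic.

```python
# Hypothetical illustration of peak TOPS vs. effective TOPS on a real workload.
# The chip rating and throughput below are made-up example numbers.

def effective_tops(macs_per_inference: float, inferences_per_second: float) -> float:
    """Effective TOPS = 2 ops per MAC x MACs/inference x inferences/s, in units of 1e12."""
    return 2 * macs_per_inference * inferences_per_second / 1e12

# ResNet-50 requires roughly 4.1e9 MACs per inference (a widely cited figure).
resnet50_macs = 4.1e9

# Suppose a chip is rated at 10 peak TOPS but sustains 300 inferences/s on ResNet-50.
peak_tops = 10.0
achieved = effective_tops(resnet50_macs, 300)
utilization = achieved / peak_tops

print(f"effective TOPS: {achieved:.2f}")    # 2.46
print(f"utilization: {utilization:.0%}")    # 25%
```

On these assumed numbers the chip delivers only about a quarter of its headline TOPS, which is why measured benchmark-workload throughput, rather than the datasheet peak, is the more meaningful basis for comparison.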

Benchmarking vs. Benchmarketing: Why Should You Care? (Qualcomm)
How can developers know which hardware is best for their models? Comparing AI hardware is not as simple as it might seem; there are many caveats to consider, such as the trade-offs between INT8 and floating-point precision, how commercial benchmarks are structured and what the hardware is optimized for. In this talk from Felix Baum, Director of Product Management at Qualcomm, you will learn about some of the most common ways of comparing AI hardware and what you need to consider in order to make an accurate assessment.

CAMERA DEVELOPMENT AND IMPLEMENTATION

How to Create Your Own AI-Enabled Camera Solution in Days (IDS Imaging)
Cameras coupled with AI can solve numerous real-world problems. But developing an AI camera solution can be a complex and time-consuming undertaking, especially for teams that lack experience in topics like training deep neural networks. IDS Imaging has developed the IDS NXT ocean system to empower teams to quickly and easily create custom AI camera solutions for specific applications. One key element, a streamlined cloud-based training system called IDS NXT lighthouse, enables you to complete your first fully trained CNN solution within a few hours, with no need to select, purchase or install training hardware or software. In this presentation, Carsten Traupe, Director of Product Management at IDS Imaging, presents the key elements of the IDS NXT ocean system and the associated workflow, which together provide everything you need to quickly and easily create your own inference camera.

Smart Factory and Smart Life: AI Embedded Vision Camera (LAON People)
In this session, you’ll hear about how to use the LAON PEOPLE AI Edge Camera and its associated software solutions. This smart camera is powered by the company’s own hardware and software technology and includes a world-class AI detection algorithm. During this session, Henry Sang, Business Development Manager at LAON PEOPLE, explains how to integrate the AI Edge Camera and software to create solutions for a variety of applications, such as traffic management, smart farms and advanced people counting with thermal information.

UPCOMING INDUSTRY EVENTS

Yole Développement Webinar – Sensor Fusion for Autonomous Vehicles: December 15, 2020, 9:00 am PT

BrainChip Webinar – Power-efficient Edge AI Applications through Neuromorphic Processing: December 17, 2020, 9:00 am PT

Imagination Technologies Webinar – The Role of Neural Network Acceleration in Automotive: January 12, 2021, 9:00 am PT

Horizon Robotics Webinar – Advancing the AI Processing Architecture for the Software-Defined Car: January 21, 2021, 9:00 am PT

Vision Components Webinar – Adding Embedded Cameras to Your Next Industrial Product Design: February 16, 2021, 9:00 am PT

More Events

FEATURED NEWS

Intel Announces Its First Structured ASIC for 5G, AI, the Cloud and the Edge

Mythic Launches Its First AI Analog Matrix Processor

Gyrfalcon Technology Unveils AI-X, a Full-Stack Solution for Edge-AI Development

An Upcoming Webinar Explores Best Practices When Working with FRAMOS’ Industrial RealSense Camera

GrAI Matter Labs Raises $14M to Bring High Performance AI per Watt to Every Device on the Edge

More News

VISION PRODUCT OF THE YEAR WINNER SHOWCASE

Morpho Semantic Filtering (Best AI Software or Algorithm)
Morpho’s Semantic Filtering is the 2020 Vision Product of the Year Award winner in the Best AI Software or Algorithm category. Semantic Filtering improves camera image quality by combining the best of AI-based segmentation and pixel-processing filters. In conventional imaging, computational photography algorithms are typically applied to the entire image, which can cause unwanted side effects such as loss of detail and texture, as well as the appearance of noise in certain areas. Morpho’s Semantic Filtering is trained to identify the meaning of each pixel in the scene, allowing the right algorithm to be applied to each category of image content, at the strength level most effective for achieving the best image quality in still-image capture.

Please see here for more information on Morpho and its Semantic Filtering. The Vision Product of the Year Awards are open to Member companies of the Edge AI and Vision Alliance and celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of computer vision products. Winning a Vision Product of the Year award recognizes leadership in computer vision as evaluated by independent industry experts.


Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone: +1 (925) 954-1411