Edge AI and Vision Insights: March 4, 2020 Edition

LETTER FROM THE EDITOR
Dear Colleague,

2020 Embedded Vision Summit

We are excited to announce David Patterson as our first keynote speaker at this year’s Embedded Vision Summit, taking place May 18-21 at the Santa Clara Convention Center in Santa Clara, California. Patterson is a Professor of the Graduate School at UC Berkeley, a Google distinguished engineer and Vice-Chair of the RISC-V Foundation. He is a prolific innovator—from co-inventing the RISC architecture to his leadership on the Google TPU processor used for accelerating machine learning workloads. His Summit keynote talk, “A New Golden Age for Computer Architecture: Processor Innovation to Enable Ubiquitous AI,” is a must-see for anyone creating machine-learning systems or processors. Registration for the Summit, the preeminent conference on practical visual AI and computer vision, is now open. Be sure to register today with promo code EARLYBIRD20 to receive your 15%-off Early Bird Discount!

The Edge AI and Vision Alliance is now accepting applications for the 2020 Vision Product of the Year Awards competition. The Vision Product of the Year Awards are open to Member companies of the Alliance and celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of computer vision products. Winning a Vision Product of the Year award recognizes your leadership in computer vision as evaluated by independent industry experts; winners will be announced at the 2020 Embedded Vision Summit. For more information, and to enter, please see the program page.

Brian Dipert
Editor-In-Chief, Edge AI and Vision Alliance

OBJECT TRACKING AND SPORTS ANALYTICS

Object Trackers: Approaches and Applications (Intel)
Object tracking is a powerful algorithm component and one of the fundamental building blocks for many real-world computer vision applications. Object trackers provide two main benefits when incorporated into a localization module. First, trackers can reduce overall computation and power requirements by allowing a reduction in the frequency at which detections must be generated. Second, trackers can maintain the identity of an object across multiple frames, which is important for many applications. Recent advances in deep learning provide us with a unified method for designing detectors, but we still have many design choices for trackers. In this talk, Minje Park, Deep Learning R&D Engineer at Intel, describes three basic tracker approaches and their use in video analytics applications including face recognition, people counting and action recognition. He also provides insights on how recent advances in recurrent neural networks and reinforcement learning can be used for enhancing trackers.
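
Park’s three tracker approaches are covered in the talk itself; as a rough illustration of the second benefit above (maintaining identity across frames while running the detector only every few frames), here is a minimal, hypothetical IoU-association tracker sketch in Python. It is not code from the talk, and the greedy matching is deliberately simplistic.

```python
# Illustrative only: a toy tracker that keeps object identities alive between
# detector runs by greedily matching new boxes to tracked ones via IoU.

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter + 1e-9)

class IouTracker:
    """Assigns persistent IDs by matching each new detection to the best track."""
    def __init__(self, iou_threshold=0.3):
        self.iou_threshold = iou_threshold
        self.tracks = {}          # track id -> last known box
        self.next_id = 0

    def update(self, boxes):
        assigned = {}
        unmatched = set(self.tracks)
        for box in boxes:
            best_id, best_iou = None, self.iou_threshold
            for tid in unmatched:
                score = iou(box, self.tracks[tid])
                if score > best_iou:
                    best_id, best_iou = tid, score
            if best_id is None:               # no overlap -> new identity
                best_id = self.next_id
                self.next_id += 1
            else:
                unmatched.discard(best_id)
            self.tracks[best_id] = box
            assigned[best_id] = box
        for tid in unmatched:                 # tracks with no detection are dropped
            del self.tracks[tid]
        return assigned

# Toy demo: the detector would run only every Nth frame; between detections the
# last boxes are carried forward. The same physical objects keep IDs 0 and 1.
t = IouTracker()
print(t.update([(0, 0, 10, 10), (50, 50, 60, 60)]))
print(t.update([(1, 1, 11, 11), (51, 50, 61, 61)]))
```

In a real pipeline, the interpolation between detector runs would come from a dedicated tracker (correlation filter, Kalman filter, or a learned model), which is exactly the design space the talk explores.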

Teaching Machines to See, Understand, Describe and Predict Sports Games in Real Time (Sportlogiq)
Sports analytics is about observing, understanding and describing the game in an intelligent manner. In practice, this means designing a fully automated, robust, end-to-end pipeline: from visual input, to player and group activities, to player and team evaluation, to planning. Despite major advances in computer vision and machine learning, sports analytics is still in its infancy and relies heavily on simpler descriptive statistics. This talk from Mehrsan Javan, CTO of Sportlogiq, focuses on how the sports analytics industry operates, what makes sports an ideal test case for developing and testing new computer vision systems, what the challenges are, and how Sportlogiq has solved those problems and advanced the state of the art in computer vision and machine learning.

EDGE COMPUTING ON APPLICATION PROCESSORS

Efficient Deployment of Quantized ML Models at the Edge Using Snapdragon SoCs (Qualcomm)
Increasingly, machine learning models are being deployed at the edge, and these models are getting bigger. As a result, we are hitting the constraints of edge devices: bandwidth, performance and power. One way to reduce ML computation demands and increase power efficiency is quantization—a set of techniques that reduce the number of bits needed, and hence reduce bandwidth, computation and storage requirements. Qualcomm Snapdragon SoCs provide a robust hardware solution for deploying ML applications in embedded and mobile devices. Many Snapdragon SoCs incorporate the Qualcomm Artificial Intelligence Engine, which comprises hardware and software components that accelerate on-device ML. In this talk, Felix Baum, Director of Product Management for AI Software at Qualcomm, explores the performance and accuracy offered by the accelerator cores within the AI Engine. He also highlights the tools and techniques Qualcomm offers for developers targeting these cores, utilizing intelligent quantization to deliver optimal performance with low power consumption while maintaining algorithm accuracy.
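
As a rough illustration of the underlying idea (and not Qualcomm’s AI Engine tooling), the sketch below applies post-training affine quantization to a float32 tensor with NumPy, cutting storage from 32 bits to 8 bits per value while keeping the reconstruction error small relative to the value range.

```python
# Minimal sketch of affine (asymmetric) uint8 quantization; illustrative only.
import numpy as np

def quantize_uint8(x):
    """Map float values onto 8-bit integers via a scale and zero point."""
    x_min, x_max = float(x.min()), float(x.max())
    scale = (x_max - x_min) / 255.0 or 1.0          # guard against a constant tensor
    zero_point = int(round(-x_min / scale))
    q = np.clip(np.round(x / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float values from the 8-bit representation."""
    return (q.astype(np.float32) - zero_point) * scale

weights = np.random.randn(1000).astype(np.float32)   # stand-in for a layer's weights
q, scale, zp = quantize_uint8(weights)
recovered = dequantize(q, scale, zp)
print("max abs error:", float(np.abs(weights - recovered).max()))
print("storage: %d bytes fp32 -> %d bytes uint8" % (weights.nbytes, q.nbytes))
```

Production flows add calibration data, per-channel scales and quantization-aware training to hold accuracy, which is where vendor tools like those Baum describes come in.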

MediaTek’s Approach for Edge Intelligence (MediaTek)
MediaTek has incorporated an AI processing unit (APU) alongside the traditional CPU and GPU in its SoC designs for the next wave of smart client devices (smartphones, cameras, appliances, cars, etc.). Edge applications can harness the CPU, GPU and APU together to achieve significantly higher performance with excellent efficiency. In this talk, Bing Yu, formerly a Senior Technical Manager and Architect at MediaTek, presents MediaTek’s AI-enabled SoCs for smart client devices. He examines the features of the AI accelerator, which is the core building block of the APU. He also describes the accompanying toolkit, called NeuroPilot, which enables app developers to conveniently implement inference models using industry-standard frameworks.
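
For context on the “industry-standard frameworks” point, the snippet below shows a generic TensorFlow Lite inference loop in Python; it is not NeuroPilot-specific code, and the model file name is hypothetical. On a device, a vendor runtime or hardware delegate is what maps a graph like this onto the CPU, GPU or an AI accelerator.

```python
# Generic TensorFlow Lite inference; illustrative only, not MediaTek tooling.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")  # hypothetical model file
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy input matching the model's expected shape and dtype.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()

result = interpreter.get_tensor(output_details[0]["index"])
print("output shape:", result.shape)
```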

UPCOMING INDUSTRY EVENTS

Embedded Vision Summit: May 18-21, 2020, Santa Clara, California

More Events

FEATURED NEWS

Lattice Semiconductor’s New mVision Solutions Stack Accelerates Low Power Embedded Vision Development

Graphcore Secures an Additional $150 Million in New Capital

Qualcomm Accelerates XR Headset Development with the New Qualcomm Snapdragon XR2 5G Reference Design

BrainChip’s Akida Development Environment is Now Freely Available

Vision Components’ MIPI Camera Boards Offer a Large Sensor Variety

More News


Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411