Edge AI and Vision Insights: December 8, 2021 Edition

LETTER FROM THE EDITOR
Dear Colleague,

Every year, the Edge AI and Vision Alliance surveys developers to understand which technologies and techniques they use to build visual AI systems, and what their toughest challenges are. This is our eighth year conducting the survey, and we would like to hear your opinions. Many suppliers of computer vision building-block technologies use the results of our Computer Vision Developer Survey to guide their priorities. We also share the survey results at Edge AI and Vision Alliance events and in white papers and presentations made available throughout the year on the Alliance website.

I’d really appreciate it if you’d take a few minutes to complete the first stage of this year’s survey (it typically takes less than 10 minutes). The survey remains open through this Friday, December 10, so don’t miss your chance to have your voice heard. As a thank-you, we will send you a coupon for $50 off the price of a two-day Embedded Vision Summit ticket (to be sent when registration opens). In addition, we will enter your completed survey into a drawing for one of fifty $25 Amazon gift cards! Thank you in advance for your perspective. Fill out the survey.


Tomorrow, Thursday, December 9, at 9 am PT, BrainChip will deliver the free webinar “Developing Intelligent AI Everywhere with BrainChip’s Akida” in partnership with the Edge AI and Vision Alliance. Implementing “smart” sensors at the edge using traditional machine learning is extremely challenging, requiring real-time processing while meeting both low power consumption and low latency requirements. BrainChip’s Akida neural processing unit (NPU) brings intelligent AI to the edge with ease, leveraging advanced neuromorphic computing as its processing “engine”. Akida addresses critical requirements such as privacy, security, latency and power consumption, delivering key features such as one-shot learning and on-device computing with no “cloud” dependencies. With Akida, BrainChip is delivering on next-generation demands by achieving efficient, effective and straightforward AI functionality. In this session, you’ll learn how to easily develop efficient AI in edge devices by implementing Akida IP either in your SoC or as standalone silicon. For more information and to register, please see the event page.

Brian Dipert
Editor-In-Chief, Edge AI and Vision Alliance

SMART SPACES OPPORTUNITIES

A Mask Detection Smart Camera Using the NVIDIA Jetson Nano: System Architecture and Developer Experience – BDTI and Tryolabs
MaskCam is a prototype reference design for a smart camera that counts the number of people wearing masks in its field of view and reports statistics to the cloud. Built around the NVIDIA Jetson Nano, it handles all AI processing on the device. In this presentation, Evan Juras, Computer Vision Engineer at BDTI, and Braulio Ríos, Machine Learning Engineer at Tryolabs, share their story of developing MaskCam, and how the two companies went from concept to working prototype with a small team in just a few months. Juras and Ríos explain the hardware and software design of the camera and present their companies’ experiences working with NVIDIA’s Jetson Nano (both the Dev Kit and the system-on-module) and DeepStream framework, and with containerization via balenaOS/balenaCloud. The solution features a custom tracking component, static video file serving, video streaming over RTSP, and MQTT communication to report statistics and receive commands via a web interface. Working with Jabil Optics, Juras and Ríos also present estimated manufacturing costs for volume production.
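For readers curious what MaskCam’s device-to-cloud reporting might look like in practice, here is a minimal, illustrative Python sketch using the paho-mqtt client. The broker address, topic name, and payload fields are hypothetical assumptions for illustration, not MaskCam’s actual schema.

    import json
    import time

    import paho.mqtt.client as mqtt

    BROKER_HOST = "broker.example.com"  # hypothetical broker address
    STATS_TOPIC = "maskcam/statistics"  # hypothetical topic name

    client = mqtt.Client()
    client.connect(BROKER_HOST, 1883)
    client.loop_start()  # handle MQTT network traffic on a background thread

    while True:
        # In the real system these counts would come from the DeepStream
        # detection and tracking pipeline; here they are placeholders.
        stats = {"people": 12, "masked": 9, "timestamp": time.time()}
        client.publish(STATS_TOPIC, json.dumps(stats))
        time.sleep(60)  # report once per minute

The same client object can subscribe to a command topic, which is how a device like this could receive commands from a web interface as well as publish statistics to it.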

Visual AI at the Edge: From Surveillance Cameras to People Counters – Synaptics
New AI-at-the-edge processors with improved efficiencies and flexibility are unleashing a huge opportunity to democratize computer vision broadly across all markets, enabling edge AI devices with small, low-cost, low-power cameras. Synaptics has embarked on a roadmap of edge-AI DNN processors targeted at a range of real-time computer vision and multimedia applications. These span from enhancing the image quality of a high-resolution camera’s output using Synaptics’ VS680 multi-TOPS processor to performing computer vision in battery-powered devices at lower resolution using the company’s Katana Edge-AI SoC. In this talk, Patrick Worfolk, Senior Vice President and CTO of Synaptics, shows how these edge AI SoCs can be used to:

  • Achieve exceptional color video in very low light conditions
  • De-noise and distortion-correct both 2D and 3D imagery from a time-of-flight depth camera that images through a smartphone OLED display
  • Perform super-resolution enhancement of high-resolution video imagery, and
  • Recognize objects using lower-resolution sensors under battery power.

AI PRODUCTIZATION

Productizing Edge AI Across Applications and Verticals: Case Study and Insights – Hailo and NEC
As edge AI grows across different markets and enters more products, discussions about realizing product and application goals are growing in importance. This presentation explores how those goals are being met in real-world applications by examining case studies from customers who have leveraged Hailo’s processors to perform high-performance AI inferencing at the edge. The main case study discussed is NEC’s video analytics platform, which targets smart city, security and other use cases. For this, Orr Danon, CEO of Hailo, is joined by Tsvi Lev, Managing Director of NEC Research Center Israel and an NEC Vice President. Following the case study, Danon and Lev conclude by highlighting key insights and offering a glimpse into future deployments.

Why is Taking AI to Production So Difficult? – Intel and IntelliSite
80% of AI projects never reach broad deployment. This roundtable discussion between Ken Mills, CEO of Epic IO, IntelliSite and Broad Sky Networks, Matthew Formica, Director of OpenVINO Product Marketing and Developer Evangelism at Intel, and Aaron Tersteeg, Global Sales Director for AI Edge Inference at Intel, offers a lively debate to rethink the promise of AI and uncover why successfully taking AI solutions to production (to actually doing something wonderful in the world) has been so difficult. Is it a lack of proper deep learning software optimization skills? Architectural challenges in matching the full model-plus-application workload with appropriate hardware? Difficulty finding the right solution partners to meet a customer’s needs? The panelists also suggest potential solutions for the pain points raised.
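As a concrete point of reference for the workload-to-hardware matching question, here is a minimal sketch of loading and running a model with OpenVINO’s Python runtime API, where the device string is the hook for targeting different hardware. The model file name, input shape, and the “AUTO” device choice are illustrative assumptions, not a specific recipe from the talk.

    import numpy as np
    from openvino.runtime import Core

    core = Core()
    model = core.read_model("model.xml")  # hypothetical OpenVINO IR file

    # The device string ("CPU", "GPU", "AUTO", ...) is where the
    # model-to-hardware matching decision gets made.
    compiled = core.compile_model(model, "AUTO")

    input_tensor = np.zeros((1, 3, 224, 224), dtype=np.float32)  # placeholder input
    results = compiled([input_tensor])  # returns a mapping of output tensors

Swapping the device string, and re-profiling, is one inexpensive way to explore the architectural trade-offs the panel describes before committing to hardware.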

UPCOMING INDUSTRY EVENTS

Developing Intelligent AI Everywhere with BrainChip’s Akida – BrainChip Webinar: December 9, 2021, 9:00 am PT

Embedded Vision Summit: May 17-19, 2022, Santa Clara, California

More Events

FEATURED NEWS

aiMotive Announces aiDrive 3.0, the Latest Version of Its Software Stack for ADAS and Autonomous Vehicle Applications

MicroSys Puts Hailo AI Performance on Its SoM Platforms Along with NXP S32G Vehicle Network Processors

ADLINK Technology Releases its First SMARC Module Based on the Qualcomm QRB5165, Enabling High Performance Robots and Drones at Low Power

STMicroelectronics Streamlines Machine-Learning Software Development for Connected Devices and Industrial Equipment with Upgrades to Its NanoEdge AI Studio

Microchip Technology’s New Smart Embedded Vision Development Platform is Its Second Development Tool Offering for Designers Using Low-Power PolarFire RISC-V SoC FPGAs for Embedded Vision Applications at the Edge

More News

EDGE AI AND VISION PRODUCT OF THE YEAR WINNER SHOWCASE

EyeTech Digital Systems EyeOn (Best Consumer Edge AI End Product)
EyeTech Digital Systems’ EyeOn is the 2021 Edge AI and Vision Product of the Year Award Winner in the Consumer Edge AI End Products category. EyeOn combines next-generation eye-tracking technology with the power of a portable, lightweight tablet, making it the fastest, most accurate device for augmentative and alternative communication. With hands-free screen control through built-in predictive eye-tracking, EyeOn gives a voice to impaired and non-verbal patients with conditions such as cerebral palsy, autism, ALS, muscular dystrophy, stroke, traumatic brain injuries, spinal cord injuries, and Rett syndrome. EyeOn empowers users to communicate, control their environments, search the web, work, and learn independently – all hands-free, using the power of their eyes.

Please see here for more information on EyeTech Digital Systems’ EyeOn. The Edge AI and Vision Product of the Year Awards celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of edge AI and computer vision products. Winning a Product of the Year award recognizes a company’s leadership in edge AI and computer vision as evaluated by independent industry experts.


Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone: +1 (925) 954-1411