Edge AI and Vision Insights: August 31, 2021 Edition

APPLICATION CASE STUDIES

Combining CNNs and Conventional Algorithms for Low-Compute Vision: A Case Study in the Garage
Chamberlain Group
Chamberlain Group (CGI) is a global leader in access control solutions with its Chamberlain and LiftMaster garage door opener brands and myQ connected technology. In this presentation from Nathan Kopp, the company’s Principal Software Architect for Video Systems, you’ll learn how CGI is innovating to bring efficient, affordable computer vision into the garage, opening new possibilities and insights for homeowners and businesses. With constant improvements in neural network architectures and advancements in low-power edge processors, it is tempting to assume that convolutional neural networks (CNNs) will solve every vision problem. However, simpler “conventional” computer vision techniques continue to offer an attractive cost-to-performance ratio and require orders of magnitude less training data. Unfortunately, these algorithms often need hand-tuning of parameters, and do not generalize well to previously unseen environments. By combining CNNs with simpler algorithms into a layered, intelligent vision pipeline—and by understanding the constraints of the problem—the weaknesses of simpler algorithms can be offset by the strengths of CNNs, while still preserving their cost-saving benefits.
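The layered approach Kopp describes can be sketched as a cheap conventional stage gating a more expensive CNN stage. The sketch below is illustrative only: the `motion_score` heuristic, the `threshold` value, and the `classify` callable are assumptions for the sake of the example, not CGI's actual pipeline.

```python
import numpy as np

def motion_score(prev_frame, frame):
    """Cheap conventional stage: mean absolute difference between frames."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return float(diff.mean())

def layered_pipeline(prev_frame, frame, classify, threshold=5.0):
    """Run the expensive CNN stage only when the cheap stage detects change.

    `classify` stands in for a CNN inference call; `threshold` is a
    hand-tuned parameter of the kind the presentation discusses.
    """
    if motion_score(prev_frame, frame) < threshold:
        return None  # scene is static: skip the CNN, saving compute
    return classify(frame)
```

Gating like this keeps average compute low because the CNN runs only on the small fraction of frames where something actually changed, while the CNN's generalization compensates for the crude hand-tuned first stage.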

Feeding the World Through Embedded Vision
John Deere
Although it’s not widely known outside of the industry, computer vision is beginning to be used at scale in agriculture, where it is delivering meaningful improvements in efficiency and quality, with the potential for tremendous impact on how our food is grown. In this presentation, Travis Davis, Delivery Manager for the Automation Delivery team with the Intelligent Solutions Group at John Deere, introduces deployed agricultural computer vision solutions for harvesting and spraying. He explores key technical challenges that John Deere had to overcome to create these solutions, and highlights the ways in which agricultural vision applications often have requirements that are quite different from those of automotive and commercial applications.

FUNDAMENTALS

A Practical Guide to Implementing Deep Neural Network Inferencing at the Edge
Zebra Technologies
In this presentation, Toly Kotlarsky, Distinguished Member of the Technical Staff in R&D at Zebra Technologies, explores practical aspects of implementing a pre-trained deep neural network (DNN) for inference on typical edge processors. First, he briefly touches on how to evaluate the accuracy of DNNs for use in real-world applications. Next, he explains the process for converting a trained model in TensorFlow into formats suitable for deployment at the edge and examines a simple, generic C++ real-time inference application that can be deployed on a variety of hardware platforms. Kotlarsky then outlines a method for evaluating the performance of edge DNN implementations and shows the results of utilizing this method to benchmark the performance of three popular edge computing platforms: the Google Coral (based on the Edge TPU), NVIDIA’s Jetson Nano and the Raspberry Pi 3.
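The benchmarking step Kotlarsky outlines amounts to timing repeated inference calls after a warm-up period. A minimal sketch follows; the warm-up count, run count, and choice of statistics are assumptions for illustration, not the talk's exact methodology.

```python
import time

def benchmark(infer, warmup=5, runs=50):
    """Time an inference callable: discard warm-up runs, then record latencies."""
    for _ in range(warmup):      # warm-up: let caches and runtimes settle
        infer()
    latencies = []
    for _ in range(runs):
        t0 = time.perf_counter()
        infer()
        latencies.append(time.perf_counter() - t0)
    latencies.sort()
    return {
        "mean_ms": 1000.0 * sum(latencies) / len(latencies),
        "p95_ms": 1000.0 * latencies[int(0.95 * (len(latencies) - 1))],
    }
```

Passing the same inference callable to this harness on each platform (e.g., Coral, Jetson Nano, Raspberry Pi 3) yields directly comparable latency figures; `time.perf_counter` is used because it is monotonic and high-resolution.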

An Introduction to Simultaneous Localization and Mapping (SLAM)
Skydio
This talk from Gareth Cross, former Technical Lead for State Estimation at Skydio, provides an introduction to the fundamentals of simultaneous localization and mapping (SLAM). Cross provides foundational knowledge; viewers are not expected to have any prerequisite experience in the field. The talk consists of an introduction to the concept of SLAM, as well as practical design considerations in formulating SLAM problems. Visual inertial odometry is introduced as a motivating example of SLAM, and Cross reviews how the problem is structured and solved.
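The estimation machinery underlying visual inertial odometry can be illustrated with a toy one-dimensional Kalman filter: an odometry step grows the position uncertainty, and an external measurement shrinks it. This is a localization-only simplification with made-up noise values, not a full SLAM solver of the kind the talk formulates.

```python
def predict(x, p, u, q):
    """Motion step: move by odometry u; process noise q grows variance p."""
    return x + u, p + q

def update(x, p, z, r):
    """Measurement step: fuse a direct position observation z with noise r."""
    k = p / (p + r)              # Kalman gain: trust in measurement vs. prediction
    return x + k * (z - x), (1.0 - k) * p

# One predict/update cycle with hypothetical numbers
x, p = 0.0, 1.0                      # initial position estimate and variance
x, p = predict(x, p, u=1.0, q=0.5)   # odometry says we moved by 1.0
x, p = update(x, p, z=1.2, r=0.5)    # a sensor observes position 1.2
```

Note that the variance grows during prediction (1.0 to 1.5) and shrinks after the update (to 0.375). Chaining many such cycles, with landmark positions estimated jointly in the state, is the structure of the SLAM problem the talk describes.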

UPCOMING INDUSTRY EVENTS

Introducing the EON Tuner: Edge Impulse’s New AutoML Tool for Embedded Machine Learning – Edge Impulse Webinar: September 16, 2021, 9:00 am PT

Securing Smart Devices: Protecting AI Models at the Edge – Sequitur Labs Webinar: September 28, 2021, 9:00 am PT

How Battery-powered Intelligent Vision is Bringing AI to the IoT – Eta Compute Webinar: October 5, 2021, 9:00 am PT

More Events

FEATURED NEWS

Lattice Semiconductor’s New Certus-NX FPGAs are Optimized for Automotive Applications

Unity Technologies Announces Support for the ROS 2 Open-source Robotics Middleware Suite

Mindtech Global Releases New Features for the Chameleon Synthetic Data Creation Platform for Training AI Vision Systems

eCapture from eYs3D Microelectronics Launches a Small Form Factor 3D Stereo Depth Sensing Camera for Robotics and Object Tracking

MediaTek Announces the AI-enabled Dimensity 920 and Dimensity 810 Chips for 5G Smartphones

More News

EDGE AI AND VISION PRODUCT OF THE YEAR WINNER SHOWCASE

EyeTech Digital Systems EyeOn (Best Consumer Edge AI End Product)
EyeTech Digital Systems
EyeTech Digital Systems’ EyeOn is the 2021 Edge AI and Vision Product of the Year Award Winner in the Consumer Edge AI End Products category. EyeOn combines next-generation eye-tracking technology with the power of a portable, lightweight tablet, making it the fastest, most accurate device for augmentative and alternative communication. With hands-free screen control through built-in predictive eye-tracking, EyeOn gives a voice to impaired and non-verbal patients with conditions such as cerebral palsy, autism, ALS, muscular dystrophy, stroke, traumatic brain injuries, spinal cord injuries, and Rett syndrome. EyeOn empowers users to communicate, control their environments, search the web, work, and learn independently – all hands-free, using the power of their eyes.

Please see here for more information on EyeTech Digital Systems’ EyeOn. The Edge AI and Vision Product of the Year Awards celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of edge AI and computer vision products. Winning a Product of the Year award recognizes a company’s leadership in edge AI and computer vision as evaluated by independent industry experts.



Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411