LETTER FROM THE EDITOR
Dear Colleague,
On Thursday, September 16 at 9 am PT, Edge Impulse will deliver the free webinar “Introducing the EON Tuner: Edge Impulse’s New AutoML Tool for Embedded Machine Learning” in partnership with the Edge AI and Vision Alliance.

Finding the best machine learning (ML) model for analyzing sensor data isn’t easy. What pre-processing steps yield the best results, for example? And what signal processing parameters should you pick? The selection process is even more challenging when the resulting model needs to run on a microcontroller with significant latency, memory and power constraints. AutoML tools can help, but they typically examine only the neural network, disregarding the important roles that pre-processing and signal processing play in tinyML.

This hands-on session will introduce the EON Tuner, a new AutoML tool available to all Edge Impulse developers. You’ll learn how to use the EON Tuner to pick the optimum model within the constraints of your device, improving the accuracy of your audio, computer vision and other sensor data classification projects. The webinar will include demonstrations of the concepts discussed. For more information and to register, please see the event page.
Brian Dipert
Editor-In-Chief, Edge AI and Vision Alliance
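The model selection problem the letter describes — picking the most accurate pipeline that still fits a microcontroller’s latency and memory budget — can be sketched as a simple constrained search. The candidate list, field names and numbers below are hypothetical illustrations, not the EON Tuner’s actual API:

```python
# Sketch of constrained AutoML-style model selection: filter candidate
# pipelines by the device budget, then pick the most accurate survivor.
# All candidate data here is made up for illustration.

def pick_best(candidates, max_latency_ms, max_ram_kb):
    """Return the most accurate candidate that fits the device budget."""
    feasible = [
        c for c in candidates
        if c["latency_ms"] <= max_latency_ms and c["ram_kb"] <= max_ram_kb
    ]
    return max(feasible, key=lambda c: c["accuracy"], default=None)

candidates = [
    # Each candidate pairs a signal-processing front end with a network.
    {"name": "mfcc + small_cnn", "accuracy": 0.91, "latency_ms": 4, "ram_kb": 40},
    {"name": "spectrogram + large_cnn", "accuracy": 0.95, "latency_ms": 30, "ram_kb": 200},
]

best = pick_best(candidates, max_latency_ms=10, max_ram_kb=64)
print(best["name"])  # the large model is more accurate but doesn't fit the budget
```

Note that the search ranges over the whole pipeline — front end plus network — rather than the neural network alone, which is the distinction the letter draws.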
DEEP LEARNING FUNDAMENTALS
Introducing Machine Learning and How to Teach Machines to See
What is machine learning? How can machines distinguish a cat from a dog in an image? What’s the magic behind convolutional neural networks? These are some of the questions Facundo Parodi, Research and Machine Learning Engineer at Tryolabs, answers in this introductory talk on machine learning in computer vision. Parodi introduces machine learning and explores the different types of problems it can solve. He explains the main components of practical machine learning, from data gathering and training to deployment. He then focuses on deep learning as an important machine learning technique and provides an introduction to convolutional neural networks and how they can be used to solve image classification problems. Parodi also touches on recent advancements in deep learning and how they have revolutionized the entire field of computer vision.
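The convolution operation at the heart of the networks Parodi describes can be illustrated in a few lines. This is a minimal sketch of a single 2-D “valid” convolution over a tiny grayscale image, not a full CNN layer (no padding, stride, channels or learned weights):

```python
# Minimal 2-D "valid" convolution (cross-correlation, as in most deep
# learning frameworks): slide the kernel over the image and sum the
# elementwise products at each position.

def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    return [
        [
            sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh)
                for dj in range(kw)
            )
            for j in range(w - kw + 1)
        ]
        for i in range(h - kh + 1)
    ]

# A vertical-edge image and a vertical-edge-detecting kernel: the
# response is uniformly strong because every window straddles the edge.
image = [[0, 0, 1, 1]] * 4
kernel = [[-1, 0, 1]] * 3
print(conv2d(image, kernel))  # [[3, 3], [3, 3]]
```

In a real CNN the kernel values are learned during training, and many such filters are stacked into layers.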
Can You See What I See? The Power of Deep Learning
It’s an exciting time to work in computer vision, mainly due to the technological advances in the area of deep learning. This talk from Scott Thibault, President and Founder of StreamLogic, is an introduction to some of the most important computer vision tasks that can be solved with deep learning. In particular, Thibault focuses on the application of convolutional neural networks to the tasks of image classification, object detection and facial image recognition using embeddings. You will learn about the types of applications in which DNNs performing these functions are typically used, and discover some of the publicly available models and data sets that you can use to help bootstrap your own applications.
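The embedding-based recognition Thibault covers reduces face matching to comparing vectors: a network maps each face image to a vector, and two vectors that are close enough are declared the same person. Below is a minimal sketch of just the comparison step, with made-up three-dimensional embeddings and a made-up threshold (real embeddings are typically 128 or more dimensions produced by a trained network):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def best_match(query, gallery, threshold=0.8):
    """Return the enrolled identity most similar to the query embedding,
    or None if no similarity clears the threshold."""
    name, score = max(
        ((n, cosine_similarity(query, e)) for n, e in gallery.items()),
        key=lambda t: t[1],
    )
    return name if score >= threshold else None

gallery = {"alice": [0.9, 0.1, 0.0], "bob": [0.0, 0.8, 0.6]}
print(best_match([0.85, 0.15, 0.05], gallery))  # alice
```

The appeal of this design is that enrolling a new person requires only computing one new embedding, not retraining the network.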
DATA AUGMENTATION TECHNIQUES
Practical Image Data Augmentation Methods for Training Deep Learning Object Detection Models
Data augmentation is a method of expanding deep learning training datasets by making various automated modifications to existing images in the dataset. The resulting increased data diversity can enable a more accurate and robust model without the need to manually obtain more images. In this presentation, Evan Juras, Computer Vision Engineer at EJ Technology Consultants, explores practical methods of image data augmentation for training object detection models. He also shows how to create an augmented dataset of 50,000 unique images with labeled bounding boxes in a few hours using a short Python script.
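The core augmentation idea — transforming an image and its bounding-box labels together so the labels stay correct — can be sketched in a few lines. This is a hypothetical minimal example on a list-of-rows grayscale “image,” not code from the presentation; a real pipeline would use a library such as OpenCV or Albumentations:

```python
# Two simple augmentations: a horizontal flip (which must also flip the
# box's x-coordinates) and a brightness shift (which leaves boxes alone).

def hflip(image):
    """Mirror each row of a list-of-rows grayscale image."""
    return [row[::-1] for row in image]

def hflip_bbox(bbox, width):
    """Flip an (x_min, y_min, x_max, y_max) box to match a mirrored image."""
    x_min, y_min, x_max, y_max = bbox
    return (width - x_max, y_min, width - x_min, y_max)

def adjust_brightness(image, delta):
    """Shift every pixel by delta, clamped to the 0-255 range."""
    return [[max(0, min(255, p + delta)) for p in row] for row in image]

image = [[10, 20, 30],
         [40, 50, 60]]
print(hflip(image))                      # [[30, 20, 10], [60, 50, 40]]
print(hflip_bbox((0, 1, 2, 3), 5))       # (3, 1, 5, 3)
print(adjust_brightness(image, 240)[0])  # [250, 255, 255]
```

Composing a handful of such transforms with random parameters is how a few source images expand into the tens of thousands of labeled variants the presentation describes.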
Using an ISP for Real-time Data Augmentation
Image signal processors (ISPs) are tasked with processing raw pixels delivered by image sensors in order to optimize the quality of images. In computer vision applications, much attention is focused on tuning the ISP, both to obtain good training data and to optimize image quality in deployed systems when faced with widely varying, dynamic imaging conditions (e.g., changes in lighting). In this presentation, Timofey Uvarov, Camera System Lead at Pony.AI, describes a data-driven solution to these challenges developed in a collaboration between Pony.AI and On Semiconductor. His company’s approach manipulates the ISP configuration during training data collection in order to perform real-time data augmentation. By training deep neural networks (DNNs) with the resulting augmented data, Pony.AI is able to create DNNs that are robust to variations in imaging conditions and ISP settings.
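The idea of varying ISP settings during capture to augment data can be sketched with a toy pixel pipeline. The gain-then-gamma model and the parameter ranges below are illustrative assumptions, not Pony.AI’s or onsemi’s actual ISP configuration:

```python
import random

def apply_isp(pixel, gain, gamma):
    """Toy ISP stage: digital gain followed by gamma, on one 8-bit pixel."""
    v = min(1.0, (pixel / 255.0) * gain)  # normalize, amplify, clip
    return round(255 * v ** gamma)

def simulate_capture(frame, rng):
    """Capture one frame with randomly perturbed ISP settings, emulating
    real-time augmentation at data-collection time."""
    gain = rng.uniform(0.8, 1.5)   # illustrative range
    gamma = rng.uniform(0.8, 1.2)  # illustrative range
    return [[apply_isp(p, gain, gamma) for p in row] for row in frame]

print(apply_isp(128, 1.0, 1.0))  # 128 (unity settings leave the pixel unchanged)
augmented = simulate_capture([[0, 128, 255]], random.Random(0))
```

Training on frames captured under many such settings is what makes the resulting network tolerant of the ISP and lighting variation it will see in deployment.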
UPCOMING INDUSTRY EVENTS
Introducing the EON Tuner: Edge Impulse’s New AutoML Tool for Embedded Machine Learning – Edge Impulse Webinar: September 16, 2021, 9:00 am PT
Securing Smart Devices: Protecting AI Models at the Edge – Sequitur Labs Webinar: September 28, 2021, 9:00 am PT
How Battery-powered Intelligent Vision is Bringing AI to the IoT – Eta Compute Webinar: October 5, 2021, 9:00 am PT
More Events
FEATURED NEWS
Alliance Member Companies Au-Zone, NXP Semiconductors and Vision Components, in Partnership with Toradex, have Launched the Modular Maivin i.MX 8M Plus AI Vision Kit
Teledyne’s New AI Software Enables Deep Learning at Runtime
Alliance Member Companies Renesas and Syntiant have Co-developed a Voice-Controlled Multimodal AI Solution
Edge Impulse’s Upcoming Imagine Online Conference Showcases the Latest Innovations in Embedded Machine Learning for the Real World
Alliance Member Companies Maxim Integrated and Xailient are Teaming to Provide Fast, Low-Power IoT Face Detection
More News
EDGE AI AND VISION PRODUCT OF THE YEAR WINNER SHOWCASE
MINIEYE In-cabin Sensing Solution (Best Automotive Solution)
MINIEYE’s In-cabin Sensing (I-CS) Solution is the 2021 Edge AI and Vision Product of the Year Award Winner in the Automotive Solutions category. I-CS provides comprehensive in-vehicle sensing for smart cockpits and autonomous vehicles by leveraging embedded computer vision and AI using IR cameras. I-CS tracks visual attributes such as head orientation, movement of facial features, gaze, gesture and body movements, and analyzes drivers’ and occupants’ identities, intentions and behaviors. In addition, I-CS detects objects inside the vehicle that are closely related to in-cabin activities. I-CS’s edge computing infrastructure allows its algorithms to run with high efficiency on automotive-grade chips, making it possible to offer larger combinations of visual sensing features in one solution set. The solution also supports a wide variety of computing platforms, including Arm CPUs, FPGAs and specialized neural network chips.
Please see here for more information on MINIEYE’s In-cabin Sensing Solution.

The Edge AI and Vision Product of the Year Awards celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of edge AI and computer vision products. Winning a Product of the Year award recognizes a company’s leadership in edge AI and computer vision as evaluated by independent industry experts.