Edge AI and Vision Insights: August 3, 2022 Edition

SELECTING AND COLLECTING TRAINING DATA

Data Collection in the Wild – BMW Group
In scientific papers, computer vision models are usually evaluated on well-defined training and test datasets. In practice, however, collecting high-quality data that accurately represents the real world is a challenging problem. A model developed on a non-representative dataset may achieve high accuracy during testing but perform poorly when deployed in the real world. In this 2021 Embedded Vision Summit presentation, Vladimir Haltakov, Self-Driving Car Engineer at BMW Group, discusses the challenges, common pitfalls and possible solutions involved in creating datasets for real-world problems. He also explains how to avoid typical biases when curating data, dives deep into imbalanced distributions and presents techniques for handling them. Finally, he discusses strategies for detecting and dealing with model drift after a model is deployed in production.
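
For readers unfamiliar with the imbalance-handling techniques mentioned above, the Python sketch below (an illustration, not code from the talk) shows two common remedies: reweighting the loss by inverse class frequency and oversampling rare classes. The toy labels and the PyTorch-based setup are assumptions for illustration only.

    import numpy as np
    import torch
    from torch.utils.data import WeightedRandomSampler

    labels = np.array([0, 0, 0, 0, 0, 1, 0, 2])  # toy labels: class 0 dominates

    # Remedy 1: inverse-frequency class weights for a weighted cross-entropy loss.
    counts = np.bincount(labels)                           # samples per class
    class_weights = counts.sum() / (len(counts) * counts)  # rare classes weigh more
    loss_fn = torch.nn.CrossEntropyLoss(
        weight=torch.tensor(class_weights, dtype=torch.float32))

    # Remedy 2: oversample minority classes so each batch is roughly balanced.
    sample_weights = torch.tensor(class_weights[labels], dtype=torch.double)
    sampler = WeightedRandomSampler(sample_weights,
                                    num_samples=len(labels), replacement=True)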

DNN Training Data: How to Know What You Need and How to Get It – Tech Mahindra
Successful training of deep neural networks requires the right amounts and types of annotated training data. Collecting, curating and labeling this data is typically one of the most time-consuming aspects of developing a deep-learning-based solution. In this 2021 Embedded Vision Summit talk, Abhishek Sharma, Practice Head for Engineering AI at Tech Mahindra, discusses approaches useful when insufficient data is available, including transfer learning and data augmentation (the latter including the use of generative adversarial networks, or GANs). He also discusses techniques that can be helpful when data is plentiful, such as transforms, data path optimization and approximate computing. He illustrates these techniques and challenges via case studies from the healthcare and manufacturing industries.
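
As a concrete illustration of the first two techniques (a hedged sketch, not code from the talk), the Python snippet below freezes an ImageNet-pretrained backbone for transfer learning and composes simple augmentation transforms; the ResNet-18 backbone and the five-class head are assumptions.

    import torch
    import torchvision
    from torchvision import transforms

    # Transfer learning: reuse pretrained features; retrain only the classifier head.
    model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
    for p in model.parameters():
        p.requires_grad = False                           # freeze the backbone
    model.fc = torch.nn.Linear(model.fc.in_features, 5)   # 5 classes: hypothetical

    # Data augmentation: label-preserving variants stretch a small dataset further.
    augment = transforms.Compose([
        transforms.RandomHorizontalFlip(),
        transforms.RandomRotation(10),
        transforms.ColorJitter(brightness=0.2, contrast=0.2),
        transforms.ToTensor(),
    ])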

OPTIMIZING DEEP LEARNING EFFICIENCY

Efficient Deep Learning for 3D Point Cloud Understanding – Facebook
Understanding the 3D environment is a crucial computer vision capability required by a growing set of applications such as autonomous driving, AR/VR and AIoT. 3D visual information, captured by LiDAR and other sensors, is typically represented by a point cloud consisting of thousands of unstructured points. Developing computer vision solutions to understand 3D point clouds requires addressing several challenges, including how to efficiently represent and process 3D point clouds, how to design efficient on-device neural networks to process them, and how to easily obtain data to train 3D models and improve data efficiency. In this 2021 Embedded Vision Summit talk, Bichen Wu, Research Scientist at Facebook (now Meta) Reality Labs, shows how his company addresses these challenges as part of its “SqueezeSeg” research and presents a highly efficient, accurate, and data-efficient solution for on-device 3D point-cloud understanding.
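
One widely used idea behind efficient point-cloud processing, popularized by the SqueezeSeg line of work, is to project the unstructured point cloud onto a dense 2D “range image” so that efficient 2D CNNs can process it. The Python sketch below illustrates a minimal spherical projection; the grid size and sensor field-of-view values are assumptions, not details from the talk.

    import numpy as np

    def spherical_projection(points, H=64, W=512, fov_up=2.0, fov_down=-24.8):
        """points: (N, 3) array of x, y, z -> (H, W) range image."""
        x, y, z = points[:, 0], points[:, 1], points[:, 2]
        r = np.linalg.norm(points, axis=1)              # range of each point
        yaw = np.arctan2(y, x)                          # azimuth angle
        pitch = np.arcsin(z / np.maximum(r, 1e-8))      # elevation angle

        fov_up_r, fov_down_r = np.radians(fov_up), np.radians(fov_down)
        u = (1.0 - (pitch - fov_down_r) / (fov_up_r - fov_down_r)) * (H - 1)
        v = 0.5 * (yaw / np.pi + 1.0) * (W - 1)

        img = np.zeros((H, W), dtype=np.float32)
        img[np.clip(u.astype(int), 0, H - 1),
            np.clip(v.astype(int), 0, W - 1)] = r       # keep last-written range
        return img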

A Highly Data-Efficient Deep Learning Approach – Samsung
Many applications, such as medical imaging, lack the large amounts of data required to train popular CNNs to sufficient accuracy. Often, these same applications also suffer from imbalanced class distributions, which further degrade model accuracy. In this 2021 Embedded Vision Summit presentation, Patrick Bangert, Vice President of AI at Samsung, proposes a highly data-efficient methodology that achieves the same level of accuracy using significantly fewer labeled images and is insensitive to class imbalance. The approach is based on a training pipeline with two components: a CNN trained in an unsupervised setting to generate image feature representations, and a multiclass Gaussian process classifier trained on those representations and their labels in active learning cycles. Bangert demonstrates his company’s approach with a COVID-19 chest X-ray classifier, a setting where data is scarce and highly imbalanced, and shows that it achieves accuracy comparable to prior approaches while using only a fraction of the training data.
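
To make the two-stage pipeline concrete, here is a hedged Python sketch (an illustration, not Samsung’s implementation) in which precomputed CNN embeddings feed a scikit-learn Gaussian process classifier that is retrained in active learning cycles on the samples it is least confident about; the random features and labels stand in for real embeddings and oracle annotations.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessClassifier

    rng = np.random.default_rng(0)
    features = rng.normal(size=(1000, 64))   # stand-in for unsupervised CNN embeddings
    labels = rng.integers(0, 3, size=1000)   # stand-in for oracle-provided labels

    labeled = list(rng.choice(1000, size=20, replace=False))  # small seed set
    for cycle in range(5):
        gp = GaussianProcessClassifier().fit(features[labeled], labels[labeled])
        proba = gp.predict_proba(features)        # per-class probabilities
        uncertainty = 1.0 - proba.max(axis=1)     # least-confidence score
        uncertainty[labeled] = -1.0               # never re-query labeled samples
        query = np.argsort(uncertainty)[-10:]     # 10 most uncertain samples
        labeled.extend(query.tolist())            # "ask the oracle", then retrain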

UPCOMING INDUSTRY EVENTS

Accelerating TensorFlow Models on Intel Compute Devices Using Only 2 Lines of Code – Intel Webinar: August 25, 2022, 9:00 am PT

Edge Impulse’s FOMO Technology and Sony’s Computer Vision Platform: A Compelling Combination – Edge Impulse Webinar: August 30, 2022, 9:00 am PT

More Events

FEATURED NEWS

SmartCow Launches Ultron, an Edge AI Platform to Provide Sensor Fusion For Smart Cities and Autonomous Infrastructure Deployments

Basler Enhances Its Lighting Portfolio for Vision Applications

STMicroelectronics Extends Its STM32Cube.AI Development Tool with Support for Deeply Quantized Neural Networks

IDS’ uEye Warp10 with 10GigE is Faster Than Any Other IDS Industrial Camera

Allegro DVT Releases AV1 Decoder Silicon IP with Support for 12-bit Pixel Size and 4:4:4 Chroma Sub-Sampling

More News

EDGE AI AND VISION PRODUCT OF THE YEAR WINNER SHOWCASE

Synopsys’ DesignWare ARC EV7xFS Processor IP for Functional Safety (Best Automotive Solution) – Synopsys
Synopsys’ DesignWare ARC EV7xFS Processor IP for Functional Safety is the 2022 Edge AI and Vision Product of the Year Award winner in the Automotive Solutions category. The EV7xFS is a multicore SIMD vector digital signal processing (DSP) solution that combines seamless scalability of computer vision, DSP and AI processing with state-of-the-art safety features for real-time applications in next-generation automobiles. The processor family scales from the single-core EV71FS to the dual-core EV72FS and the quad-core EV74FS. The multicore vector DSP products include L1 cache coherence and a software tool chain that supports OpenCL C or C/C++ and automatically partitions algorithms across multiple cores. All cores share the same programming model and a single set of development tools, the ARC MetaWare EV Development Toolkit for Safety. The EV7xFS family also includes fine-grained power management for maximizing power efficiency. AI workloads run on the vector DSP cores and can be scaled to higher levels of performance with optional neural network accelerators. The EV7xFS architecture was designed from the ground up around the latest safety concepts, an approach that was critical to achieving the right balance of performance, power, area and safety for today’s automotive SoCs, which require safety levels up to ASIL D for autonomous driving.

Please see here for more information on Synopsys’ DesignWare ARC EV7xFS Processor IP for Functional Safety. The Edge AI and Vision Product of the Year Awards celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of edge AI and computer vision products. Winning a Product of the Year award recognizes a company’s leadership in edge AI and computer vision as evaluated by independent industry experts.


Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411