LETTER FROM THE EDITOR
Dear Colleague,
It’s almost here! Next week, May 22-24 to be exact, the Embedded Vision Summit returns to the Santa Clara Convention Center in Santa Clara, California. As the premier conference and trade show for practical, deployable computer vision and edge AI, the Summit focuses on empowering product creators to bring perceptual intelligence to their products. This year’s Summit will feature 100+ expert speakers, 75+ exhibitors and hundreds of demos across three days of presentations, exhibits and Deep Dive sessions. If you haven’t already, grab your pass today, and be sure to use promo code SUMMIT23-NL to save 15% when you register by this Sunday, May 21. Register now and tell a friend! We’ll see you there!
Also be sure to check out the latest press releases from BrainChip, DEEPX, FRAMOS and Vision Components, detailing their activities at the event; they’re just a few of the dozens of companies participating in the Summit this year.
Brian Dipert
Editor-in-Chief, Edge AI and Vision Alliance
OBJECT DETECTION AND INTERACTION MODELING
Understanding DNN-Based Object Detectors
Unlike image classifiers, which merely identify the most important objects in (or attributes of) an image, object detectors determine where objects of interest are located within an image. Consequently, object detectors are central to many computer vision applications, including autonomous vehicles and augmented reality. In this presentation from the 2022 Embedded Vision Summit, Azhar Quddus, Senior Computer Vision Engineer at Au-Zone Technologies, provides a technical introduction to deep-neural-network-based object detectors. He explains how these algorithms work and how they have evolved in recent years, using popular object detectors as examples. Quddus examines some of the trade-offs to consider when selecting an object detector for an application and touches on accuracy measurement. He also compares the performance of various object detection models.
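Quddus covers accuracy measurement in the talk itself; as a taste of the topic, here is a minimal sketch (ours, not from the presentation) of intersection over union (IoU), the overlap metric underlying common detector accuracy scores such as mAP:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes in (x1, y1, x2, y2) form."""
    ix1 = max(box_a[0], box_b[0])  # intersection rectangle corners
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou((10, 10, 50, 50), (30, 30, 70, 70)))  # ~0.14
```

In standard evaluation protocols, a predicted box is typically counted as correct when its IoU with a ground-truth box exceeds a threshold such as 0.5.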
A Cost-Effective Approach to Modeling Object Interactions on the Edge
Determining bird’s eye view (BEV) object positions and tracks, and modeling the interactions among objects, is vital for many applications, including understanding human interactions for security and road object interactions for automotive applications. With traditional methods, this is extremely challenging and expensive due to the supervision required in the training process. In this presentation from the 2022 Embedded Vision Summit, Arun Kumar, Perception Engineer at Nemo @ Ridecell, introduces a weakly supervised end-to-end computer vision pipeline for modeling object interactions in 3D. Nemo @ Ridecell’s architecture trains a unified network in a weakly supervised manner to estimate 3D object positions, jointly learning to regress 2D object detections and the scene’s depth in a single feed-forward CNN pass, and then to model object tracks. The method learns to model each object as a BEV point, without the need for 3D or BEV annotations for training and without supplemental (e.g., LiDAR) data. It achieves results comparable to the state of the art while significantly reducing development costs and computation requirements.
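For readers unfamiliar with the geometry involved, the sketch below shows how a 2D detection plus an estimated depth can be lifted to a BEV point via standard pinhole back-projection. It is illustrative only, not Ridecell’s implementation (their network learns the depth end to end), and the camera intrinsics fx and cx are hypothetical calibration values:

```python
import numpy as np

def detection_to_bev(box_xyxy, depth_m, fx, cx):
    """Lift a 2D detection plus an estimated depth to a bird's-eye-view point.

    Standard pinhole back-projection; here the depth is simply passed in,
    whereas the method described in the talk learns it jointly.
    """
    u = (box_xyxy[0] + box_xyxy[2]) / 2.0  # box center column, pixels
    lateral = (u - cx) * depth_m / fx      # meters left/right of the camera axis
    forward = depth_m                      # meters ahead of the camera
    return np.array([lateral, forward])

# Hypothetical 640x480 camera: ~500 px focal length, principal point at center.
print(detection_to_bev((400, 200, 440, 280), depth_m=12.0, fx=500.0, cx=320.0))
# -> [ 2.4 12. ]  (2.4 m to the right, 12 m ahead)
```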
MLOPS INSIGHTS
MLOps: Managing Data and Workflows for Efficient Model Development and Deployment
Machine learning operations (MLOps) is the engineering field focused on techniques for developing and deploying machine learning solutions at scale. As the name suggests, MLOps is a combination of machine learning development (“ML”) and software/IT operations (“Ops”). Blending these two worlds is particularly complex given their divergent natures: ML development is characterized by research and experimentation, dealing with large amounts of data and complex operations, while software and IT operations aim at streamlining software deployment in products. Typical problems addressed by MLOps include data management (labeling, organization, storage), ML model and pipeline training repeatability, error analysis, model integration and deployment, and model monitoring. In this talk from the 2022 Embedded Vision Summit, Konstantinos Balafas, Head of AI Data, and Carlo Dal Mutto, Director of Engineering, both of Airbus, present practical MLOps techniques for tackling a variety of MLOps needs. They illustrate these techniques with real-world examples from their work developing autonomous flying capabilities as part of the Wayfinder team at Acubed, the Silicon Valley innovation center of Airbus.
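Training repeatability, one of the MLOps problems listed above, comes down to controlling every source of run-to-run variation. The sketch below is a generic illustration of one small piece of that, pinning random seeds and fingerprinting the hyperparameter config so a run can be matched to its exact settings later; it is not the Wayfinder team’s tooling:

```python
import hashlib
import json
import random

import numpy as np

def start_reproducible_run(config: dict) -> str:
    """Pin random seeds and fingerprint the config so a run can be re-created later."""
    seed = config["seed"]
    random.seed(seed)
    np.random.seed(seed)
    # torch.manual_seed(seed)  # add framework-specific seeding as needed
    blob = json.dumps(config, sort_keys=True).encode("utf-8")
    return hashlib.sha256(blob).hexdigest()[:12]  # short ID to log with artifacts

run_id = start_reproducible_run({"seed": 42, "lr": 1e-3, "epochs": 20})
print(f"run {run_id}: config pinned, seeds set")
```

Logging the fingerprint alongside model artifacts and dataset versions is what makes “which settings produced this model?” answerable months later.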
Responsible AI and ModelOps in Industry: Practical Challenges and Lessons Learned
How do we develop machine learning models and systems taking fairness, explainability and privacy into account? How do we operationalize models in production, and address their governance, management and monitoring? Model validation, monitoring and governance are essential for building trust and adoption of computer-vision-based AI systems in high-stakes domains such as healthcare and autonomous driving. In this presentation from the 2022 Embedded Vision Summit, Krishnaram Kenthapadi, Chief Scientist at Fiddler AI, first motivates the need for adopting a “responsible AI by design” approach when developing AI/ML models and systems for different consumer and enterprise applications. He then focuses on the application of responsible AI and ModelOps techniques in practice through industry case studies. He discusses the sociotechnical dimensions and practical challenges, and concludes with the key takeaways and open challenges.
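Model monitoring, one of the ModelOps concerns Kenthapadi discusses, often starts with detecting distribution drift. The sketch below is a generic example (not Fiddler’s product or the talk’s method) that compares production-time model confidence scores against a validation-time baseline using a two-sample Kolmogorov-Smirnov test:

```python
import numpy as np
from scipy.stats import ks_2samp

def confidence_drift(baseline, live, alpha=0.01):
    """Flag drift when live confidence scores diverge from the baseline distribution."""
    result = ks_2samp(baseline, live)  # two-sample Kolmogorov-Smirnov test
    return result.pvalue < alpha, result.statistic

rng = np.random.default_rng(0)
baseline = rng.beta(8, 2, size=5000)  # stand-in for validation-time confidences
live = rng.beta(5, 3, size=5000)      # stand-in for production confidences
drifted, stat = confidence_drift(baseline, live)
print(f"drift detected: {drifted} (KS statistic {stat:.3f})")
```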
UPCOMING INDUSTRY EVENTS
Embedded Vision Summit: May 22-24, 2023, Santa Clara, California
More Events
FEATURED NEWS
New Toolchain and Software Package from STMicroelectronics Ease Development of Edge Processing with Intelligent Inertial Sensors
Teledyne Introduces Shutterless Version of Its Compact Thermal Camera Core
18 Megapixel Sensor Complements Portfolio in Basler’s ace 2 Camera Series
Flex Logix Announces InferX High Performance IP for DSP and AI Inference
Qualcomm Introduces Cutting-edge IoT Solutions to Enable New Industrial Applications and Help Scale the IoT Ecosystem
More News
EMBEDDED VISION SUMMIT PARTNER SHOWCASE
inspect America
inspect America’s Spring 2023 edition is now available! Read the new issue for free and learn everything you need to know about embedded vision, including an update on the state of the embedded vision market, an interview with Jeff Bier, the organizer of the Embedded Vision Summit, and details on the most exciting products you will find at the event.
AspenCore
The Edge AI and Vision Alliance is delighted to partner on the Embedded Vision Summit with AspenCore and its flagship industry publications: EE Times, embedded, and EDN. If you’re working in embedded vision, you owe it to yourself to subscribe to these great resources. And like the best things in life, they’re free! Subscribe here.