Now available for on-demand viewing is the archived recording of the webinar “Building Scalable Edge ML Solutions with NXP i.MX Applications Processors and AWS Cloud Services,” co-presented by Ali Osman Örs, Director of AI/ML Strategy and Technologies for Edge Processing at NXP Semiconductors, and Jack Ogawa, responsible for Strategic Semiconductor Partnerships for IoT at Amazon Web Services. From the event page:
Machine Learning (ML) technology is fast becoming ubiquitous as the heart of differentiated IoT devices, often defining the smart capabilities they deliver. ML model monitoring, maintenance, and updatability are essential to ensure that these smart capabilities continue to deliver the promised value throughout the device's lifecycle, and they require an MLOps strategy tailored for IoT scale.
In this session, NXP and AWS explore how to build ML solutions, deploy them to many edge devices at scale, and securely support MLOps for maintaining models throughout their lifecycle. The audience will learn how to address common MLOps challenges in the context of an IoT application running across multiple embedded devices.
What You Will Learn:
- Building ML solutions with AWS cloud services such as Amazon SageMaker and AWS IoT Greengrass
- Integrating cloud services with edge inference solutions from NXP
- Deploying to an edge board based on the i.MX 8M Plus applications processor
- Managing a multi-device fleet with Amazon SageMaker Edge Manager (see the sketch after this list)
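
To give a flavor of the fleet-management topic above, here is a minimal sketch using the AWS SDK for Python (boto3) to create a SageMaker Edge Manager device fleet and register an i.MX 8M Plus board with it. This is not code from the webinar; the fleet name, device name, IAM role ARN, S3 bucket, and IoT thing name are all placeholder assumptions.

```python
"""Minimal sketch: create a SageMaker Edge Manager device fleet and
register an i.MX 8M Plus board as a device. All names/ARNs are placeholders."""
import boto3

sm = boto3.client("sagemaker", region_name="us-west-2")  # region is an assumption

# Create a fleet that Edge Manager uses to group edge devices; data captured
# from on-device inference is written to the S3 location configured below.
sm.create_device_fleet(
    DeviceFleetName="imx8mplus-demo-fleet",                       # placeholder
    RoleArn="arn:aws:iam::123456789012:role/SageMakerEdgeRole",   # placeholder
    OutputConfig={"S3OutputLocation": "s3://my-edge-demo-bucket/fleet-data/"},
)

# Register one board with the fleet. The IoT thing name links the device to
# its AWS IoT Core identity (for example, the one used by AWS IoT Greengrass).
sm.register_devices(
    DeviceFleetName="imx8mplus-demo-fleet",
    Devices=[
        {
            "DeviceName": "imx8mplus-evk-01",                     # placeholder
            "IotThingName": "imx8mplus-evk-01-thing",             # placeholder
            "Description": "i.MX 8M Plus board running the Edge Manager agent",
        }
    ],
)
```

On the device side, an Edge Manager agent running on the board would load the packaged model and report status back to this fleet; how the agent and model are actually deployed to the i.MX 8M Plus board is covered in the session itself.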
For more information and to access the recording, please see the event page.