Optimizing ML Systems for Real-World Deployment
In the real world, machine learning models are components of a broader software application or system. In this talk from the 2021 Embedded Vision Summit, Danielle Dean, Technical Director of Machine Learning at iRobot, explores the importance of optimizing the system as a whole, not just individual ML models. Based on experience building and deploying deep-learning-based systems for one of the largest fleets of autonomous robots in the world (the Roomba!), Dean highlights critical areas requiring attention for system-level optimization, including data collection, data processing, model building, system application, and testing. She also shares recommendations for how to think about and achieve optimization of the whole system.
A Practical Guide to Implementing Machine Learning on Embedded Devices
Deploying machine learning onto edge devices requires many choices and trade-offs. Fortunately, processor designers are adding inference-enhancing instructions and architectures to even the lowest-cost MCUs, tool developers are constantly discovering optimizations that extract a little more performance out of existing hardware, and ML researchers are refactoring the math to achieve better accuracy using faster operations and fewer parameters. In this presentation from the 2021 Embedded Vision Summit, Nathan Kopp, Principal Software Architect for Video Systems at the Chamberlain Group, takes a high-level look at what is involved in running a DNN model on existing edge devices, exploring some of the evolving tools and methods that are finally making this dream a reality. He also takes a quick look at a practical example of running a CNN object detector on low-compute hardware.
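One example of the kind of math refactoring mentioned above (trading float operations for faster low-precision integer ones) is affine int8 quantization, a common step when preparing a DNN for low-compute hardware. The sketch below is illustrative only and is not taken from the talk; the function names are hypothetical:

```python
def affine_quantize(values, num_bits=8):
    """Map a list of floats to unsigned integers using a scale and zero point.

    This is the asymmetric (affine) scheme commonly used in post-training
    quantization: real value r ~ scale * (q - zero_point).
    """
    qmin, qmax = 0, 2 ** num_bits - 1
    lo, hi = min(values), max(values)
    hi = max(hi, lo + 1e-8)  # guard against a zero-width range
    scale = (hi - lo) / (qmax - qmin)
    zero_point = int(round(qmin - lo / scale))
    zero_point = max(qmin, min(qmax, zero_point))
    # Quantize each value, clamping to the representable integer range.
    q = [max(qmin, min(qmax, int(round(v / scale)) + zero_point)) for v in values]
    return q, scale, zero_point


def affine_dequantize(q, scale, zero_point):
    """Recover approximate float values from quantized integers."""
    return [(qi - zero_point) * scale for qi in q]
```

For an input range like [-1.0, 1.0], the round trip through 8-bit integers reproduces each value to within about half a quantization step (roughly 0.004 here), which is why int8 inference often loses little accuracy.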
How to Optimize a Camera ISP with Atlas to Automatically Improve Computer Vision Accuracy
Computer vision (CV) works on images pre-processed by a camera’s image signal processor (ISP). For the ISP to provide subjectively “good” image quality (IQ), its parameters must be manually tuned by imaging experts over many months for each specific lens / sensor configuration. However, “good” visual IQ isn’t necessarily what’s best for specific CV algorithms. In this session from the 2021 Embedded Vision Summit, Marc Courtemanche, Atlas Product Architect at Algolux, shows how to use the Atlas workflow to automatically optimize an ISP to maximize computer vision accuracy. Easy to access and deploy, the workflow can improve CV results by up to 25 mAP points while reducing time and effort by more than 10x versus today’s subjective manual IQ tuning approaches.
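The core idea in the abstract above (treating ISP parameters as knobs to tune against a CV accuracy metric rather than subjective image quality) can be framed as black-box optimization. The following is a minimal, generic sketch of that framing using random search; it is not Algolux's Atlas workflow, and the parameter names and score function are hypothetical:

```python
import random


def random_search(score_fn, param_ranges, n_trials=50, seed=0):
    """Black-box tuning: sample parameter sets, keep the best-scoring one.

    score_fn   -- callable taking a dict of parameters and returning a score
                  to maximize (e.g., detection accuracy on a validation set)
    param_ranges -- dict mapping parameter name to a (low, high) float range
    """
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        # Sample one candidate configuration uniformly within each range.
        params = {name: rng.uniform(lo, hi) for name, (lo, hi) in param_ranges.items()}
        score = score_fn(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score
```

In a real pipeline, `score_fn` would run images through the ISP with the candidate parameters and evaluate the downstream detector (e.g., by mAP); production tools typically use far more sample-efficient optimizers than random search.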
10 Things You Must Know Before Designing Your Own Camera
Computer vision requires vision. This is why companies that use computer vision often decide they need to create a custom camera module (and perhaps other custom sensors) that meets the specific needs of their unique application. This 2021 Embedded Vision Summit presentation from Alex Fink, consultant at Panopteo, helps you understand how cameras differ from other types of electronic products; what mistakes companies often make when attempting to design their own cameras; and what you can do to end up with cameras that are built on spec, on schedule, and on budget.