This blog post was originally published at Vision Systems Design's website. It is reprinted here with the permission of PennWell.
The development of vision algorithms, software and systems is very much an empirical undertaking, informed more by experimentation and experience than by theory. The good news is that you don’t have to do all of the experimentation yourself. By following in the footsteps of those who have come before you, you can gain valuable insights to apply to your own development efforts. Learning from the experience of others was the inspiration for three talks presented at the recent Embedded Vision Summit.
In "Lessons Learned from Bringing Mobile and Embedded Vision Products to Market," Tim Hartley, Product Manager in the Personal Mobile Compute Business Line at ARM, presents case studies in which tough challenges put product development at risk, and explores how they are being addressed by leading product developers. Technical and business challenges abound in vision product development. Engineers can quickly come up against thermal and power limitations, for example. And software may perform well on one platform, but poorly on another, similar platform. These are some of the problems that can sink a product, and Hartley provides a number of insightful suggestions for addressing them. Here's a preview:
Next is "Real-world Vision Systems Design: Challenges and Techniques," delivered by Yury Gorbachev, Principal Engineer at Itseez. Implementing robust vision capabilities for demanding applications on embedded platforms is complex and challenging, and requires extensive, deep knowledge and hands-on experience in many areas, such as embedded systems architecture, processor-specific acceleration techniques and memory access patterns. Mistakes in any of these areas can significantly delay, if not bring an outright end to, your project. Gorbachev explores some of the most common pitfalls of vision product development and presents practical ways of avoiding them, drawing on examples from real-world projects. Here's a preview:
Last, but not least, is "Making Computer Vision Software Run Fast on Your Embedded Platform," from Alexey Rybakov, Senior Director at LUXOFT. Many computer vision algorithms perform well on desktop PCs, Rybakov notes, but struggle on resource-constrained embedded platforms. This how-to talk provides an overview of optimization methods that make vision software run fast on the low-power, small-footprint hardware widely used in automotive, surveillance and mobile devices. The presentation explores practical aspects of algorithm and software optimization, such as thinning input data, using dynamic regions of interest, mastering data pipelines and memory access, overcoming compiler inefficiencies, and more. Here's a preview:
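To make a couple of those techniques concrete, here's a minimal sketch (mine, not from Rybakov's talk) of what "thinning input data" and a "dynamic region of interest" can look like in practice, written in C++ with OpenCV. The camera source, the 0.5 downscale factor, the motion-difference threshold and the ROI margin are all illustrative assumptions, not recommendations from the presentation:

```cpp
// Illustrative sketch: reduce per-frame work by (1) downscaling the input
// before processing ("thinning") and (2) restricting the heavy work to a
// dynamic region of interest (ROI) derived from where motion was last seen.
// Camera index, scale factor, threshold and margin are arbitrary choices.

#include <algorithm>
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(0);          // hypothetical camera source
    if (!cap.isOpened()) return 1;

    const double scale = 0.5;         // thinning: process at half resolution
    const int margin = 20;            // padding added around the detected ROI
    cv::Rect roi;                     // empty until motion has been localized

    cv::Mat frame, small, prevSmall;
    while (cap.read(frame)) {
        // Thinning: every downstream operation touches 4x fewer pixels.
        cv::resize(frame, small, cv::Size(), scale, scale, cv::INTER_AREA);
        if (prevSmall.empty()) { prevSmall = small.clone(); continue; }

        // Dynamic ROI: if motion was localized last frame, only diff that
        // region; otherwise fall back to the full (downscaled) frame.
        cv::Rect full(0, 0, small.cols, small.rows);
        cv::Rect region = (roi.area() > 0) ? roi : full;

        cv::Mat diff;
        cv::absdiff(small(region), prevSmall(region), diff);
        cv::cvtColor(diff, diff, cv::COLOR_BGR2GRAY);
        cv::threshold(diff, diff, 25, 255, cv::THRESH_BINARY);

        // Update the ROI from the bounding box of changed pixels, padded by
        // a margin and clipped to the frame; reset it when nothing moved.
        cv::Rect box = cv::boundingRect(diff);
        if (box.area() > 0) {
            box.x = std::max(0, box.x + region.x - margin);
            box.y = std::max(0, box.y + region.y - margin);
            box.width += 2 * margin;
            box.height += 2 * margin;
            roi = box & full;
        } else {
            roi = cv::Rect();
        }
        prevSmall = small.clone();
    }
    return 0;
}
```

The point is simply that the heavy per-pixel work touches a half-resolution image, and usually only the part of it that changed recently, which on embedded hardware can be the difference between a frame rate that fits your power budget and one that doesn't.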
If you've got a good vision system development case study, I'd love to hear about it and (with your permission, of course) consider sharing it with your fellow readers in a future column or at an upcoming Embedded Vision Alliance event. Drop me an email when you get a chance!
Regards,
Brian Dipert
Editor-in-Chief, Embedded Vision Alliance
[email protected]