Tom Michiels, System Architect for Embedded Vision Processors at Synopsys, presents the "Moving CNNs from Academic Theory to Embedded Reality" tutorial at the May 2017 Embedded Vision Summit.
In this presentation, you will learn to recognize and avoid the pitfalls of moving from an academic CNN/deep learning graph to a commercial embedded vision design. You will also learn about the cost vs. accuracy trade-offs of CNN bit width, about balancing internal memory size against external memory bandwidth, and about the importance of keeping data local to the CNN processor to reduce external memory bandwidth. Michiels also walks through an example customer design for a power- and cost-sensitive automotive scene segmentation application that requires high flexibility to adapt to future CNN graph evolution.
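As a rough illustration of the bit-width trade-off mentioned above (this sketch is not from the talk; the data and the quantize_symmetric helper are hypothetical), uniformly quantizing CNN weights to fewer bits cuts storage and memory bandwidth roughly in proportion to the bit width, at the price of growing quantization error:

    # Minimal sketch, assuming symmetric uniform quantization of a toy weight tensor.
    import numpy as np

    def quantize_symmetric(weights, bits):
        """Quantize a weight tensor to `bits`-bit signed integers, then dequantize."""
        levels = 2 ** (bits - 1) - 1              # e.g. 127 for 8-bit signed
        scale = np.abs(weights).max() / levels    # map the largest |w| to the top code
        q = np.round(weights / scale)             # integer codes
        return np.clip(q, -levels, levels) * scale

    rng = np.random.default_rng(0)
    weights = rng.normal(0.0, 0.05, size=100_000).astype(np.float32)  # stand-in CNN layer

    for bits in (16, 12, 8, 4):
        err = np.abs(weights - quantize_symmetric(weights, bits)).mean()
        print(f"{bits:2d}-bit: mean abs error = {err:.6f}, storage = {bits / 32:.0%} of float32")

Running this shows the error staying small down to roughly 8 bits and then climbing quickly, which is the kind of cost vs. accuracy curve an embedded CNN designer weighs when choosing a processing bit width.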