Sébastien Taylor, Vice President of Research and Development at Au-Zone Technologies, presents the “Introduction to Semantic Segmentation” tutorial at the May 2023 Embedded Vision Summit.
Vision applications often rely on object detectors, which determine the nature and location of objects in a scene. But many vision applications require a different type of visual understanding: semantic segmentation. Semantic segmentation classifies each pixel of an image, associating every pixel with an object class (e.g., pavement, pedestrian). This is required, for example, for separating foreground objects from background, or for identifying drivable surfaces for autonomous vehicles. Related types of functionality are instance segmentation, which associates each pixel with a specific object instance (e.g., pedestrian #4), and panoptic segmentation, which combines the functionality of semantic and instance segmentation.
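To make the per-pixel classification idea concrete, here is a minimal sketch of the final step of semantic segmentation. The class names and the tiny score array are illustrative assumptions, not from the talk; in a real pipeline the per-pixel class scores would come from a segmentation network's output layer.

```python
import numpy as np

# Hypothetical per-pixel class scores for a tiny 2x3 image with 3 classes
# (assumed labels: 0 = pavement, 1 = pedestrian, 2 = background).
# In practice these scores are produced by a segmentation network.
scores = np.array([
    [[0.9, 0.05, 0.05], [0.1, 0.8, 0.1], [0.2, 0.2, 0.6]],
    [[0.7, 0.2, 0.1],   [0.3, 0.6, 0.1], [0.1, 0.1, 0.8]],
])  # shape (H, W, num_classes)

# Semantic segmentation assigns each pixel the class with the highest score,
# producing a dense label map the same height and width as the image.
label_map = scores.argmax(axis=-1)
print(label_map)
# [[0 1 2]
#  [0 1 2]]
```

Instance and panoptic segmentation start from the same dense prediction idea but additionally distinguish individual object instances within a class.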
In this talk, Taylor introduces deep learning-based semantic, instance and panoptic segmentation. He explores the network topologies commonly used and how they are trained. He also discusses metrics for evaluating segmentation algorithm output, and considerations when selecting segmentation algorithms. Finally, he identifies resources useful for developers getting started with segmentation.
See here for a PDF of the slides.