“Human-centric Computer Vision with Synthetic Data,” a Presentation from Unity Technologies

Alex Thaman, Chief Software Architect at Unity Technologies, presents the “Human-centric Computer Vision with Synthetic Data” tutorial at the May 2022 Embedded Vision Summit.

Companies are continuing to accelerate the adoption of computer vision to detect, identify and understand humans from camera imagery. Unity sees these human-centric use cases in a growing range of applications including augmented reality, self-checkout in retail, automated surveillance and security, player tracking in sports, and consumer electronics. Creating robust solutions for human-centric computer vision applications requires large, balanced, carefully curated labeled data sets. But acquiring real-world image and video data of humans is challenging due to concerns around bias, privacy and safety. And labeling and curating real-world data is expensive and error-prone.

Synthetic data offers an elegant and cost-effective way to address these challenges. In this presentation, Thaman shows how Unity’s tools and services can be used to quickly generate perfectly labeled, privacy-compliant, unbiased datasets for human-centric computer vision.

See here for a PDF of the slides.
