Ihor Starepravo, Embedded Practice Director at Luxoft, demonstrates the company's latest embedded vision technologies and products at the May 2017 Embedded Vision Summit. Specifically, Starepravo shows how an embedded platform extracts a depth map from live video. This computationally demanding process runs in real time, allowing devices to understand complex, dynamic 3D scenes and to react to human gestures and other movements. Even on a relatively old embedded platform, the optimized implementation runs three times faster than non-optimized OpenCV algorithms.
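The demo's exact pipeline isn't described, but depth maps are commonly derived from a stereo image pair by block matching: for each pixel in the left image, search for the horizontal shift (disparity) that best aligns a small patch with the right image; depth is inversely proportional to disparity. This is the idea behind OpenCV's StereoBM, which optimized implementations like the one shown accelerate. A minimal NumPy sketch of the concept (all function names and parameters here are illustrative, not from the demo):

```python
import numpy as np

def disparity_map(left, right, max_disp=16, block=5):
    """Naive block-matching stereo: for each pixel in the left image,
    find the horizontal shift of the best-matching block in the right
    image using a sum-of-absolute-differences (SAD) cost."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y-half:y+half+1, x-half:x+half+1].astype(np.int32)
            # Cost of each candidate disparity: compare the left patch
            # against the right patch shifted d pixels to the left.
            costs = [
                np.abs(patch - right[y-half:y+half+1,
                                     x-d-half:x-d+half+1].astype(np.int32)).sum()
                for d in range(max_disp)
            ]
            disp[y, x] = int(np.argmin(costs))
    return disp

# Synthetic check: build a "right" view shifted 4 pixels, so the
# recovered disparity in the image interior should be 4.
rng = np.random.default_rng(0)
left = rng.integers(0, 256, (32, 64), dtype=np.uint8)
right = np.roll(left, -4, axis=1)  # right[:, i] == left[:, i+4]
d = disparity_map(left, right)
```

Real-time systems replace these per-pixel Python loops with vectorized, fixed-point, and hardware-accelerated kernels, which is where the reported 3x speedup over non-optimized OpenCV comes from.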