Google’s Project Tango: 3D Mapping Gets Personal


The second-generation HTC One isn't, I'm happy to report, the only recent smartphone to tote a back-side multi-camera array. In fact, according to a Chrome browser issue tracker post (with thanks to TechCrunch for the heads-up), Google's Project Tango prototype carries three rear cameras, along with one more on the front.

The above video, published by Google, showcases the company's Project Tango prototype, which was unveiled a month ago and which, per the project website:

is a 5” Android phone containing highly customized hardware and software designed to track the full 3-dimensional motion of the device as you hold it, while simultaneously creating a map of the environment. These sensors allow the phone to make over a quarter million 3D measurements every second, updating its position and orientation in real time and combining that data into a single 3D model of the space around you.
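The fusion step described in that quote can be sketched in miniature: each depth frame arrives in the camera's own coordinate frame, and the tracked pose (orientation plus position) is used to transform those points into a single world-frame cloud. This is my own illustrative sketch of the general idea, not Tango's actual pipeline; the frame data and single-axis rotation are invented for the example.

```python
import math

def transform(point, yaw, translation):
    """Rotate a camera-frame point about the vertical axis by `yaw`
    (radians), then translate it by the device's world position."""
    x, y, z = point
    c, s = math.cos(yaw), math.sin(yaw)
    tx, ty, tz = translation
    return (c * x - s * z + tx, y + ty, s * x + c * z + tz)

# Each entry: (pose yaw, device position, depth points seen that frame).
frames = [
    (0.0,           (0.0, 0.0, 0.0), [(0.0, 0.0, 1.0)]),
    (math.pi / 2.0, (1.0, 0.0, 0.0), [(0.0, 0.0, 1.0)]),
]

# Accumulate every frame's points into one world-frame point cloud.
world_cloud = []
for yaw, position, points in frames:
    world_cloud.extend(transform(p, yaw, position) for p in points)
```

Repeating this for every depth frame, at the device's measurement rate, is what turns a stream of per-frame range samples into one persistent 3D model of the room.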

At Project Tango's core is Movidius' Myriad 1 vision processing chip. Surrounding it, per the tracker post:

It contains several special cameras/ranging sensors:
– a standard 4MP colour backside camera.
– a fisheye camera (180-degree field of view, FOV).
– a depth camera (320×180 @ 5 Hz).
– a front camera with a 120-degree FOV (similar to the human eye's FOV).
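The depth camera spec above lines up with the "over a quarter million 3D measurements every second" claim from the project website; a quick back-of-the-envelope check:

```python
# Depth camera per the tracker post: 320x180 resolution at 5 Hz.
width, height, rate_hz = 320, 180, 5

points_per_frame = width * height               # 57,600 depth samples/frame
points_per_second = points_per_frame * rate_hz  # 288,000 samples/second

print(points_per_second)  # 288000 -- indeed "over a quarter million"
```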

And fortunately, thanks to project supplier OmniVision Technologies (PDF), we know quite a bit more about those cameras:

OmniVision Technologies, Inc. (NASDAQ: OVTI), a leading developer of advanced digital imaging solutions, today announced that it is working with Google's Advanced Technology and Projects (ATAP) team to develop ground-breaking vision-based mobile devices, capable of tracking and mapping environments and motion in 3D. As part of the collaboration, OmniVision's new OV4682 and OV7251 image sensors provide high performance imaging functionality to the project's Android-based smartphone and development kit….

As the main camera, the OV4682 is the eye of Project Tango's mobile device. The OV4682 is a 4-megapixel RGB IR image sensor that captures high-resolution images and video as well as IR information, enabling depth analysis. The sensor features a 2-micron OmniBSI-2(TM) pixel and records 4-megapixel images and video in a 16:9 format at 90 frames per second (FPS), with a quarter of the pixels dedicated to capturing IR.
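The "quarter of the pixels dedicated to capturing IR" arrangement means the raw mosaic interleaves color and IR samples that must be separated before use. Here is a hedged sketch of that idea; the assumption that the IR site sits at one fixed position in each 2×2 cell is mine for illustration, not the documented OV4682 pattern.

```python
def split_rgbir(mosaic):
    """Separate IR samples from color samples in a 2x2-cell RGB-IR mosaic,
    assuming (for illustration) the bottom-right site of each cell is IR."""
    color, ir = [], []
    for r, row in enumerate(mosaic):
        for c, value in enumerate(row):
            (ir if (r % 2 == 1 and c % 2 == 1) else color).append(value)
    return color, ir

# One 2x2 cell: three color sites plus one (assumed) IR site.
mosaic = [
    [10, 20],
    [30, 99],  # 99 sits at the assumed IR position
]
color, ir = split_rgbir(mosaic)  # color == [10, 20, 30], ir == [99]
```

The same split, applied across the full 4-megapixel array, yields a color image alongside an IR plane that can feed the depth analysis the press release describes.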

The OV7251 CameraChip sensor is capable of capturing VGA resolution video at 100 FPS using a global shutter to reduce or eliminate unwanted image artifacts, which occur with traditional CMOS image sensors as a result of motion during image capture. The low-power sensor plays a critical role in Project Tango's mobile device, providing excellent low-light sensitivity and motion tracking information to enable utilization of accurate and detailed device orientation data.
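The global-shutter point is worth unpacking: a traditional rolling-shutter CMOS sensor exposes each row slightly later than the one above it, so a moving subject lands in a different column on each row and appears skewed. This toy model (my own sketch, not OmniVision code) makes the difference concrete:

```python
ROWS, SPEED = 4, 1.0  # image rows; edge moves 1 column per row-readout time

def edge_column(t, start=0.0, speed=SPEED):
    """Column position of a vertical edge moving at `speed` columns/unit time."""
    return start + speed * t

# Global shutter: every row is sampled at the same instant t = 0,
# so the moving edge stays vertical.
global_shutter = [edge_column(0.0) for _ in range(ROWS)]        # [0, 0, 0, 0]

# Rolling shutter: row r is sampled one row-time later than row r-1,
# so the edge shifts per row and the object skews.
rolling_shutter = [edge_column(float(r)) for r in range(ROWS)]  # [0, 1, 2, 3]
```

Skew of that kind corrupts the feature positions a motion tracker relies on, which is why a 100 FPS global-shutter sensor suits the tracking role.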

For the moment, at least, Google is providing only a few hundred Project Tango prototypes to interested developers. One of those lucky partners, Matterport (which makes computer vision and perceptual computing solutions, such as software that maps and creates 3D reconstructions of indoor spaces), has published a video of a real-life room capture using a Project Tango handset:

As TechCrunch notes, "your next smartphone might be able to not only see, but also to understand its surroundings." Sounds like embedded vision to me! And apparently there's no shortage of enthusiasm out there for the concept; as I write this, the Google video has achieved more than 2.5 million views.
