This article was originally published on NVIDIA's blog. It is reprinted here with NVIDIA's permission.
Everybody hates driving through cross-town traffic. This week, Google said they’re doing something about it, announcing that they’ve shifted the focus of their Self-Driving Car Project from cruising down freeways to mastering city streets.
The blog post, by Google’s Chris Urmson, goes to the heart of what NVIDIA’s been mastering for 20 years: visual computing, which will be key to the deployment and success of advanced driver assistance systems.
The engine behind visual computing is the GPU (graphics processing unit), which delivers the processing power to handle what Chris refers to as the need to “detect hundreds of distinct objects simultaneously,” “pay attention” and “never get tired or distracted.”
Hitting the road: We’re working to put our advanced visual computing technology to work on the streets.
He’s describing the need for computer vision, image processing and machine learning – not only to build the brain that goes inside the car, but also to handle the real-time processing that makes instant decision-making possible during a drive.
Outfitted with a 360-degree laser, radar and cameras, the Google Self-Driving Lexus RX 450h collects an incredible amount of visual data – a reported 1GB per second. To put that in perspective, I consume about 3-4GB of data on my smartphone over the course of a month; at that rate, the car generates my entire month’s usage in three to four seconds.
That data has to be integrated with an embedded map database to build a 3D model of the driving environment.
Just think about a few – not all – of the ways visual computing comes into play:
- Creation of 3D models in real time based on incoming sensor data
- Tracking of stationary and moving objects such as other cars, traffic lights, pedestrians and even a soccer ball rolling across the road
- Identification of each of those objects and classification of whether they will affect the next decision the car needs to make
In their demo video, one of Google’s test drivers notes that the classification of a moving object like a bicycle rider can change even after it’s been made – if, for example, the rider extends an arm to indicate he or she is making a turn.
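To make that last step concrete, here is a minimal sketch of what “deciding whether a tracked object affects the car’s next move” could look like as a data-parallel job, with one GPU thread per object. Everything in it – the TrackedObject struct, the lane-corridor test, the constants – is a hypothetical illustration for this article, not Google’s or NVIDIA’s actual code.

```cuda
// Hypothetical sketch: one GPU thread per tracked object decides whether
// that object is relevant to the car's next maneuver. Not real self-driving
// code; the structure and thresholds are illustrative only.
#include <cstdio>
#include <cuda_runtime.h>

struct TrackedObject {
    float x, y;           // current position in meters (car at origin, +y ahead)
    float prev_x, prev_y; // position one frame earlier
};

__global__ void flagRelevantObjects(const TrackedObject* objs, int n,
                                    float dt, float horizon_s,
                                    float lane_half_width, int* relevant)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    // Per-object velocity estimate from two successive frames.
    float vx = (objs[i].x - objs[i].prev_x) / dt;
    float vy = (objs[i].y - objs[i].prev_y) / dt;

    // Project the object forward and check whether it is in, or will enter,
    // the corridor directly ahead of the car within the planning horizon.
    float fx = objs[i].x + vx * horizon_s;
    float fy = objs[i].y + vy * horizon_s;
    bool in_corridor_now  = fabsf(objs[i].x) < lane_half_width && objs[i].y > 0.f;
    bool in_corridor_soon = fabsf(fx) < lane_half_width && fy > 0.f;
    relevant[i] = (in_corridor_now || in_corridor_soon) ? 1 : 0;
}

int main()
{
    // Two toy objects: a parked car off to the side, and a cyclist
    // drifting toward our lane (as if signaling a turn).
    TrackedObject h_objs[2] = {
        { 6.0f, 20.0f, 6.0f, 20.0f },   // stationary, 6 m to the right
        { 3.0f, 15.0f, 3.1f, 14.9f },   // moving left, toward our lane
    };
    int h_relevant[2] = {0, 0};

    TrackedObject* d_objs; int* d_relevant;
    cudaMalloc(&d_objs, sizeof(h_objs));
    cudaMalloc(&d_relevant, sizeof(h_relevant));
    cudaMemcpy(d_objs, h_objs, sizeof(h_objs), cudaMemcpyHostToDevice);

    // 30 fps frames (dt = 0.033 s), 1-second planning horizon, 2 m half-lane.
    flagRelevantObjects<<<1, 64>>>(d_objs, 2, 0.033f, 1.0f, 2.0f, d_relevant);
    cudaMemcpy(h_relevant, d_relevant, sizeof(h_relevant), cudaMemcpyDeviceToHost);

    for (int i = 0; i < 2; ++i)
        printf("object %d: %s\n", i, h_relevant[i] ? "relevant" : "ignore");

    cudaFree(d_objs); cudaFree(d_relevant);
    return 0;
}
```

The point of the pattern is that whether the scene holds two objects or two hundred, each one is evaluated by its own thread at the same instant – exactly the kind of simultaneous bookkeeping Chris describes.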
GPUs are well suited to these challenges because they’re built to handle many tasks at once. Their parallel architecture makes them far more effective than CPUs at processing massive streams of incoming data. In fact, NVIDIA’s GPUs power some of the world’s fastest supercomputers, such as Titan at Oak Ridge National Laboratory. If a car is going to be doing the driving for me, I want it powered by a supercomputer.
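As a taste of what that parallel structure looks like in practice, here is a minimal sketch (again illustrative, not production code) of one of the simplest visual-computing jobs a car might run on every camera frame: converting it to grayscale, with one GPU thread per pixel.

```cuda
// Minimal sketch of the data-parallel pattern described above: every pixel
// of a camera frame is handled by its own GPU thread, so a frame is
// processed in a few parallel waves rather than one long sequential loop.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void rgbToGray(const unsigned char* rgb, unsigned char* gray,
                          int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    int idx = y * width + x;
    const unsigned char* p = rgb + 3 * idx;
    // Standard luminance weights for RGB -> grayscale.
    gray[idx] = (unsigned char)(0.299f * p[0] + 0.587f * p[1] + 0.114f * p[2]);
}

int main()
{
    const int W = 1920, H = 1080;           // one HD camera frame
    size_t rgbBytes = (size_t)W * H * 3, grayBytes = (size_t)W * H;

    unsigned char *d_rgb, *d_gray;
    cudaMalloc(&d_rgb, rgbBytes);
    cudaMalloc(&d_gray, grayBytes);
    cudaMemset(d_rgb, 128, rgbBytes);       // stand-in for real camera data

    // Launch enough 16x16 thread blocks to cover every pixel.
    dim3 block(16, 16);
    dim3 grid((W + 15) / 16, (H + 15) / 16);
    rgbToGray<<<grid, block>>>(d_rgb, d_gray, W, H);
    cudaDeviceSynchronize();

    printf("Converted a %dx%d frame (%d pixels) in parallel.\n", W, H, W * H);
    cudaFree(d_rgb); cudaFree(d_gray);
    return 0;
}
```

On a CPU, that conversion would be a loop over roughly two million pixels per frame; on a GPU, the frame is covered by thousands of threads running at once, which is why the architecture scales to the 1GB-per-second firehose described above.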
We’re putting the same technology we use for supercomputers into mobile processors for cars and other mobile devices.
NVIDIA has now brought that class of performance down to the personal level, with automotive applications like advanced driver assistance systems.
For automakers, NVIDIA offers the Jetson Pro development kit. For other embedded applications, the Jetson TK1 is now available to order.