titiklabingapat
Member
At first it doesn't seem quite as impressive as inventing an AI that can learn on the fly; it sounds like a glorified virtual track overlaid on top of the current road infrastructure. And mapping out the entire US road network, the same way they did for Google Maps and Street View, seems daunting.
But here's the thing: they only have to do it once, and they have already mapped almost the entire US road network, plus some other countries. The real brilliance of the way Google is doing it is that the cars learn whatever new things they encounter as they pass through this virtual overlay, the same way Google Maps currently updates real-time traffic information on your phone, basically giving us real-time feedback for the whole network. The approach has big implications for AI, and partly explains why they have acquired so many robotics companies in the last 12 months. Exciting times!
Anyway, have a read. It's a great write-up from The Atlantic:
http://www.theatlantic.com/technolo...-makes-googles-self-driving-cars-work/370871/
Google's self-driving cars can tour you around the streets of Mountain View, California.
I know this. I rode in one this week. I saw the car's human operator take his hands from the wheel and the computer assume control. "Autodriving," said a woman's voice, and just like that, the car was operating autonomously, changing lanes, obeying traffic lights, monitoring cyclists and pedestrians, making lefts. Even the way the car accelerated out of turns felt right.
It works so well that it is, as The New York Times' John Markoff put it, "boring." The implications, however, are breathtaking.
Perfect, or near-perfect, robotic drivers could cut traffic accidents, expand the carrying capacity of the nation's road infrastructure, and free up commuters to stare at their phones, presumably using Google's many services.
But there's a catch.
Today, you could not take a Google car, set it down in Akron or Orlando or Oakland and expect it to perform as well as it does in Silicon Valley.
Here's why: Google has created a virtual track out of Mountain View.
The key to Google's success has been that these cars aren't forced to process an entire scene from scratch. Instead, their teams travel and map each road that the car will travel. And these are not any old maps. They are not even the rich, road-logic-filled maps of consumer-grade Google Maps.
They're probably best thought of as ultra-precise digitizations of the physical world, all the way down to tiny details like the position and height of every single curb. A normal digital map would show a road intersection; these maps would have a precision measured in inches.
But the "map" goes beyond what any of us know as a map. "Really, [our maps] are any geographic information that we can tell the car in advance to make its job easier," explained Andrew Chatham, the Google self-driving car team's mapping lead.
"We tell it how high the traffic signals are off the ground, the exact position of the curbs, so the car knows where not to drive," he said. "We'd also include information that you can't even see like implied speed limits."
Google has created a virtual world out of the streets their engineers have driven. They pre-load the data for the route into the car's memory before it sets off, so that as it drives, the software knows what to expect.
"Rather than having to figure out what the world looks like and what it means from scratch every time we turn on the software, we tell it what the world is expected to look like when it is empty," Chatham continued. "And then the job of the software is to figure out how the world is different from that expectation. This makes the problem a lot simpler."
While it might make the in-car problem simpler, it vastly increases the amount of work required for the task. A whole virtual infrastructure needs to be built on top of the road network!
The more you think about it, the more the goddamn Googleyness of the thing stands out: the ambition, the scale, and the type of solution they've come up with to this very hard problem. What was a nearly intractable "machine vision" problem, one that would require close to human-level comprehension of streets, has become a much, much easier machine vision problem thanks to a massive, unprecedented, unthinkable amount of data collection.
Last fall, Anthony Levandowski, another Googler who works on self-driving cars, went to Nissan for a presentation that immediately devolved into a Q&A with the car company's Silicon Valley team. The Nissan people kept hectoring Levandowski about vehicle-to-vehicle communication, which the company's engineers (and many in the automotive industry) seemed to see as a significant part of the self-driving car solution.
He parried all of their queries with a speed and confidence just short of condescension. "Can we see more if we can use another vehicle's sensors to see ahead?" Levandowski rephrased one person's question. "We want to make sure that what we need to drive is present in everyone's vehicle and sharing information between them could happen, but it's not a priority."
What the car company's people couldn't or didn't want to understand was that Google does believe in vehicle-to-vehicle communication, but serially over time, not simultaneously in real-time.
After all, every vehicle's data is being incorporated into the maps. That information "helps them cheat, effectively," Levandowski said. With the map data (or, as we might call it, experience), all the cars need is their precise position on a super-accurate map, and they can save all that parsing and computation (and vehicle-to-vehicle communication).
There's a fascinating parallel between what Google's self-driving cars are doing and what the Andreessen Horowitz-backed startup Anki is doing with its toy car racing game. When you buy Anki Drive, they sell you a track on which the cars race, which has positioning data embedded. The track is the physical manifestation of a virtual racing map.
The Google cars are not dumb machines. They have their own set of sensors: radar, a laser spinning atop the Lexus SUV, and a suite of cameras. And they have some processing on board to figure out what routes to take and avoid collisions.
This is a hard problem, but Google is doing the computation with what Levandowski described at Nissan as a "desktop" level system. (The big computation and data processing are done by the teams back at Google's server farms.)
What that on-board computer does first is integrate the sensor data. It takes the data from the laser and the cameras and integrates them into a view of the world, which it then uses to orient itself (with the rough guidance of GPS) in virtual Mountain View. "We can align what we're seeing to what's stored on the map. That allows us to very accurately (within a few centimeters) position ourselves on the map," said Dmitri Dolgov, the self-driving car team's software lead. "Once we know where we are, all that wonderful information encoded in our maps about the geometry and semantics of the roads becomes available to the car."
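As a rough illustration of that localization step (again a sketch, not the team's method; the brute-force search and all names here are assumptions), the idea is to take the GPS-seeded pose and refine it by sliding the live scan over the stored map until the overlap is best:

```python
import numpy as np

def localize(scan: np.ndarray, prior_map: np.ndarray, search: int = 3) -> tuple:
    """Correlation-style scan matching: try small (dx, dy) shifts of the
    live scan against the prior map and keep the best-overlapping one.

    GPS seeds the search window; matching against the map refines the
    pose. Real systems also fuse IMU/odometry and handle rotation.
    """
    best_score, best_offset = -1, (0, 0)
    for dx in range(-search, search + 1):
        for dy in range(-search, search + 1):
            shifted = np.roll(np.roll(scan, dx, axis=0), dy, axis=1)
            score = int(np.sum(shifted * prior_map))  # occupied-cell overlap
            if score > best_score:
                best_score, best_offset = score, (dx, dy)
    return best_offset  # correction to apply to the GPS-seeded pose
```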
[Image: The lasers and cameras of a Google self-driving car.]
Once they know where they are in space, the cars can do the work of watching for and modeling the behavior of dynamic objects like other cars, bicycles, and pedestrians.
Here, we see another Google approach. Dolgov's team uses machine learning algorithms to create models of other people on the road. Every single mile of driving is logged, and that data is fed into computers that classify how different types of objects act in all these different situations. While some driver behavior could be hardcoded in ("When the lights turn green, cars go"), they don't exclusively program that logic; they learn it from actual driver behavior.
Just as we know that a car pulling up behind a stopped garbage truck is probably going to change lanes to get around it, 700,000 miles of driving data have helped the Google algorithm understand that a car in that situation is likely to do exactly that.
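A hedged sketch of what learning that kind of behavior model might look like (the features, data, and use of scikit-learn are all invented for illustration; the article doesn't say what Google actually uses): log each encounter with a stopped vehicle as a feature vector, label whether the driver behind it changed lanes, and fit a classifier.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented toy features from "logged miles": [gap_to_stopped_vehicle_m,
# own_speed_mps]; label 1 = the observed driver changed lanes to pass.
X = np.array([[5.0, 8.0], [4.0, 9.0], [6.0, 7.5],     # close and moving: went around
              [40.0, 2.0], [35.0, 1.5], [50.0, 3.0]]) # far or crawling: stayed put
y = np.array([1, 1, 1, 0, 0, 0])

model = LogisticRegression().fit(X, y)

# A car doing 8 m/s closing on a garbage truck 6 m ahead:
print(model.predict_proba([[6.0, 8.0]])[0, 1])  # estimated P(lane change)
```

With 700,000 miles of logs, the same idea scales to many object types and situations instead of six hand-typed rows.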
Most driving situations are not hard to comprehend, but what about the tough ones or the unexpected ones? In Google's current process, a human driver would take control, and (so far) safely guide the car. But fascinatingly, in the circumstances when a human driver has to take over, what the Google car would have done is also recorded, so that engineers can test what would have happened in extreme circumstances without endangering the public.
So, each Google car is carrying around both the literal products of previous drives (the imagery and data captured from crawling the physical world) as well as the computed outputs of those drives, which are the models for how other drivers might behave.