Researchers at Stanford University are leveraging the power of artificial intelligence to help their autonomous prototypes learn from the past as they prepare for the future. They have designed a control system that drives a car using a blend of real-time data and data drawn from past driving experience, and they're putting it through its paces on California's Thunderhill race track.
Stanford, an early pioneer in self-driving technology, explained that the control systems in many of the autonomous prototypes zigzagging the world's roads rely on up-to-the-minute information to plan their next move. If a car detects a bend in the road, it knows it needs to turn to follow the pavement rather than drive straight off it. The school developed a neural network that powers what it calls a flexible, responsive control system. To use our previous example, the system tells the car how fast it can go around the bend by searching its vast data library for information about previous bends it has encountered.
“With the techniques available today, you often have to choose between data-driven methods and approaches grounded in fundamental physics. We think the path forward is to blend these approaches in order to harness their individual strengths. Physics can provide insight into structuring and validating neural network models that, in turn, can leverage massive amounts of data,” explained J. Christian Gerdes, a professor of mechanical engineering at Stanford.
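The paper itself isn't reproduced here, but the idea Gerdes describes, a physics model providing structure while a neural network learns corrections from logged driving data, can be sketched in a few lines. The snippet below is a hypothetical illustration, not Stanford's code: the kinematic bicycle model, the network size, and every variable name are assumptions made for the example.

```python
# Hedged sketch of a hybrid "physics + learned correction" vehicle model.
# A simple physics model gives a baseline prediction of how the car responds,
# and a small neural network, trained on data from past laps, learns a
# correction on top of it. Architecture and names are illustrative only.

import torch
import torch.nn as nn


class HybridDynamicsModel(nn.Module):
    """Physics baseline plus a learned residual correction."""

    def __init__(self, wheelbase=2.6):
        super().__init__()
        self.wheelbase = wheelbase  # metres, assumed value
        # Small network mapping (state, control) -> correction to the
        # physics prediction; sized arbitrarily for this sketch.
        self.residual = nn.Sequential(
            nn.Linear(5, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 3),
        )

    def physics_step(self, state, control, dt=0.02):
        # Kinematic bicycle model: state = [x, y, heading], control = [speed, steer].
        x, y, heading = state.unbind(-1)
        speed, steer = control.unbind(-1)
        x = x + speed * torch.cos(heading) * dt
        y = y + speed * torch.sin(heading) * dt
        heading = heading + speed / self.wheelbase * torch.tan(steer) * dt
        return torch.stack([x, y, heading], dim=-1)

    def forward(self, state, control, dt=0.02):
        baseline = self.physics_step(state, control, dt)
        correction = self.residual(torch.cat([state, control], dim=-1))
        return baseline + correction


def train(model, states, controls, next_states, epochs=100):
    # Fit only the correction network to logged driving data, so the
    # combined model matches what the car actually did on past laps.
    opt = torch.optim.Adam(model.residual.parameters(), lr=1e-3)
    for _ in range(epochs):
        pred = model(states, controls)
        loss = nn.functional.mse_loss(pred, next_states)
        opt.zero_grad()
        loss.backward()
        opt.step()
```

Training only the correction term leaves the physics model as a sensible fallback when the network has seen little relevant data, which echoes Gerdes' point about combining the strengths of both approaches.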
Stanford took two of its prototypes, Shelley and Niki, to Thunderhill to test the new system. Researchers sent Shelley, a previous-generation Audi TTS, around the track guided by existing technology. The car knew the track, and it was aware of key parameters that affect lap times, like the weather. It lapped Thunderhill about as quickly as a skilled amateur driver. The neural network allowed Niki, a Volkswagen GTI, to post approximately the same lap time as Shelley without knowing anything about the track it was driving on.
Watching driverless cars lap a track is awesome, but that's not why Stanford researchers are working on this project. Their neural network could help driverless cars navigate through blizzards, rainstorms, and other low-visibility, low-traction conditions, knocking down one of the biggest hurdles manufacturers face as they attempt to make autonomous technology mainstream. It could also help these vehicles make emergency maneuvers. The technology isn't ready for production yet, however.
Stanford explained that its neural network works best in situations it's familiar with. If it has never seen a roundabout before, it likely won't know what to do as it approaches one, whereas existing sensor-based technology would recognize the slab of concrete in the middle of the road and guide the car around it. Researchers are confident the network's power and accuracy will improve as it's exposed to a growing number of road types and conditions.
“We want our algorithms to be as good as the best skilled drivers — and, hopefully, better,” said Nathan Spielberg, a graduate student in mechanical engineering at Stanford and the lead author of the paper about the neural network.