Before it can do anything else, a self-driving car needs to figure out whether the vehicle in front of it is double parked. To do this, the car can use “contextual cues,” such as the appearance of hazard lights or the amount of time a vehicle has been stationary, according to a Cruise blog post. Self-driving cars can also recognize whether the vehicle in front is a type that tends to double park frequently, such as a delivery truck. Cruise’s cars rely on cameras, radar, and lidar to “see” what’s around them, and on machine learning to synthesize that information into a conclusion. Human beings do this all the time, but it’s something autonomous cars must be painstakingly taught.
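Cruise hasn’t published how its cars weigh these cues against each other; in practice it relies on learned models over sensor data. But a toy rule-based version, with invented cue names and weights standing in for those models, gives a feel for the kind of evidence-combining involved:

```python
# Hypothetical sketch of combining contextual cues into a "double parked"
# confidence score. Cruise uses learned models over camera/radar/lidar
# features; every cue name and weight here is invented for illustration.

from dataclasses import dataclass

@dataclass
class StoppedVehicle:
    hazard_lights_on: bool
    seconds_stationary: float
    vehicle_type: str           # e.g. "delivery_truck", "sedan"
    traffic_light_is_red: bool  # being stationary at a red light is expected

# Vehicle types that tend to double park frequently (the blog post's example).
FREQUENT_DOUBLE_PARKERS = {"delivery_truck", "taxi"}

def double_parked_score(v: StoppedVehicle) -> float:
    """Combine cues into a 0-1 confidence that the vehicle is double parked."""
    score = 0.0
    if v.hazard_lights_on:
        score += 0.4
    if v.seconds_stationary > 30:
        score += 0.3
    if v.vehicle_type in FREQUENT_DOUBLE_PARKERS:
        score += 0.2
    if v.traffic_light_is_red:
        score -= 0.5  # a red light explains the stop; discount the other cues
    return max(0.0, min(1.0, score))

truck = StoppedVehicle(True, 45.0, "delivery_truck", False)
print(f"double-parked confidence: {double_parked_score(truck):.2f}")  # 0.90
```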
A self-driving car can’t just sit behind a double-parked vehicle indefinitely. A human driver would simply look for a clear path and drive around the stationary vehicle, but a self-driving car’s control software must break that maneuver down into its discrete parts. Algorithms consider everything from the potential actions of other road users to how quickly the car will respond to control inputs. Cruise uses what it calls a “model predictive control” algorithm to try to chart how the situation around the car may change, and how the car is expected to react to a given command.
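The blog post doesn’t include implementation details, but the core loop of model predictive control, simulating candidate command sequences over a short horizon, scoring each predicted outcome, and executing only the first command before re-planning, can be sketched in a few lines. Everything here (the one-dimensional motion model, the horizon, and the cost weights) is a simplified, invented stand-in:

```python
# Minimal, hypothetical sketch of model predictive control (MPC): at every
# step, roll a simple motion model forward over a short horizon for each
# candidate command sequence, score the predicted trajectory, and apply only
# the first command of the best sequence before re-planning. All constants
# are invented for illustration.

import itertools

DT = 0.5             # seconds per prediction step
HORIZON = 6          # steps to look ahead (3 s)
TARGET_SPEED = 5.0   # m/s the car would like to travel at
OBSTACLE_POS = 30.0  # position of the double-parked vehicle (m)
SAFE_GAP = 8.0       # desired buffer behind it (m)
ACCEL_CHOICES = [-3.0, -1.0, 0.0, 1.0]  # candidate accelerations (m/s^2)

def rollout_cost(pos, speed, accels):
    """Simulate one candidate command sequence and return its total cost."""
    cost = 0.0
    for a in accels:
        speed = max(0.0, speed + a * DT)
        pos += speed * DT
        cost += (speed - TARGET_SPEED) ** 2        # deviating from target speed
        gap = OBSTACLE_POS - pos
        if gap < SAFE_GAP:                         # closing in on the obstacle
            cost += 100.0 * (SAFE_GAP - gap) ** 2
    return cost

def mpc_step(pos, speed):
    """Pick the first acceleration of the lowest-cost rollout."""
    best = min(itertools.product(ACCEL_CHOICES, repeat=HORIZON),
               key=lambda seq: rollout_cost(pos, speed, seq))
    return best[0]

# Re-plan every step: only the first command of each plan is ever executed.
pos, speed = 0.0, 5.0
for _ in range(10):
    a = mpc_step(pos, speed)
    speed = max(0.0, speed + a * DT)
    pos += speed * DT
    print(f"pos={pos:5.1f} m  speed={speed:4.1f} m/s  chosen accel={a:+.1f}")
```

The defining property is the re-planning: the car commits to only the first step of each plan, so its predictions about how the scene is changing, including what other road users might do, get refreshed on every cycle.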
Cruise does most of its testing in San Francisco, providing a more challenging environment than some other popular testing locations. That exposes Cruise’s test cars to more difficult scenarios, giving engineers more opportunities to improve the autonomous-driving tech. But it also shows just how complicated it is to get a self-driving car to respond to a scenario most human drivers can easily figure out. Cruise parent GM hopes to put large fleets of autonomous cars on the road within the next few years, but getting the tech to work everywhere may take much longer.