We already know that track stands confuse autonomous cars, but now we're being given more reasons to put quotation marks around the term "smart car." According to Jonathan Petit, principal scientist at the software company Security Innovation, a well-intentioned safety feature in self-driving cars can become their Achilles' heel when faced with malicious hackers armed with nothing more than a low-cost laser and a Raspberry Pi. By combining a laser with a pulse generator (easily built from a cheap computer), Petit claims he can create phantom objects, like pedestrians, other cars, or general obstacles in the road, that could either slow down or entirely paralyze a car trying to avoid hitting objects in its path. And the total cost of this potentially dangerous prank? Just $60.
“I can take echoes of a fake car and put them at any location I want,” Petit told IEEE Spectrum. “And I can do the same with a pedestrian or a wall.” This means that an autonomous vehicle could be tricked into thinking there’s something in its path to be avoided, or, worse yet, inundated by so many false signals from every direction that it is forced into a standstill. Petit continued, “I can spoof thousands of objects and basically carry out a denial of service attack on the tracking system so it’s not able to track real objects.”
Petit’s findings, which are due to be presented in November at the Black Hat Europe security conference, focus on sensors as the most vulnerable parts of these self-driving vehicles. “This is a key point, where the input starts,” he said. “If a self-driving car has poor inputs, it will make poor driving decisions.”
In conducting his experiment, Petit first recorded the unencoded, unencrypted pulses from a commercial IBEO Lux lidar unit, a laser-ranging sensor that serves as the “eyes” of many autonomous cars. He then simply replayed them, creating the illusion of a vehicle, a person, or something else entirely. “The only tricky part was to be synchronized, to fire the signal back at the lidar at the right time,” he told IEEE Spectrum. But after that, everything was easy, as “the lidar thought that there was clearly an object there.”
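Petit hasn’t published his tooling, but the synchronization he describes comes down to simple timing arithmetic. The sketch below is a hypothetical illustration, not his implementation: a lidar converts an echo’s round-trip time into range as d = c·t/2, so an attacker who fires a recorded pulse back Δt after the sensor’s outgoing pulse makes it report an object at c·Δt/2 meters.

```python
# Hypothetical sketch of the timing math behind a lidar spoofing relay.
# Not Petit's code: it only shows why "being synchronized" is the hard part.

C = 299_792_458.0  # speed of light, m/s


def echo_delay(phantom_distance_m: float) -> float:
    """Delay (seconds) after the lidar's outgoing pulse at which a fake
    echo must arrive for the sensor to report an object at this range.
    The lidar computes range as d = c * t / 2, so t = 2 * d / c."""
    return 2.0 * phantom_distance_m / C


def spoofed_range(reply_delay_s: float) -> float:
    """Range the lidar will report for an echo arriving after this delay."""
    return C * reply_delay_s / 2.0


if __name__ == "__main__":
    for d in (5.0, 20.0, 100.0):
        t = echo_delay(d)
        print(f"phantom object at {d:6.1f} m -> fire echo after {t * 1e9:8.1f} ns")
```

A phantom car 20 meters away requires a reply roughly 133 nanoseconds after the outgoing pulse, which is why the attack needs a dedicated pulse generator rather than ordinary software timing.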
More concerning still is the range from which Petit’s attacks could theoretically work: up to 100 meters, and from effectively any direction. While Petit has only tested his laser-powered, car-disabling setup on one lidar model, this certainly seems like a gaping security hole that should be addressed sooner rather than later. Still, Petit says, “The point of my work is not to say that IBEO has a poor product. I don’t think any of the lidar manufacturers have thought about this or tried this.”
Ultimately, Petit hopes his work will inspire not just concern but the improvements needed to make self-driving cars safer as they begin to enter the roadways. “There are ways to solve it,” he said optimistically. “A strong system that does misbehavior detection could cross-check with other data and filter out those that aren’t plausible. But I don’t think carmakers have done it yet. This might be a good wake-up call for them.”
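Petit doesn’t spell out how such a filter would work. As one hypothetical sketch (the names, thresholds, and single-sensor design here are assumptions, not his proposal), a tracker could discard detections whose motion between frames is physically implausible, since spoofed echoes can appear anywhere the attacker chooses:

```python
# Hypothetical sketch of the "misbehavior detection" Petit describes:
# reject lidar detections that are physically implausible between frames.
# Thresholds and structure are illustrative assumptions, not a real system.

from dataclasses import dataclass

MAX_SPEED_MPS = 70.0  # assumed ceiling for real road objects (~250 km/h)
FRAME_DT_S = 0.1      # assumed lidar frame period (10 Hz)


@dataclass
class Detection:
    track_id: int
    x: float  # meters, vehicle frame
    y: float


def plausible(prev: Detection, curr: Detection) -> bool:
    """A tracked object can't move farther between frames than the
    fastest real object could travel; spoofed echoes can 'teleport'."""
    dist = ((curr.x - prev.x) ** 2 + (curr.y - prev.y) ** 2) ** 0.5
    return dist <= MAX_SPEED_MPS * FRAME_DT_S


def filter_frame(prev_frame: dict[int, Detection],
                 frame: list[Detection]) -> list[Detection]:
    """Keep detections consistent with the previous frame; drop the rest."""
    kept = []
    for det in frame:
        prev = prev_frame.get(det.track_id)
        if prev is None or plausible(prev, det):
            kept.append(det)  # new object, or physically consistent motion
        # else: dropped as implausible -- a candidate spoofed echo
    return kept
```

A production system would go further, cross-checking lidar returns against radar and camera data rather than relying on motion consistency alone, which is the kind of data fusion Petit’s quote points toward.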