Finding the ‘blind spots’ in autonomous vehicle artificial intelligence

Autonomous vehicles are becoming increasingly sophisticated, but concerns about the safety of such systems still abound. Creating an autonomous system that drives safely under laboratory conditions is one thing; being confident in its ability to navigate the real world is quite another.

Researchers from the Massachusetts Institute of Technology (MIT) have been working on just this problem, examining the differences between how autonomous systems learn in training and the issues that arise in the real world. They have created a model of situations in which what an autonomous system has learned does not match the actual events that occur on the road.

An example the researchers give is the difference between a large white car and an ambulance. If an autonomous car has not been trained on, or does not have the sensors to differentiate between, these two types of vehicle, then it may not know that it should slow down and pull over when an ambulance approaches. The researchers describe these kinds of scenarios as “blind spots” in training.

A model by MIT and Microsoft researchers identifies instances where autonomous cars have “learned” from training examples that don’t match what’s actually happening on the road, flagging which learned actions could cause real-world errors. Image: MIT News

To identify these blind spots, the researchers used human input to oversee an artificial intelligence (A.I.) as it went through simulation training, and to give feedback on any mistakes the system made. That feedback can then be compared with the A.I.’s training data to identify any situations where the system needs more or better information to make safe, correct choices.
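
To make that comparison step concrete, here is a minimal Python sketch of how such feedback might be aggregated. The state features, action names, and error-rate threshold are hypothetical illustrations, not the researchers’ actual implementation.

from collections import defaultdict

# Hypothetical sketch: group human feedback by situation and flag
# situations where the agent's learned action is frequently judged
# unsafe. Feature and action labels below are invented for illustration.

ACCEPTABLE, ERROR = "acceptable", "error"

def find_blind_spots(feedback_log, min_error_rate=0.5):
    """feedback_log holds (state_features, agent_action, label) tuples."""
    labels_by_situation = defaultdict(list)
    for state_features, agent_action, label in feedback_log:
        labels_by_situation[(state_features, agent_action)].append(label)

    blind_spots = []
    for (state_features, agent_action), labels in labels_by_situation.items():
        error_rate = labels.count(ERROR) / len(labels)
        if error_rate >= min_error_rate:
            blind_spots.append((state_features, agent_action, error_rate))
    return blind_spots

# Example: the agent treats an approaching ambulance like any large white car.
log = [
    (("large_white_vehicle", "siren_on"), "maintain_speed", ERROR),
    (("large_white_vehicle", "siren_on"), "maintain_speed", ERROR),
    (("large_white_vehicle", "siren_off"), "maintain_speed", ACCEPTABLE),
]
print(find_blind_spots(log))
# -> [(('large_white_vehicle', 'siren_on'), 'maintain_speed', 1.0)]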

“The model helps autonomous systems better know what they don’t know,” Ramya Ramakrishnan, author of the paper and a graduate student in the Computer Science and Artificial Intelligence Laboratory, said in a statement. “Many times, when these systems are deployed, their trained simulations don’t match the real-world setting [and] they could make mistakes, such as getting into accidents. The idea is to use humans to bridge that gap between simulation and the real world, in a safe way, so we can reduce some of those errors.”

This can also work in real time, with a person in the driver’s seat of an autonomous vehicle. As long as the A.I. is maneuvering the car correctly, the person does nothing; if they spot a mistake, they can take the wheel, indicating to the system that it missed something. This teaches the A.I. which situations involve a conflict between how it expects to behave and what a human driver deems safe and responsible driving.
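
A compressed Python sketch of that takeover signal follows: whenever the recorded human action differs from the A.I.’s, the timestep is logged as a candidate blind spot. The state and action labels are assumptions made for illustration, not the system’s real interface.

def log_takeovers(timesteps):
    """timesteps holds (state, ai_action, human_action) tuples, where a
    human_action of None means the person left the A.I. in control."""
    blind_spots = []
    for state, ai_action, human_action in timesteps:
        if human_action is not None and human_action != ai_action:
            # The human grabbing the wheel marks a situation the A.I.
            # mishandled: record it for later retraining.
            blind_spots.append((state, ai_action, human_action))
    return blind_spots

trip = [
    ("clear_road", "maintain_speed", None),  # human does nothing
    ("ambulance_approaching", "maintain_speed", "pull_over"),  # human takes the wheel
]
print(log_takeovers(trip))
# -> [('ambulance_approaching', 'maintain_speed', 'pull_over')]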

Currently, the system has only been tested in virtual video game environments, so the next step is to take it on the road and test it in real vehicles.
