
Finding the ‘blind spots’ in autonomous vehicle artificial intelligence

Autonomous vehicles are becoming increasingly sophisticated, but concerns about their safety persist. Creating an autonomous system that drives safely in laboratory conditions is one thing; being confident in that system’s ability to navigate the real world is quite another.

Researchers from the Massachusetts Institute of Technology (MIT) have been working on just this problem, examining the gap between what autonomous systems learn in training and what they encounter in the real world. They have created a model that identifies situations in which what an autonomous system has learned does not match the events that actually occur on the road.


An example the researchers give is understanding the difference between a large white car and an ambulance. If an autonomous car has not been trained to differentiate between these two types of vehicle, or does not have the sensors to do so, then it may not know that it should slow down and pull over when an ambulance approaches. The researchers describe these kinds of scenarios as “blind spots” in training.

A model by MIT and Microsoft researchers identifies instances where autonomous cars have “learned” from training examples that don’t match what’s actually happening on the road, and can be used to flag which learned actions could cause real-world errors. Image: MIT News

To identify these blind spots, the researchers used human input to oversee an artificial intelligence (A.I.) as it went through simulation training, and to give feedback on any mistakes the system made. The human feedback can then be compared with the A.I.’s training data to identify situations where the A.I. needs more or better information to make safe and correct choices.
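To make the idea concrete, here is a minimal, purely illustrative Python sketch of how that kind of human feedback might be aggregated into a list of candidate blind spots. The state and action names, the feedback format, and the disagreement threshold are all invented for this example and are not taken from the MIT/Microsoft model.

```python
# Illustrative sketch only: aggregate human "was that safe?" feedback collected
# while watching an agent in simulation, and flag states where the human
# frequently disagreed with the learned policy. All names are hypothetical.
from collections import defaultdict

def find_blind_spots(feedback, threshold=0.2):
    """feedback: iterable of (state, policy_action, human_says_ok) tuples.
    Returns states where the human disagreed with the policy's action
    more often than `threshold` (as a fraction of observations)."""
    counts = defaultdict(lambda: [0, 0])  # state -> [disagreements, total]
    for state, _action, human_says_ok in feedback:
        counts[state][1] += 1
        if not human_says_ok:
            counts[state][0] += 1
    return {s: d / t for s, (d, t) in counts.items() if d / t > threshold}

# Toy example: the policy treats an approaching ambulance like any large white vehicle.
feedback = [
    ("large_white_vehicle_ahead", "maintain_speed", True),
    ("ambulance_approaching", "maintain_speed", False),
    ("ambulance_approaching", "maintain_speed", False),
    ("ambulance_approaching", "slow_and_pull_over", True),
]
print(find_blind_spots(feedback))  # {'ambulance_approaching': 0.666...}
```

In this toy run, the ambulance state is flagged because the human rejected the policy’s behavior in two of three observations, which is the kind of mismatch the researchers’ approach is meant to surface.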

“The model helps autonomous systems better know what they don’t know,” author of the paper Ramya Ramakrishnan, a graduate student in the Computer Science and Artificial Intelligence Laboratory, said in a statement. “Many times, when these systems are deployed, their trained simulations don’t match the real-world setting [and] they could make mistakes, such as getting into accidents. The idea is to use humans to bridge that gap between simulation and the real world, in a safe way, so we can reduce some of those errors.”

This can also work in real time, with a person in the driver’s seat of an autonomous vehicle. As long as the A.I. is maneuvering the car correctly, the person does nothing, but if they spot a mistake they can take the wheel, signaling to the system that it missed something. This teaches the A.I. which situations involve a conflict between the behavior it has learned and what a human driver deems safe and responsible.
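A second hypothetical sketch shows how such real-time takeovers could be logged as conflict signals. The `policy` and `human_override` callables are stand-ins assumed for illustration and are not part of the researchers’ system.

```python
# Illustrative sketch only: record every human takeover as a conflict between
# the learned policy's planned action and what the human considered safe.
def drive_with_oversight(states, policy, human_override):
    """`human_override(state, planned)` returns None when the person leaves
    the car alone, or a corrective action when they take the wheel."""
    conflicts = []
    for state in states:
        planned = policy(state)
        correction = human_override(state, planned)
        if correction is None:
            action = planned           # A.I. keeps driving
        else:
            action = correction        # human takes over
            conflicts.append({"state": state,
                              "policy_action": planned,
                              "human_action": correction})
        # ...apply `action` to the vehicle here...
    return conflicts
```

The returned conflict records could then feed a routine like the aggregation sketch above, so the system learns which situations it currently mishandles.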

Currently, the system has been tested only in virtual video game environments, so the next step is to take it on the road and test it in real vehicles.

Georgina Torbet
Georgina has been the space writer at Digital Trends for six years, covering human space exploration, planetary…
Ford and VW close down Argo AI autonomous car unit
An Argo AI autonomous car on the road.

Autonomous-car specialist Argo AI is closing down after Ford and Volkswagen, Argo's main backers, ended support for the Pittsburgh-based company.

First reported by TechCrunch and later confirmed by the two auto giants, the shutdown means some of Argo’s 2,000 workers will transfer to Ford and Volkswagen, while those who don’t receive an offer will be given a severance package. Argo’s technology is also set to end up in the possession of the two companies, though at this stage it’s not clear how it might be shared.

A weird thing just happened with a fleet of autonomous cars
A passenger getting into a Cruise robotaxi.

In what must be one of the weirder stories linked to the development of autonomous vehicles, a fleet of Cruise self-driving cars gathered together at an intersection in San Francisco earlier this week, parked up, and blocked traffic for several hours. And to be clear: No, they weren't supposed to do that.

Some observers may have thought they were witnessing the start of the robot uprising, but the real reason for the mishap was more prosaic: An issue with the platform's software.

Tesla recalls 130,000 U.S. vehicles over touchscreen safety issue

Tesla is recalling 129,960 of its electric cars in the U.S. over an issue with the touchscreen that could result in the device overheating or losing its image.

This is considered a safety issue as the display provides a feed from the rearview camera, as well as settings linked to the vehicle’s windshield defrosters. It also shows if the vehicle is in drive, neutral, or reverse. Tesla said it isn't aware of any crashes, injuries, or deaths linked to the issue.
