They did it for a good reason, too — and it is not because they have a whole lot of old quadcopters to get rid of before the start of the next academic year.
“We are interested in the problem of drone navigation: How does a UAV learn to avoid obstacles and learn to navigate,” Abhinav Gupta, an assistant professor in CMU’s Robotics Institute, told Digital Trends. “Unlike most other problems where data is the answer to many hard questions, what makes this problem hard is [a] scarcity of relevant data. We can use human experts and ask them to fly drones, but such data is small in size and biased towards success since the number of crashes is very low.”
Instead of using a computer simulation to solve the problem, Gupta and colleagues set out to build a framework where the goal of the drone is to crash. In their study, the drones were instructed to fly slowly until colliding with something, after which they would return to the starting position and set off in a new direction. By repeating this process and feeding the crash data into a convolutional neural network, the team trained a drone to fly autonomously with far greater success — even in narrow, cluttered environments.
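The collection procedure described above can be sketched as a simple loop. This is an illustrative outline only, not the team's actual code: the helper functions `fly_until_crash` and `return_home` are hypothetical stand-ins for the drone's flight controller.

```python
import random

def collect_crash_data(fly_until_crash, return_home, num_trajectories):
    """Hypothetical self-supervised collection loop: fly slowly in a
    random direction until a collision, log the frames leading up to
    impact, return to the start, and repeat."""
    dataset = []
    for _ in range(num_trajectories):
        heading = random.uniform(0, 360)       # pick a fresh direction
        frames = fly_until_crash(heading)      # frames recorded before the crash
        dataset.extend((frame, heading) for frame in frames)
        return_home()                          # reset for the next run
    return dataset
```

Each logged frame is implicitly labeled by its proximity to a crash, which is what lets the convolutional neural network learn which views are dangerous without any human annotation.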
The algorithm controlling the drone splits the camera's view into left and right halves and steers toward whichever half looks less likely to result in a crash. The results were surprisingly effective.
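That steering rule can be sketched in a few lines. Here `crash_score` is a hypothetical stand-in for the trained network, and the toy scorer below is purely illustrative (it just treats brighter pixels as more cluttered), not how the real CNN works:

```python
import numpy as np

def choose_direction(frame, crash_score):
    """Split the camera frame into left and right halves and steer
    toward whichever half the model scores as less likely to crash."""
    height, width = frame.shape[:2]
    left, right = frame[:, : width // 2], frame[:, width // 2 :]
    return "left" if crash_score(left) < crash_score(right) else "right"

def toy_score(half):
    # Stand-in for the CNN: mean brightness as a fake "collision likelihood".
    return float(half.mean())

frame = np.zeros((4, 8))
frame[:, 4:] = 1.0  # simulated "obstacle" filling the right half
print(choose_direction(frame, toy_score))  # -> left
```

In the real system the score for each half comes from the crash-trained network rather than a pixel statistic, but the decision logic is the same comparison.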
The drone still runs into problems, particularly with glass doors and plain walls, but it is a whole lot better than it was before its training. Should we wind up living in a world where thousands of drones are constantly buzzing around, carrying out a range of tasks in complex real-world environments, research like this is going to be vital to developing better autonomous flying machines.
In the meantime, researchers get to exercise their destructive whims by making robots crash for the sake of “science.”