What happens when an immovable object meets an unstoppable force? It’s an age-old conundrum, and Google is facing a modern version of it: when two artificial intelligence systems are programmed to complete conflicting tasks, how do you stop them from fighting over it? To that end, Google is using its DeepMind subsidiary to figure out how to get AIs to play nicely together.
DeepMind ran experiments on AI “social dilemmas” and published the results in a new report, The Verge reported. The idea was to see how AIs interact with one another when their tasks might interfere with those of another AI. Would they push through and achieve their goals regardless, or would they need specialized programming to make them cooperate?
Cooperation is the key here. Google’s experiment might seem silly (or at the very least fun to watch), but as AIs become smarter and take control of more facets of our lives and societal structure, we need them to work together. It’s no good if the AI powering your car decides it is more important than the AI controlling the traffic lights.
To figure out how AIs might function in these environments, and to understand how to improve the rate of cooperation rather than antagonistic selfishness, Google ran some AIs through a couple of games.
In “gathering,” the AIs earn points for collecting apples (the green squares), but each also has the ability to temporarily freeze its opponent with a beam. When apples were plentiful, the AIs gathered side by side with little interference, but when apples were scarce, they zapped each other much more often.
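To make the incentive concrete, here is a minimal Python sketch of the gathering set-up as described above. The point value, freeze duration, and scarcity threshold are illustrative assumptions, not DeepMind’s actual parameters.

```python
# A toy sketch of the "gathering" incentives, not DeepMind's actual code;
# the values below are illustrative assumptions.

APPLE_REWARD = 1       # only collecting apples scores points (assumed value)
FREEZE_DURATION = 20   # steps a zapped agent is removed for (assumed value)

def gathering_reward(collected_apple: bool) -> int:
    """Zapping a rival earns nothing directly; only apples pay."""
    return APPLE_REWARD if collected_apple else 0

def zap_is_worthwhile(apples_remaining: int, threshold: int = 3) -> bool:
    """Heuristic mirroring the learned behaviour: when apples are scarce,
    removing the rival for FREEZE_DURATION steps beats racing it for the
    few apples left; when apples are plentiful, zapping just wastes time."""
    return apples_remaining < threshold

print(zap_is_worthwhile(10))  # False -> peaceful gathering
print(zap_is_worthwhile(2))   # True  -> scarcity breeds aggression
```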
In the “wolfpack” game, two AIs must work together to corral a third. Here, cooperation was much more apparent, both because the agents shared a common goal and because points for a capture were shared, rather than awarded for selfish actions.
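The wolfpack incentive flips the logic around: every wolf close enough to the capture shares in the reward, so teamwork pays. Again, a toy sketch with assumed values and names rather than DeepMind’s implementation:

```python
# A toy sketch of the "wolfpack" shared reward; the reward value and
# capture radius are illustrative assumptions.

CAPTURE_REWARD = 10   # points for each wolf in on the capture (assumed)
CAPTURE_RADIUS = 2    # grid distance counting as "nearby" (assumed)

def wolfpack_rewards(wolf_positions, prey_position, caught: bool):
    """Every wolf within CAPTURE_RADIUS of the prey at capture time scores,
    so hunting together pays better than lone-wolfing it."""
    if not caught:
        return [0] * len(wolf_positions)

    def manhattan(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    return [CAPTURE_REWARD if manhattan(w, prey_position) <= CAPTURE_RADIUS
            else 0
            for w in wolf_positions]

print(wolfpack_rewards([(1, 1), (2, 2)], (1, 2), caught=True))  # [10, 10]
print(wolfpack_rewards([(1, 1), (9, 9)], (1, 2), caught=True))  # [10, 0]
```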
This might all seem obvious to us, because it is similar to how the human brain responds to the same incentives. Seeing AIs make similar choices, though, gives us a much better understanding of how they might react to conflict in the future. That, in turn, makes it easier for us to program around it.
It’s largely a case of encouraging cooperation through programming and rewarding selfish behaviour less, perhaps by programming AIs to believe they are all part of the same system, working toward a common goal. That sounds an awful lot like some sort of Super Matrix.
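In reinforcement-learning terms, one simple way to reward selfish behaviour less is to blend each agent’s own score with the team’s average. The sketch below illustrates that idea; the name shaped_reward and the team_weight parameter are assumptions for illustration, not anything from DeepMind’s paper.

```python
# A hedged sketch of reward shaping toward cooperation; names and the
# default mixing weight are illustrative assumptions.

def shaped_reward(own_reward: float, all_rewards: list[float],
                  team_weight: float = 0.5) -> float:
    """Interpolate between pure self-interest (team_weight=0) and a fully
    shared, common-goal reward (team_weight=1)."""
    team_average = sum(all_rewards) / len(all_rewards)
    return (1 - team_weight) * own_reward + team_weight * team_average

# A selfish move that scores 1 for me but costs the other agent 1 looks
# less attractive once the team's average is mixed in:
print(shaped_reward(own_reward=1.0, all_rewards=[1.0, -1.0]))  # 0.5
```

Dial team_weight all the way up and every agent optimises the same shared score, effectively one system working toward a common goal.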