As technology grows more advanced, some (very smart) people are worried that the nightmare scenarios posited in science fiction might actually come true.
Stephen Hawking has discussed the potential dangers of artificial intelligence, and apparently Elon Musk feels the same way.
Over the weekend, the head of Tesla Motors and SpaceX tweeted that humanity should be careful about developing machines that can think for themselves.
“Worth reading Superintelligence by Bostrom,” Musk said. “We need to be careful with A.I. Potentially more dangerous than nukes.”
Yikes. The book Musk refers to is Superintelligence: Paths, Dangers, Strategies by Swedish philosopher Nick Bostrom. In it, Bostrom argues that machine intelligence could eventually surpass human intelligence and become the dominant force on Earth.
In a later tweet, Musk mused that he hoped “we’re not just the biological boot loader for digital super intelligence. Unfortunately, that is increasingly probable.”
This isn’t the first time Musk has expressed concern over A.I., a stance that runs counter to the utopian view of the technology professed by other Silicon Valley notables.
Back in June, he told CNBC that he believes a “Judgment Day” scenario straight out of Terminator is possible, and that he’s been investing in companies working on A.I. just to keep an eye on them.
The fact that a man who spends his days selling electric cars and pushing for widely available space travel is wary of thinking machines shouldn’t be too surprising. These technologies aren’t inevitably linked.
It’s easy to assume that electric cars, smartphones, and data glow clouds will inexorably lead to other technologies like A.I., but that doesn’t have to be the case.
Humans have to make the conscious decision to develop (or not develop) different technologies. Musk has chosen to opt out of one, and he’s probably right to do so.