
Experts think America should consider giving A.I. control of the nuclear button

In news to file under “What could possibly go wrong,” two U.S. deterrence experts have penned an article suggesting that it might be time to hand control of the launch button for America’s nuclear weapons over to artificial intelligence. You know, that thing which can mistake a 3D-printed turtle for a rifle!

In an article titled “America Needs a ‘Dead Hand,’” Dr. Adam Lowther and Curtis McGiffin suggest that “an automated strategic response system based on artificial intelligence” may be called for due to the speed with which a nuclear attack could be leveled against the United States. Specifically, they are worried about two weapons — hypersonic glide vehicles and hypersonic cruise missiles — which reduce response times to mere minutes from when an attack is launched until it strikes.


They acknowledge that such a suggestion is likely to “generate comparisons to Dr. Strangelove’s doomsday machine, War Games’ War Operation Plan Response, and The Terminator’s Skynet.” But they also argue that “the prophetic imagery of these science fiction films is quickly becoming reality.” Because modern weapons so sharply compress the response time frame, the two experts think that an A.I. system “with predetermined response decisions, that detects, decides, and directs strategic forces” could be the way to go.

As with any nuclear deterrent, the idea is not to use such a system. Nuclear deterrence rests on the premise that adversaries know the U.S. will detect a nuclear launch and answer with a devastating response. That threat should be enough to put them off. Using A.I. tools for this decision-making process would simply update the idea for 2019.

But is it really something we should consider? The researchers stop short of suggesting such a thing should definitely be embraced. “Artificial intelligence is no panacea,” they write. “Its failures are numerous. And the fact that there is profound concern by well-respected experts in the field that science fiction may become reality, because artificial intelligence designers cannot control their creation, should not be dismissed.”

Still, the fact that such a system is even feasible, both technologically and strategically, means it’s no longer quite as firmly in the realm of science fiction as many would like. And just when we thought the worst thing to come out of Terminator was the increasingly terrible sequels…

Luke Dormehl
Former Digital Trends Contributor
Scientists are using A.I. to create artificial human genetic code

Since at least 1950, when Alan Turing’s famous “Computing Machinery and Intelligence” paper was first published in the journal Mind, computer scientists interested in artificial intelligence have been fascinated by the notion of coding the mind. The mind, so the theory goes, is substrate independent, meaning that its processing ability does not, by necessity, have to be attached to the wetware of the brain. We could upload minds to computers or, conceivably, build entirely new ones wholly in the world of software.

This is all familiar stuff. While we have yet to build or re-create a mind in software, outside of the lowest-resolution abstractions that are modern neural networks, there is no shortage of computer scientists working on this effort right this moment.

A.I. teaching assistants could help fill the gaps created by virtual classrooms

There didn’t seem to be anything strange about the new teaching assistant, Jill Watson, who messaged students about assignments and due dates in professor Ashok Goel’s artificial intelligence class at the Georgia Institute of Technology. Her responses were brief but informative, and it wasn’t until the semester ended that the students learned Jill wasn’t actually a “she” at all, let alone a human being. Jill was a chatbot, built by Goel to help lighten the load on the class’s eight human TAs.

"We thought that if an A.I. TA would automatically answer routine questions that typically have crisp answers, then the (human) teaching staff could engage the students on the more open-ended questions," Goel told Digital Trends. "It is only later that we became motivated by the goal of building human-like A.I. TAs so that the students cannot easily tell the difference between human and A.I. TAs. Now we are interested in building A.I. TAs that enhance student engagement, retention, performance, and learning."

Wild new ‘brainsourcing’ technique trains A.I. directly with human brainwaves

Picture a room full of desks, more than two dozen in total. At each identical desk sits a person in front of a computer, playing a simple identification game. The game asks the user to complete an assortment of basic recognition tasks, such as choosing which photo in a series shows someone smiling, or depicts a person with dark hair or wearing glasses. The player must make their decision before moving on to the next picture.

Only they don’t do it by clicking with their mouse or tapping a touchscreen. Instead, they select the right answer simply by thinking it.
