
Can AI smart enough to play poker be weaponized without turning Terminator?

Last month, some of the world’s best Texas Hold’em poker players gathered at the Rivers Casino in Pittsburgh to take on an unusual opponent. Over the course of 20 days and 120,000 hands, they were utterly outmatched by an artificial intelligence known as Libratus.

This isn’t the first time an AI has beaten humans in a test of wits, and it won’t be the last. Last year, Google DeepMind’s AlphaGo beat champion Go player Lee Sedol in a high-profile series, and there are plans to teach AIs how to play StarCraft II.

However, these AIs aren’t being developed just to beat human players at games. The same groundwork that helps a computer excel at poker can be applied to all kinds of other scenarios. Right now, we’re watching AIs that can think three moves ahead of their opponents; soon, systems like these could be arbitrating matters of life and death.

Imperfect Information

Shortly after Libratus saw off its competition at the Rivers Casino, its creator, Carnegie Mellon professor Tuomas Sandholm, was interviewed about the project by Time. When asked about potential applications for the AI, he reeled off a list of “high stakes” possibilities including business negotiations, cybersecurity, and military strategy planning.


Libratus hit the headlines because of its ability to play poker, but it’s capable of much more than that. Sandholm didn’t spend twelve years of his life working on the project to spot his friends’ bluffs when game night rolls around.

The real strength of Libratus is its capacity to reason through scenarios where information is imperfect or incomplete. This is what sets it apart from the DeepMind system that beat Lee Sedol at Go last year. In Go, every player can see the entire game state; poker, by contrast, revolves around hidden information. Libratus couldn’t know what cards the other players were holding, and had to play around that restriction.
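
Sandholm’s team hasn’t published every detail of how Libratus works, but AIs for imperfect-information games are generally built on counterfactual regret minimization (CFR): the program plays against itself millions of times, tracks how much it regrets not taking each alternative action, and shifts probability toward the actions it regrets missing. The minimal Python sketch below shows that core loop, known as regret matching, applied to rock-paper-scissors rather than poker; it’s a toy illustration, not Libratus’s actual code.

import random

# A toy regret-matching loop, the core idea behind counterfactual regret
# minimization (CFR), the family of algorithms modern poker AIs build on.
# Rock-paper-scissors stands in for poker here; everything is illustrative.

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors
PAYOFF = [[0, -1, 1],
          [1, 0, -1],
          [-1, 1, 0]]  # PAYOFF[mine][theirs], from my point of view

def strategy_from_regrets(regrets):
    # Mix over actions in proportion to positive accumulated regret.
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    if total > 0:
        return [p / total for p in positive]
    return [1.0 / ACTIONS] * ACTIONS  # no regret yet, so play uniformly

def train(iterations=100_000):
    regrets = [0.0] * ACTIONS
    strategy_sum = [0.0] * ACTIONS
    for _ in range(iterations):
        strategy = strategy_from_regrets(regrets)
        for a in range(ACTIONS):
            strategy_sum[a] += strategy[a]
        # Self-play: both players sample from the same evolving strategy.
        mine = random.choices(range(ACTIONS), weights=strategy)[0]
        theirs = random.choices(range(ACTIONS), weights=strategy)[0]
        # Regret is how much better each alternative would have done.
        for a in range(ACTIONS):
            regrets[a] += PAYOFF[a][theirs] - PAYOFF[mine][theirs]
    total = sum(strategy_sum)
    return [s / total for s in strategy_sum]  # the average strategy

print(train())  # drifts toward [1/3, 1/3, 1/3], the Nash equilibrium

Run long enough, the averaged strategy settles on the game’s Nash equilibrium, an even mix of all three throws. Poker AIs apply the same principle to a vastly larger game tree, with unseen cards playing the role of the hidden simultaneous throw.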

Sandholm described heads-up, no-limit Texas Hold’em as the “last frontier” among games that have been subjected to significant AI research. The fact that Libratus was so successful against high-level human players represents a benchmark for the problem-solving capacity of AIs working with imperfect information.


It’s no secret that AIs are getting smarter; exhibitions like last month’s high-stakes poker game are intended to publicize the most recent advances. AI has long been a touchstone for cutting-edge technology, and there’s now plenty of easily digestible evidence of how advanced work in the field has become. The financial and medical industries are already discussing how to make these advances work for them, and they’re not alone.

The United States military is already deep in the process of establishing the best way to implement this kind of technology on the battlefield. It’s not a case of ‘if’; it’s a case of ‘how’.

Lieutenant Libratus

As it stands, the U.S. military is embroiled in a fierce debate over how best to use AI to wage war. Opinion is split between using the technology to assist human operatives and allowing the creation of autonomous, AI-controlled entities.

Libratus hit the headlines because it can play poker, but it’s capable of much more than that.

It’s easy to see why some are eager to pursue AI-controlled forces. On the surface, it’s a straightforward way of diminishing human casualties in combat operations. However, this type of technology is a Pandora’s box: once it’s available to some, it will quickly be adopted by all.

Whether or not you trust any country’s government to use AI-controlled forces ethically, it seems plain that letting these weapons of war out into the open would result in heinous acts of a magnitude we can’t even comprehend.

However, there’s also an argument that someone, somewhere will implement this technology eventually. Refusing to pursue these advances on ethical grounds is perhaps naïve if the technology is going to end up in enemy hands regardless.

This dispute has come to be known as the Terminator conundrum, a turn of phrase used on several occasions by General Paul J. Selva, the Vice Chairman of the Joint Chiefs of Staff.

“I don’t think it’s impossible that somebody will try to build a completely autonomous system,” said General Selva at a Military Strategy Forum held at the Center for Strategic and International Studies in August 2016. “And I’m not talking about something like a cruise missile or a smart torpedo or a mine, that requires a human to target it and release it, and it goes and finds its target. I’m talking about a wholly robotic system that decides whether or not — at the point of decision — it’s going to do lethal harm.”

Selva argued that it’s important to establish a set of conventions to govern this emerging form of warfare. He acknowledged that these rules will need to be iterated upon, and that there will always be entities that disregard any regulation, but without a baseline for fair usage, all bets are off.

It won’t be long before simple AI is used in warfare.

Many experts would agree that AI hasn’t yet reached the stage of sophistication required for ethical use in military operations. However, it won’t be long before simple AI can be used in warfare, even if the implementation is clumsy.

Without rules in place, there’s no way to differentiate between ethical usage and clumsy usage. Establishing guidelines might require a dip into Pandora’s box, but you could argue that the alternative amounts to leaving the box wide open.

Advanced Warfare Requires Advanced Ethics

After Libratus dominated its opposition in Texas Hold’em, Sandholm told Time that before the contest, he thought that the AI had a “50-50 chance” to win. It doesn’t take one of the world’s best poker players to recognize those aren’t great odds.

Sandholm is likely playing up his self-doubt for the sake of the interview, but it certainly seems that he wasn’t completely confident that Libratus had victory within its grasp. That’s fine when the stakes are limited to his reputation, and the reputation of the university he represents. However, when talking about using AI on the battlefield, a 50-50 chance that everything goes to plan isn’t anywhere near good enough.

Libratus is an amazing accomplishment in the field of AI, but it’s also a reminder of how much work there is still to be done. The “imperfect information” that can impact the way a game of Texas Hold’em plays out is limited to the 52 cards in a standard deck; in combat operations, there are countless other known and unknown variables that come into play.

Once military implementation of AI becomes commonplace, it will be too late to start regulating its usage. It’s fortunate that there’s still work to be done before today’s leading AI is competent enough to answer to a commanding officer, because there’s plenty of legislative groundwork to be laid before that kind of practice can be considered ethically acceptable.

Once the military implementation of AI becomes commonplace, it will be too late to start regulating.

During the Military Strategy Forum mentioned earlier, General Selva noted that experts thought the creation of a wholly autonomous machine soldier was around a decade away. It’s perhaps relevant that when DeepMind’s AlphaGo beat Lee Sedol last year, the accomplishment came a decade earlier than expected, according to a report from MIT Technology Review.

Research into AI is progressing at a rate that’s surprising even to experts working in the field, and that’s great news. However, there’s a marked difference between useful progress and technology that’s ready to do the job when lives are on the line.

Military implementation of AI will become a reality, and it’ll probably happen sooner than we expect. Now is the time to put guidelines in place, so we don’t run the risk of seeing these technologies abused once they’re advanced enough to be put in the line of fire.

Brad Jones
Former Digital Trends Contributor