
I brought ChatGPT to the board game world. Is it ready for game night?

We all know that ChatGPT is great at speeding up mundane tasks. What could be drier than explaining the rules of a complicated game at board game night?

There’s no substitute for just knowing the game, but being able to reach for AI instead of the rulebook could make things a whole lot easier. Nothing derails a great game of Twilight Imperium like breaking out the Living Rules and endlessly scrolling. So, if I were to bring ChatGPT to board game night, I could definitely see it coming in handy. But before I subjected my friends to a robot reading them the rules, I decided to test it out with some basic questions to see if it was up to snuff.


A defined ruleset sounds ideal

I have no idea why ChatGPT knows the rules of so many games, but it does. Or at least thinks it does. While it might be tricky to find an online manual for some of my collection, ChatGPT seems to have it all — some of those billions of data points it was trained on reportedly included the errata for something as obscure as the third Battlestar Galactica expansion.

That’s great, though, because it should know all the rules I don’t, right? Within the limited, constrained, and very particular environment of a board game, ChatGPT, with its shallow understanding of most things but extensive knowledge of certain topics, should be in its element.

Unfortunately, as with everything else ChatGPT confidently posits to know, it’s often not quite right, and sometimes it’s outright wrong.

ChatGPT tries to answer a question on board gaming.

The initial answer is mostly correct: you don’t add the dice to the hunt pool. But hunt tiles aren’t added every turn, either. Maybe it’d be better if I had ChatGPT tell me where in the rulebook I can find this information?

Asking ChatGPT more board game questions.

Hmm. Apparently, putting it on the spot prompts it to “correct” itself and get the rule more wrong than it did before. It thinks there might be multiple “Gandalfs” in the Fellowship, and that there are special “Will of the West” dice, rather than that being one of the possible results on the game’s action dice.

It then doubles down on that error by citing a page in the rulebook that has nothing to do with “The Hunt.” There’s a section called “The Hunt for the Ring,” but it doesn’t appear until page 40.

War of the Ring rulebook.

But maybe this isn’t ChatGPT’s best game. Let’s give it one more chance to help with a game that’s somehow even bigger and more complicated than War of the Ring: Twilight Imperium.

ChatGPT answering board game queries.

Here, ChatGPT does an admirable job in that it gets the answer right, but for the wrong reason. You can’t take a home system because you can’t invade it, not because you can’t move ships there.

If that seems pedantic, I get it. I don’t like telling my friend they can’t do something because they’ve misunderstood the very specifically worded text on the card they’ve played. These details matter in games, and if I’m going to get ChatGPT to do it for me, I need to be able to fully trust it.

It’s back to reading the rulebooks over and over

This was just a snippet of my time quizzing ChatGPT on how to play my favorite games. It knew how to launch ships in Battlestar Galactica, even if it wasn’t clear about which part of your turn you do that in. It had a good idea of how to get cave tokens in Quest for El Dorado, but was very wrong on the cost you had to pay for them.

It did know Kingdom Death: Monster quite well, though, accurately reporting the stats of some of the monsters, and even making suggestions on how to modify those stats to my advantage.

It was a fun exercise seeing what ChatGPT knows about games, and it feels like one area where in the future, it could be invaluable. It wouldn’t even need to know all games. I can imagine a scenario where game publishers could have their own AI to help teach you their games, and I wouldn’t be surprised if it could act as a stand-in player one day too.

And who knows, maybe GPT-4 in ChatGPT Plus has already solved this problem.

For now though, since I can’t trust it, it’s back to reading rulebooks on the toilet so that when one of the players has a question, I can answer it. Because ChatGPT can’t. Yet.

Jon Martindale
Jon Martindale is a freelance evergreen writer and occasional section coordinator, covering how-to guides, best-of lists, and…