
As AI gets smarter, humans need to stop being sore losers

Earlier this month, Google’s DeepMind team made history when its AlphaGo software managed to defeat professional Go player Lee Sedol in a five-game match. The contest was billed as a battle between man and machine — and it saw the human player largely outclassed by his AI opponent.

Artificial intelligence is only going to grow more sophisticated in the coming years, becoming a bigger factor in everyday life as the technology matures. With artificial minds growing ever more powerful, humans may have to change the game to maintain superiority.


Back to the 90s

To get a true impression of how much progress has been made in the field of artificial intelligence in recent years, it’s useful to compare the AlphaGo AI facing Lee with IBM’s Deep Blue computer, which faced Chess grandmaster Garry Kasparov in the 1990s.

At the time, Kasparov was widely considered to be the best Chess player on the face of the planet. He had already seen off an AI opponent quite handily, dispatching IBM’s Deep Thought computer — named for the fictional machine built to compute the answer to life, the universe, and everything in The Hitchhiker’s Guide to the Galaxy — in a two-game series held in 1989.

Undeterred, IBM continued development of its Chess-playing computer. In 1996, a new iteration of the project known as Deep Blue was brought to Philadelphia to face Kasparov. The computer became the first to win a game against a reigning world champion under normal time controls, but Kasparov dominated after that early defeat and took the series 4-2, with the two drawn games contributing half a point to each side.

Fifteen months later, a rematch was held in New York City. This time Deep Blue took the match 3.5-2.5, with two outright wins, three draws, and a single loss. Frustrated, Kasparov accused IBM of cheating and demanded another match. The company flatly refused, and the system was dismantled.

Advance to Go

Kasparov made attempts to explain away the loss to Deep Blue, pitching the idea that the match was a publicity stunt carried out by IBM. He theorized that human players had intervened to improve the computer’s performance, something that the company vehemently denied. Others in the Chess community would suggest that the machine was simply running a problem-solving program, and that it shouldn’t be considered true intelligence, or a real mastery of the game.

That said, the result of the high-profile series was enough proof for many that computers had eclipsed human ability in the game of Chess.


Deep Blue beat Kasparov using the “brute force” method of computation. This technique is an exhaustive search that systematically works its way through all possible solutions until it finds the appropriate option. Its greatest strength is that it will always find a solution if one exists, but it’s let down by the fact that complex problems can take an enormous amount of time to work through.
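To see the idea in miniature, here’s a sketch of an exhaustive game-tree search applied to a trivially small game — a take-away game in which players alternately remove one to three stones and taking the last stone wins. The game itself is purely an illustrative assumption; Deep Blue applied the same principle to Chess at vastly greater scale and with heavy engineering on top.

```python
# A minimal sketch of exhaustive ("brute force") game-tree search on a toy
# take-away game. An illustration of the principle, not Deep Blue's code.

def best_move(stones):
    """Return (score, move) for the player to move, examining every line of play."""
    best = (-2, None)  # score is +1 for a forced win, -1 for a forced loss
    for take in (1, 2, 3):
        if take > stones:
            continue
        # Taking the last stone wins outright; otherwise the opponent moves next,
        # and whatever they can force is the negation of our result (negamax).
        score = 1 if take == stones else -best_move(stones - take)[0]
        if score > best[0]:
            best = (score, take)
    return best

score, move = best_move(10)
print(f"With 10 stones on the table, take {move} (forced outcome: {score:+d})")
```

Every branch of the game tree is examined, which is exactly why the approach is guaranteed to find the best line of play — and exactly why it becomes unworkable as the tree grows.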

Given that this brute force technique had proven so effective in Chess, it became clear that any subsequent challenge would have to change the parameters in some manner. As such, competition veered away from Chess and toward Go.


While Chess and Go are equally revered as classical strategy games, there is little doubt that the latter is the more complex. A Chess board is made up of 64 squares, compared to the 361 intersections on a Go board. And the fact that Chess centers on putting your opponent in checkmate, as opposed to the territory-grabbing tactics at the heart of Go, makes the latter a harder problem for a computer to solve.

When discussing these games in relation to computer play, the numbers are all that really matter. There are 400 possible positions after the first pair of moves in Chess, compared to 32,490 in Go — a figure that rises to a staggering 129,960 if symmetrically identical openings are counted separately.
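The arithmetic behind those figures is simple enough to check. A quick sketch — note that dividing by four to reach the smaller Go figure is a rough adjustment for board symmetry, an assumption about how the commonly quoted number is derived:

```python
# Back-of-the-envelope check on the opening counts quoted above.
chess_openings = 20 * 20      # 20 legal first moves for White, 20 replies for Black
go_openings = 361 * 360       # any empty intersection, then any other intersection

print(chess_openings)         # 400
print(go_openings)            # 129,960
print(go_openings // 4)       # 32,490 -- the commonly quoted figure once
                              # symmetric openings are folded together
```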

This complexity means that brute force techniques alone are not enough to crack the game of Go. Alongside an extensive training program of games against computer and human opposition, AlphaGo relied on a combination of different approaches.

Monte Carlo tree search, an algorithm devised to help computers make strong decisions quickly during gameplay, was implemented to help AlphaGo prioritize among its options under the tight time constraints of competitive Go. Meanwhile, neural networks inspired by biological brains provided the groundwork for the system to actively learn.
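At its core, Monte Carlo tree search repeats four steps — selection, expansion, simulation, and backpropagation — spending its limited thinking time on the most promising branches rather than on every branch. The sketch below shows that loop on the same toy take-away game used earlier; the Nim rules, the node bookkeeping, and the UCB1 exploration constant are illustrative assumptions, not details of AlphaGo’s implementation.

```python
# A minimal Monte Carlo tree search: selection, expansion, simulation, backpropagation.
import math
import random

class Nim:
    """Toy game: players alternately take 1-3 stones; taking the last stone wins."""
    def __init__(self, stones=10, player=1):
        self.stones, self.player = stones, player
    def legal_moves(self):
        return [n for n in (1, 2, 3) if n <= self.stones]
    def play(self, move):
        return Nim(self.stones - move, -self.player)
    def is_over(self):
        return self.stones == 0
    def winner(self):
        return -self.player  # the player who just took the last stone

class Node:
    def __init__(self, state, parent=None, move=None):
        self.state, self.parent, self.move = state, parent, move
        self.children, self.visits, self.wins = [], 0, 0.0
    def ucb1(self, c=1.4):
        if self.visits == 0:
            return float("inf")  # always try unvisited moves first
        return self.wins / self.visits + c * math.sqrt(math.log(self.parent.visits) / self.visits)

def mcts(root_state, iterations=2000):
    root = Node(root_state)
    for _ in range(iterations):
        # 1. Selection: walk down the tree, taking the child with the best UCB1 score.
        node = root
        while node.children and not node.state.is_over():
            node = max(node.children, key=Node.ucb1)
        # 2. Expansion: add a child for every legal move, then pick one to explore.
        if not node.state.is_over() and not node.children:
            node.children = [Node(node.state.play(m), node, m) for m in node.state.legal_moves()]
            node = random.choice(node.children)
        # 3. Simulation: finish the game with random moves for a quick outcome estimate.
        state = node.state
        while not state.is_over():
            state = state.play(random.choice(state.legal_moves()))
        winner = state.winner()
        # 4. Backpropagation: credit the result to every node on the path back to the root.
        while node is not None:
            node.visits += 1
            if node.parent is not None and winner == node.parent.state.player:
                node.wins += 1  # a win for the player who chose the move into this node
            node = node.parent
    return max(root.children, key=lambda n: n.visits).move

print("Suggested first move:", mcts(Nim(stones=10)))
```

Unlike the exhaustive search shown earlier, this loop never tries to visit every position; it samples, and the statistics it gathers steer later samples toward the branches that look best.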

The computer that mastered Go

An intensive training regimen is one element that sets AlphaGo apart from Deep Blue. Google’s computer played countless practice games against human and machine opponents, with its neural networks learning from each one. The researchers working on the project have described the process as trial and error, which bears a closer resemblance to the way a human would prepare for professional competition than brute force methodology does.

These techniques were enough to defeat Lee — with the exception of the anomalous fourth game, where Lee managed to conquer the AI.

Move 78 and move 44

Pundits looking for an explanation of why Lee managed to best AlphaGo point to his 78th move in the fourth game. This play was met with praise from all directions, being called “beautiful” by Wired and described as a “masterful comeback” by Go Game Guru.

Chinese commentator Gu Li described Lee’s 78th move — an example of the kind of play known to Go players as a tesuji — as the “hand of God.” English-language commentator Michael Redmond noted that the move “would take most opponents by surprise.”


AlphaGo crumbled in its response to Lee’s power play. Shortly afterward, the human won his first game in the series.

Compare this to an exchange from the 1997 series between Kasparov and Deep Blue. The computer’s 44th move in the first game baffled Kasparov — he would later attribute it to “superior intelligence.” Although the champion went on to win that opening game shortly afterward, the move has been singled out as the turning point that threw Kasparov off his game. It made him doubt his chances.

We now know that the move was not genius, but a fall-back in Deep Blue’s programming. Unable to find a useful play, the computer simply resorted to a fail-safe.

These plays by Lee and Deep Blue both had the same effect. They shocked onlookers, and provided a win over a favored opponent. More importantly, both moves led to victory through the same means — confounding an adversary by making an audacious play.

In the end, it’s meaningless that Deep Blue’s move came as a result of a glitch. Part of the strategy in games like Chess and Go is holding your nerve and making your rival feel as if you’re one step ahead of them, even when you’re not. Kasparov — albeit erroneously — allowed the “superior intelligence” of the computer to get in his head.


As a human reading this article, you’ll naturally want to jump to the defense of an organic life-form over its machine opponent. But fluke or not, Deep Blue managed to carry the momentum from an irregular play into a five-game unbeaten run.

A Time headline from last year asked, “did Deep Blue beat Kasparov because of a system glitch?” Implicitly, that wording tries to explain away the result as the product of a malfunction rather than machine skill. Compare that to the praise-laden coverage of Lee’s victory in the fourth game against AlphaGo — a win that, while worthy of congratulations, didn’t prompt the kind of turnaround Deep Blue accomplished.

This contrast should illustrate something that may well be obvious. Humans are bad losers.

The AI effect

For all that humans might celebrate accomplishments in artificial intelligence — and it’s important to remember that it was people who built these systems — we’re only too quick to write off machine minds as some lesser branch of thought.

After Deep Blue beat Kasparov, some critics noted that it ‘just’ used brute force to pick up the win. Similar fault-finding has dogged this type of technology for decades, a phenomenon that has come to be known as the AI effect.

In a Q&A portion of her book Machines Who Think, noted tech commentator Pamela McCorduck gave the following summation of the problem:

Q: What so-called smart computers do — is that really thinking?

A: No, if you insist that thinking can only take place inside the human cranium. But yes, if you believe that making difficult judgments, the kind usually left to experts, choosing among plausible alternatives, and acting on those choices, is thinking. That’s what artificial intelligences do right now. Along with most people in AI, I consider what artificial intelligences do as a form of thinking, though I agree that these programs don’t think just like human beings do, for the most part. I’m not sure that’s even desirable.

The divide between ‘real’ thinking, and whatever the alternative is, can only serve to downplay the results of research into artificial intelligence. Every time a machine can fulfill our expectations of thought, we change the parameters — and, even then, our superiority isn’t guaranteed.

Moving the goalposts

In 2003, American computer scientists set out to create a game that computers would find difficult. The result was Arimaa, a contest played with the pieces and board of a standard Chess set, but specifically designed so that the brute force tactics used to beat Kasparov offer no advantage.

Arimaa sets itself apart from Chess in a variety of ways: players lay out their pieces in the same two rows but can arrange them however they wish, pieces are captured by pushing or pulling opposing units onto trap squares, and the game is won by getting one of your weakest units to your opponent’s side of the board.


All of these mechanics were designed with the intent of making the game difficult for computers. Compared to the 400 possible openings in Chess and the 32,490 in Go, Arimaa offers each player 64,864,800 possible ways to set up their pieces before a single move is made.

To further complicate matters, the pieces form a hierarchy — an elephant, a camel, two horses, two dogs, two cats, and eight rabbits — in which each unit can only push or pull a weaker one. The added potential for different outcomes means that computers are unable to rely on brute force. The game’s creator put up a $10,000 reward in 2004 for any computer program, running on off-the-shelf hardware, that could defeat the game’s three best human players.
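That 64,864,800 figure can be checked directly from the piece counts just listed: with identical pieces interchangeable, the number of setups is a multinomial coefficient. A quick sketch — the only assumption is the standard Arimaa piece set described above:

```python
# Number of distinct ways one player can arrange their 16 pieces across
# their 16 home squares: 16! divided by the factorial of each group of
# identical pieces.
from math import factorial

piece_counts = [1, 1, 2, 2, 2, 8]  # elephant, camel, horses, dogs, cats, rabbits
arrangements = factorial(16)
for count in piece_counts:
    arrangements //= factorial(count)

print(f"{arrangements:,}")  # 64,864,800 starting setups per player
```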

For the first several years of competition, computer players could only pick up the occasional game when the human offered up a handicap. Then, in 2015, David Wu’s three-time computer world champion program Sharp took seven wins from nine games played and comfortably claimed the prize fund.

With the right (human) minds working on the problem, it seems there’s no game computers can’t beat people at — from tests of strategic insight that are thousands of years old to challenges barely a decade old, created for this very purpose.

It seems likely that artificial intelligence will be implemented into more and more aspects of our everyday life in years to come. As that process takes place, we’ll hopefully see less of a tendency to downplay these advances as parlor tricks. Computers are only going to improve as competitors, so humans need to be ready to take losses with grace.
