
Do humans make computers smarter?

As machine learning makes computers smarter than us in some important ways, does adding a human to the mix make the overall system smarter? Does human plus machine always beat the machine by itself?

The question is easy when we think about using computers to do, say, long division. Would it really help to have a human hovering over the machine reminding it to carry the one? The issue is becoming less clear, and more important, as autonomous cars start to roam our streets.


Siri, you can drive my car

Many wary citizens assume that, for safety's sake, an autonomous car ought to have a steering wheel and brakes that a human can use to override the car's computer in an emergency. They assume, correctly for now, that humans are better drivers: so far, autonomous cars have been in more accidents, though mostly minor ones caused by human-driven cars. But I'm willing to bet that as the percentage of driverless cars increases, and as they get smarter, the accident rate for cars without human overrides will be significantly lower than for cars with them.


After all, autonomous cars have a 360-degree view of their surroundings, while humans are lucky to have half that. Autonomous cars react at the speed of light; humans react at the speed of neurochemicals, contradictory impulses, and second thoughts. Humans often make decisions that preserve their own lives above all others, while autonomous cars, especially once they're networked, can make decisions that minimize the sum total of bad consequences. (Maybe. Mercedes has announced that its autonomous cars will save passengers over pedestrians.)

In short, why would we think that cars would be safer if we put a self-interested, fear-driven, lethargic, poorly informed animal in charge?

A game of Go

But take a case where reaction time doesn't matter and where machines have access to the same information as humans. For example, imagine a computer playing a game of Go against a human. Surely adding a highly skilled player to the computer's side (or, put anthropocentrically, providing a computer to assist a highly skilled human) would only make the computer better.

Actually, no. AlphaGo, Google's system that beat the third-ranked human player, makes its moves based on its analysis of 30 million moves in 160,000 games, processed through multiple layers of artificial neural networks that implement a type of machine learning called deep learning.

AlphaGo’s analysis assigns weights to potential moves and calculates the one most likely to lead to victory. The network of weighted moves is so large and complex that a human being simply could not comprehend the data and their relations, or predict their outcome.

AlphaGo (Photo: Google)
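For the technically curious, here's a minimal sketch in Python of what a move-weighting network computes. To be clear, this is not AlphaGo's architecture: the layer sizes are invented and the weights are random rather than learned from those 30 million moves. It shows only the shape of the computation, in which a board goes in and a weight for every possible move comes out.

```python
# A toy move-weighting network: NOT AlphaGo's architecture.
# Layer sizes are invented and the weights are random, not learned.
import numpy as np

rng = np.random.default_rng(0)

BOARD_POINTS = 19 * 19  # 361 candidate moves on a full Go board
HIDDEN = 128            # hypothetical hidden-layer width

W1 = rng.normal(scale=0.1, size=(BOARD_POINTS, HIDDEN))
W2 = rng.normal(scale=0.1, size=(HIDDEN, BOARD_POINTS))

def move_weights(board: np.ndarray) -> np.ndarray:
    """Map a board (+1 our stones, -1 theirs, 0 empty) to a weight per move."""
    hidden = np.maximum(board.flatten() @ W1, 0.0)  # one ReLU layer
    logits = hidden @ W2
    exp = np.exp(logits - logits.max())             # numerically stable softmax
    return exp / exp.sum()                          # weights sum to 1

board = np.zeros((19, 19))                          # an empty board
weights = move_weights(board)
row, col = np.unravel_index(int(weights.argmax()), (19, 19))
print(f"Highest-weighted move: ({row}, {col}), weight {weights.max():.4f}")
```

Even in this toy, the "reasoning" behind the highest-weighted move is nothing but a pile of numbers, which is exactly the problem a human partner would face.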

The process is far more complex than this, of course, and includes algorithms to winnow searches and to learn from successful projected behaviors. Another caveat: Recent news from MIT suggests we may be getting better at enabling neural nets to explain themselves.
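To give a flavor of what winnowing a search by simulation can look like, here's a toy flat Monte Carlo search, a much-simplified cousin of the tree search AlphaGo pairs with its networks. The game is one-pile Nim, chosen only because it fits in a few lines; the playout count is an arbitrary choice.

```python
# Toy "winnowing" by random playouts: a flat Monte Carlo search on
# one-pile Nim (take 1-3 stones; whoever takes the last stone wins).
# Illustrative only; AlphaGo uses a far more sophisticated tree search.
import random

def random_playout(pile: int, my_turn: bool) -> bool:
    """Finish the game with random moves; return True if we win."""
    while pile > 0:
        pile -= random.randint(1, min(3, pile))
        if pile == 0:
            return my_turn      # whoever just moved took the last stone
        my_turn = not my_turn
    return not my_turn          # entered empty: the previous mover (us) won

def best_move(pile: int, playouts: int = 2000) -> int:
    """Estimate each legal move's win rate and keep the best one."""
    win_rate = {}
    for take in range(1, min(3, pile) + 1):
        wins = sum(random_playout(pile - take, my_turn=False)
                   for _ in range(playouts))
        win_rate[take] = wins / playouts
    return max(win_rate, key=win_rate.get)

print(best_move(10))  # tends to print 2: leaving a multiple of 4 is optimal
```

The principle scales: rather than examining every continuation, the program samples playouts and concentrates on the moves that keep winning.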

Still, imagine that we gave AlphaGo a highly ranked human partner and had that team play against an unassisted human. AlphaGo comes up with a move. Its human partner thinks it’s crazy. AlphaGo literally cannot explain why it disagrees, for the explanation is that vast network of weighted possibilities that surpasses the capacity of the human brain.

But maybe good old human intuition is better than the cold analysis of a machine. Maybe we should let the human’s judgment override the machine’s calculations.

Maybe, but nah. In the situation we’ve described, the machine wants to make one move, and the human wants to make another. Whose move is better? For any particular move, we can’t know, but we could set up some trials of AlphaGo playing with and without a human partner. We could then see which configuration wins more games.
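Scoring such a trial is straightforward. Here's a minimal sketch, with made-up win counts standing in for the real results we'd collect: play a few hundred games in each configuration and check whether the gap in win rates is larger than luck would explain.

```python
# Scoring the proposed trial: machine-alone vs. machine-plus-human.
# The win counts below are MADE-UP placeholders, not real results.
from math import erf, sqrt

def win_rate_gap_p_value(wins_a: int, games_a: int,
                         wins_b: int, games_b: int) -> float:
    """Two-proportion z-test: probability of a gap this large by luck."""
    p_a, p_b = wins_a / games_a, wins_b / games_b
    pooled = (wins_a + wins_b) / (games_a + games_b)
    se = sqrt(pooled * (1 - pooled) * (1 / games_a + 1 / games_b))
    z = abs(p_a - p_b) / se
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))   # two-sided p-value

# Hypothetical tallies: machine alone wins 180 of 200 games;
# machine with a human who may override it wins 150 of 200.
p = win_rate_gap_p_value(180, 200, 150, 200)
print(f"p-value: {p:.5f}")  # a small value means the gap is unlikely to be luck
```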

The proof is in the results

But we don’t even need to do that to get our answer. When a human partner disagrees with AlphaGo’s recommendation, the human is in effect playing against AlphaGo: Each is coming up with its own moves. So far, evidence suggests that when humans do that, they usually lose to the computer.


Now, of course there are situations where humans plus machines are likely to do better than machines on their own, at least for the foreseeable future. A machine might get good at recommending which greeting card to send to a coworker, but the human will still need to make the judgment about whether the recommended card is too snarky, too informal, or overly saccharine. Likewise, we may like getting recommendations from Amazon about the next book to read, but we are going to continue to want to be given a selection, rather than having Amazon automatically purchase for us the book it predicts we’ll like most.

We are also a big cultural leap away from letting computers arrange our marriages, even though they may well be better at it than we are, since our 40 to 50 percent divorce rate is evidence that we suck at it.

In AI we trust

As we get used to the ability of deep learning to come to conclusions more reliable than the ones our human brains come up with, the fields we preserve for sovereign human judgment will narrow. After all, the computer may well know more about our coworker than we do, and thus will correctly steer us away from the card with the adorable cats because one of our coworker’s cats just died, or because, well, the neural network may not be able to tell us why. And if we find we always enjoy Amazon’s top recommendations, we might find it reasonable to stop looking at its second choices, much less at its explanation of its choices for us.

After all, we don’t ask our calculators to show us their work.

David Weinberger
Former Digital Trends Contributor
Dr. Weinberger is a senior researcher at the Berkman Center. He has been a philosophy professor, journalist, strategic…