
Do humans make computers smarter?

(Image: a Mercedes autonomous car during self-driving tests in Germany)
As machine learning makes computers smarter than us in some important ways, does adding a human to the mix make the overall system smarter? Does human plus machine always beat the machine by itself?

The question is easy when we think about using computers to do, say, long division. Would it really help to have a human hovering over the machine reminding it to carry the one? The issue is becoming less clear, and more important, as autonomous cars start to roam our streets.


Siri, you can drive my car

Many wary citizens assume that, for safety’s sake, an autonomous car ought to have a steering wheel and brakes a human can use to override the car’s computer in an emergency. They assume, correctly for now, that humans are better drivers: so far, autonomous cars have had more accidents, though mostly minor ones caused by human-driven cars. Still, I’m willing to bet that as the percentage of driverless cars increases, and as they get smarter, the accident rate for cars without human overrides will fall significantly below the rate for cars with them.


After all, autonomous cars have a 360-degree view of their surroundings, while humans are lucky to have half that. Autonomous cars react at electronic speeds; humans react at the speed of neurochemicals, contradictory impulses, and second thoughts. Humans often make decisions that preserve their own lives above all others, while autonomous cars, especially once they’re networked, can make decisions that minimize the sum total of bad consequences. (Maybe. Mercedes has announced that its autonomous cars will save passengers over pedestrians.)

In short, why would we think that cars would be safer if we put a self-interested, fear-driven, lethargic, poorly informed animal in charge?

A game of Go

But take a case where reaction time doesn’t matter, and where machines have access to the same information as humans. For example, imagine a computer playing a game of Go against a human. Surely adding a highly skilled player to the computer’s side (or, put anthropocentrically, providing a computer to assist a highly skilled human) would only make the computer better.

Actually, no. AlphaGo, Google’s system that beat the third-ranked human player, makes its moves based on its analysis of 30 million moves from 160,000 games, processed through multiple layers of artificial neural networks that implement a type of machine learning called deep learning.

AlphaGo’s analysis assigns weights to potential moves and calculates the one most likely to lead to victory. The network of weighted moves is so large and complex that a human being simply could not comprehend the data and their relations, or predict their outcome.

(Image: AlphaGo. Photo: Google)
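To make the idea of weighted moves concrete, here is a minimal Python sketch of how a network’s raw scores might be converted into a move choice. The move names and scores are invented for illustration; AlphaGo’s real networks score vastly more candidates through many layers.

import math

# Hypothetical raw scores a network might assign to candidate moves.
raw_scores = {"D4": 2.1, "Q16": 1.7, "C3": 0.4}

def softmax(scores):
    """Convert raw scores into a probability distribution over moves."""
    exps = {move: math.exp(s) for move, s in scores.items()}
    total = sum(exps.values())
    return {move: e / total for move, e in exps.items()}

probabilities = softmax(raw_scores)
best_move = max(probabilities, key=probabilities.get)
print(best_move, round(probabilities[best_move], 2))  # D4 0.54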

The process is far more complex than this, of course, and includes algorithms to winnow searches and to learn from successful projected behaviors. Another caveat: Recent news from MIT suggests we may be getting better at enabling neural nets to explain themselves.
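As a toy illustration of the search winnowing just mentioned, the sketch below keeps only the most promising candidates before any deeper lookahead. The candidates and probabilities are made up; AlphaGo’s actual search, a Monte Carlo tree search guided by its networks, is far more elaborate.

def winnow(move_probs, k=2):
    """Keep the k moves with the highest estimated win probability."""
    ranked = sorted(move_probs.items(), key=lambda kv: kv[1], reverse=True)
    return dict(ranked[:k])

candidates = {"D4": 0.41, "Q16": 0.33, "C3": 0.14, "K10": 0.12}
print(winnow(candidates))  # only D4 and Q16 survive for deeper search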

Still, imagine that we gave AlphaGo a highly ranked human partner and had that team play against an unassisted human. AlphaGo comes up with a move. Its human partner thinks it’s crazy. AlphaGo literally cannot explain why it disagrees, for the explanation is that vast network of weighted possibilities that surpasses the capacity of the human brain.

But maybe good old human intuition is better than the cold analysis of a machine. Maybe we should let the human’s judgment override the machine’s calculations.

Maybe, but nah. In the situation we’ve described, the machine wants to make one move, and the human wants to make another. Whose move is better? For any particular move, we can’t know, but we could set up some trials of AlphaGo playing with and without a human partner. We could then see which configuration wins more games.
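Such a head-to-head trial is easy to sketch. The Python below simulates it with placeholder win probabilities standing in for the outcomes of real games; the 0.55 and 0.48 figures are assumptions for illustration, not measured results.

import random

def win_rate(p_win, games=1000):
    """Simulate independent games and return the observed win rate."""
    wins = sum(random.random() < p_win for _ in range(games))
    return wins / games

solo = win_rate(0.55)   # machine playing alone (assumed win probability)
team = win_rate(0.48)   # machine with human overrides (assumed)
print(f"solo: {solo:.1%}  team: {team:.1%}")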

The proof is in the results

But we don’t even need to do that to get our answer. When a human partner disagrees with AlphaGo’s recommendation, the human is in effect playing against AlphaGo: Each is coming up with its own moves. So far, evidence suggests that when humans do that, they usually lose to the computer.


Now, of course there are situations where humans plus machines are likely to do better than machines on their own, at least for the foreseeable future. A machine might get good at recommending which greeting card to send to a coworker, but the human will still need to make the judgment about whether the recommended card is too snarky, too informal, or overly saccharine. Likewise, we may like getting recommendations from Amazon about the next book to read, but we are going to continue to want to be given a selection, rather than having Amazon automatically purchase for us the book it predicts we’ll like most.

We are also a big cultural leap away from letting computers arrange our marriages, even though they may well be better at it than we are, since our 40 to 50 percent divorce rate is evidence that we suck at it.

In AI we trust

As we get used to the ability of deep learning to come to conclusions more reliable than the ones our human brains come up with, the fields we preserve for sovereign human judgment will narrow. After all, the computer may well know more about our coworker than we do, and thus will correctly steer us away from the card with the adorable cats because one of our coworker’s cats just died, or because, well, the neural network may not be able to tell us why. And if we find we always enjoy Amazon’s top recommendations, we might find it reasonable to stop looking at its second choices, much less at its explanation of its choices for us.

After all, we don’t ask our calculators to show us their work.

David Weinberger