
A learning bias found in kids could help make A.I. technology better

The theory behind machine learning tools like neural networks is that they function and, more specifically, learn in a way similar to the human brain. Just as we discover the world through trial and error, so too does modern artificial intelligence. In practice, however, things are a bit different. There are aspects of childhood learning that machines can't replicate — and they are among the things that, in many domains, make humans the superior learners.

Researchers at New York University are working to change that. Kanishk Gandhi and Brenden Lake have explored how something called "mutual exclusivity bias," which is present in kids, could help make A.I. better at learning tasks like understanding language.


“When children endeavor to learn a new word, they rely on inductive biases to narrow the space of possible meanings,” Gandhi, a graduate student in New York University’s Human & Machine Learning Lab, told Digital Trends. “Mutual exclusivity (ME) is a belief that children have that if an object has one name, it cannot have another. Mutual exclusivity helps us in understanding the meaning of a novel word in ambiguous contexts. For example, [if] children are told to ‘show me the dax’ when presented with a familiar and an unfamiliar object, they tend to pick the unfamiliar one.”
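The "show me the dax" inference can be sketched as a toy rule. This is purely illustrative (the object and word names are hypothetical, not from the study): a learner who already knows a name for one object assumes a novel word refers to the object it cannot yet name.

```python
# Toy sketch of mutual exclusivity (ME) reasoning, illustrative only.
# The learner already knows a name for one of the objects on the table.
known_names = {"ball"}  # objects whose names the learner already knows

def pick_referent(novel_word, objects):
    """Guess which object a novel word refers to.

    ME says: if an object already has a name, a new word probably
    doesn't name it — so prefer the object with no known name.
    """
    unnamed = [obj for obj in objects if obj not in known_names]
    return unnamed[0] if unnamed else objects[0]

print(pick_referent("dax", ["ball", "gizmo"]))  # picks the unfamiliar object
```

Presented with a familiar "ball" and an unfamiliar "gizmo," the rule resolves "dax" to the unfamiliar object, mirroring the children's behavior described above.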


The researchers wanted to explore a couple of ideas with their work. One was to investigate if deep learning algorithms trained using common learning paradigms would reason with mutual exclusivity. They also wanted to see if reasoning by mutual exclusivity would help learning algorithms in tasks that are commonly tackled using deep learning.

To carry out these investigations, the researchers first trained 400 neural networks to associate pairs of words with their meanings. The neural nets were then tested on 10 words they had never seen before. The networks predicted that the new words were likely to correspond to known meanings rather than unknown ones, suggesting that A.I. does not have an exclusivity bias. Next, the researchers analyzed the datasets used to train machine translation systems, which showed that an exclusivity bias would be beneficial to machines as well.
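The finding can be reproduced in miniature. The following is a simplified sketch, not the authors' actual setup: a minimal softmax classifier is trained to map 10 "words" to 10 "meanings," then probed with an 11th, never-seen word. An ME-biased learner would map the novel word to a novel meaning; the trained network instead favors an already-known one.

```python
# Minimal demonstration that a standard softmax classifier lacks an
# ME bias: a novel input gets mapped to a familiar output class.
import numpy as np

n_words, n_meanings, n_seen = 12, 12, 10
W = np.zeros((n_meanings, n_words))  # weights, zero-init for determinism
b = np.zeros(n_meanings)             # biases
lr = 0.5

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Train on word i -> meaning i for the first 10 pairs only;
# meanings 10 and 11 are never used as targets.
for epoch in range(200):
    for i in range(n_seen):
        x = np.zeros(n_words); x[i] = 1.0
        y = np.zeros(n_meanings); y[i] = 1.0
        p = softmax(W @ x + b)
        grad = p - y                   # cross-entropy gradient
        W -= lr * np.outer(grad, x)
        b -= lr * grad

# Probe with a novel word (index 10, never seen during training).
x_novel = np.zeros(n_words); x_novel[10] = 1.0
p_novel = softmax(W @ x_novel + b)
pred = int(p_novel.argmax())
print(pred < n_seen)  # the novel word maps to an already-known meaning
```

The intuition: the never-used meanings only ever receive "push down" gradient updates, so the network ends up preferring familiar meanings for any input, including novel ones — the opposite of what mutual exclusivity would predict.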

“Our results show that these characteristics are poorly matched to the structure of common machine learning tasks,” Gandhi continued. “ME can be used as a cue for generalization in common translation and classification tasks, especially in the early stages of training. We believe that exhibiting the bias would help learning algorithms to learn in faster and more adaptable ways.”

As Gandhi and Lake write in a paper describing their work: “Strong inductive biases allow children to learn in fast and adaptable ways … There is a compelling case for designing neural networks that reason by mutual exclusivity, which remains an open challenge.”

Luke Dormehl
Former Digital Trends Contributor