
A.I. researchers create a facial-recognition system for chimps

Kyoto University, Primate Research Institute

From unlocking smartphones to spotting criminals in crowds, there’s no shortage of reminders that facial recognition technology has gotten pretty darn good. Among humans, that is. But now researchers from the U.K.’s University of Oxford and Japan’s Kyoto University want to expand the tech’s capabilities — by making a facial recognition system that works with chimpanzees, too.

The A.I. system achieved an identification accuracy of 92.5%, and correctly determined a chimp's sex 96.2% of the time. Pitted against humans asked to identify chimps in 100 random still images, the system was 84% accurate; the humans managed exactly half as well, guessing the chimps' identities correctly just 42% of the time. But the biggest gain was speed: humans took 55 minutes to complete the task, while the machine took just 30 seconds.


As much as we’d love for this to be designed to let chimps use Face ID on the iPhone, it’s actually intended to help researchers who are tracking chimpanzees in the wild. Rather than having to manually tag animals, or recognize them in some other way, such tools could make it easier to study animals in their natural habitats.


“For my Ph.D. research, I have access to a large video archive of chimpanzees from Guinea in West Africa since 1988,” Dan Schofield, one of the researchers on the project, told Digital Trends. “This is a unique opportunity to study several generations of chimpanzees, but manually extracting information from over a thousand hours of footage is overwhelming and would be extremely time-consuming. To solve this problem we developed a face recognition model which can be applied directly to raw video footage to automatically detect, track, and recognize individuals in time and space. We used the output of this system to generate social networks to examine the social interactions of the group over many years. This can save countless hours and resources for researchers and conservationists.”
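The social networks Schofield describes are built by noting which individuals the recognition system spots together in the footage. A minimal sketch of that idea, counting pairwise co-occurrences per frame (the data format and chimp names here are illustrative, not the output of the authors' actual pipeline):

```python
from collections import defaultdict

def build_social_network(frame_detections):
    """Count how often each pair of individuals appears in the same frame.

    frame_detections: a list of sets, one per video frame, each holding the
    identities the recognizer reported for that frame (illustrative format).
    Returns a dict mapping sorted (name_a, name_b) pairs to co-occurrence counts.
    """
    edges = defaultdict(int)
    for ids in frame_detections:
        for a in ids:
            for b in ids:
                if a < b:  # count each unordered pair exactly once
                    edges[(a, b)] += 1
    return dict(edges)

# Toy example: three frames of recognized chimps
frames = [{"Jire", "Foaf"}, {"Jire", "Foaf", "Peley"}, {"Peley"}]
network = build_social_network(frames)
print(network[("Foaf", "Jire")])  # Foaf and Jire co-occur in 2 frames
```

In a real analysis the edge weights would then feed a graph library to study grouping patterns over the years of archive footage.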

The A.I. employs a deep neural network trained on 10 million face images of 23 individuals, drawn from more than 50 hours of footage. A dataset of that size is what made an accurate system possible. However, Schofield said that kind of information isn't always readily available.
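At its core, the recognition step is a classifier: given a face crop, pick which of the known individuals it belongs to. A stripped-down nearest-centroid sketch on toy embedding vectors gives the flavor (all names and numbers are illustrative; the actual system is a deep convolutional network trained end to end on the labeled frames, not this hand-rolled matcher):

```python
import math

def centroid(vectors):
    """Mean of a list of equal-length embedding vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def identify(embedding, centroids):
    """Return the identity whose centroid is closest in Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda name: dist(embedding, centroids[name]))

# Toy "training" embeddings for two individuals
# (in practice these would be produced by the trained network)
train = {
    "chimp_a": [[0.9, 0.1], [1.0, 0.0]],
    "chimp_b": [[0.1, 0.9], [0.0, 1.0]],
}
centroids = {name: centroid(vs) for name, vs in train.items()}

print(identify([0.8, 0.2], centroids))  # closest to chimp_a's centroid
```

The deep-learning version replaces the hand-built distance test with learned features, which is what lets it cope with blur, occlusion, and poor lighting in raw field video.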

“There are now large open-source datasets of human faces to train deep learning models, but this is not the case for other species,” he said. “For example, recognizing identities of chimpanzees is a challenge for the untrained human eye, and requires extensive training and expert knowledge. We provided the framework and tools to help researchers quickly label and analyze their own datasets. We hope this will drive the development of new recognition systems for other species.”

A paper describing the research was recently published in the journal Science Advances.

Luke Dormehl
Former Digital Trends Contributor