The wilderness is vast and varied, home to millions of animal species. For ecologists, identifying and describing those animals is key to successful research. That can prove to be a tall order — but artificial intelligence may be able to help.
In a new report out this week, researchers show how they trained a deep learning algorithm to automatically identify, count, and characterize animals in images. The system used photographs captured by motion-sensing camera traps, which snap pictures of the animals without seriously disturbing them.
“We have shown that we can use computers to automatically extract information from wildlife photos, such as species, number of animals, and what the animals are doing,” Margaret Kosmala, a research associate at Harvard University, told Digital Trends. “What’s novel is that this is the first time it’s been shown that it’s possible to do this as accurately as humans. Artificial intelligence has been getting good at recognizing things in the human domain — human faces, interior spaces, specific objects if well-positioned, streets, and so forth. But nature is messy and in this set of photos, the animals are often only partially in the photo or very close or far away or overlapping. As an ecologist, I find this very exciting because it gives us a new way to use technology to study wildlife over broad areas and long time spans.”
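The article doesn’t describe the network itself, but the basic shape of the task — one photo in, several labels out (species, how many animals, what they’re doing) — can be sketched with a standard image backbone and separate output heads. The sketch below is illustrative only; the class names, label sets, and choice of backbone are assumptions, not details from the study.

```python
# Illustrative sketch (not the authors' code): one CNN backbone with separate
# heads for species, animal count, and behavior. Label sets are made up.
import torch
import torch.nn as nn
from torchvision import models

class WildlifeNet(nn.Module):
    def __init__(self, n_species=48, n_counts=12, n_behaviors=6):
        super().__init__()
        backbone = models.resnet18()           # any image backbone would do
        backbone.fc = nn.Identity()            # keep the 512-d feature vector
        self.backbone = backbone
        self.species_head = nn.Linear(512, n_species)     # which animal
        self.count_head = nn.Linear(512, n_counts)        # how many (binned)
        self.behavior_head = nn.Linear(512, n_behaviors)  # e.g. standing, eating

    def forward(self, x):
        feats = self.backbone(x)
        return {
            "species": self.species_head(feats),
            "count": self.count_head(feats),
            "behavior": self.behavior_head(feats),
        }

model = WildlifeNet()
outputs = model(torch.randn(1, 3, 224, 224))   # one fake camera-trap frame
```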
The researchers used images captured and collected by Snapshot Serengeti, a citizen science project with stealth wildlife cameras spread throughout Tanzania. From elephants to cheetahs, Snapshot Serengeti has gathered millions of wildlife photographs. But the images themselves aren’t as valuable as the data contained within the frame, including details like the number and types of animals.
Automated identification and description have a lot of benefits for ecologists. For years, Snapshot Serengeti crowdsourced the task of describing wildlife images. With the help of some 50,000 volunteers, the group labeled over three million images. It was this treasure trove of labeled imagery that the researchers used to train their algorithm.
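Those volunteer labels are exactly what a supervised model needs. As a rough, hypothetical sketch, each labeled photo might be reduced to a row of image path, species, and count, then wrapped in a dataset that a classifier like the one sketched above could train on. The CSV columns and the count binning here are assumptions, not the project’s actual label format.

```python
# Illustrative only: turn volunteer labels (image path, species, count) into
# a PyTorch dataset for supervised training. File names and columns are made up.
import csv
import torch
from torch.utils.data import Dataset
from torchvision import transforms
from PIL import Image

class SnapshotLabels(Dataset):
    def __init__(self, csv_path, species_to_idx):
        with open(csv_path, newline="") as f:
            self.rows = list(csv.DictReader(f))   # e.g. columns: image,species,count
        self.species_to_idx = species_to_idx
        self.tf = transforms.Compose([
            transforms.Resize((224, 224)),
            transforms.ToTensor(),
        ])

    def __len__(self):
        return len(self.rows)

    def __getitem__(self, i):
        row = self.rows[i]
        image = self.tf(Image.open(row["image"]).convert("RGB"))
        species = self.species_to_idx[row["species"]]
        count = min(int(row["count"]), 11)        # cap rare large groups in one bin
        return image, torch.tensor(species), torch.tensor(count)
```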
Now, rather than turn to citizen scientists, researchers may be able to assign the laborious task to an algorithm, which can quickly process the photographs and label their key details.
“Any scientific research group or conservation group that is trying to understand and protect a species or ecosystem can deploy motion-sensor cameras in that ecosystem,” Jeff Clune, a professor of computer science at the University of Wyoming, said. “For example, if you are studying jaguars in a forest, you can put out a network of motion-sensor cameras along trails. The system will then automatically take pictures of the animals when they move in front of the cameras, and then the A.I. technology will count the number of animals that have been seen, and automatically delete all the images that were taken that do not have animals in them, which turns out to be a lot because motion-sensor cameras are triggered by wind, leaves falling, etcetera.”
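The screening step Clune describes boils down to scoring every frame with a trained animal-versus-empty classifier and discarding the negatives. A minimal sketch, assuming a saved binary model with a single-logit output and a folder of camera-trap photos (both are placeholders, not artifacts from the paper):

```python
# Sketch of the screening step: keep only frames a trained animal-vs-empty
# classifier scores above a threshold. Paths and the model file are hypothetical.
from pathlib import Path
import torch
from torchvision import transforms
from PIL import Image

tf = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
model = torch.load("animal_vs_empty.pt")   # hypothetical saved classifier
model.eval()

kept = []
with torch.no_grad():
    for path in Path("camera_trap_photos").glob("*.jpg"):
        x = tf(Image.open(path).convert("RGB")).unsqueeze(0)
        p_animal = torch.sigmoid(model(x)).item()   # assumes a single-logit output
        if p_animal >= 0.5:
            kept.append(path)   # wind- and leaf-triggered empty frames drop out

print(f"kept {len(kept)} frames with animals")
```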
A paper detailing the research was published this week in the journal Proceedings of the National Academy of Sciences.