
One fish, two fish: A.I. labels wildlife photos to boost conservation

The wilderness is vast and varied, home to millions of animal species. For ecologists, identifying and describing those animals is key to successful research. That can prove to be a tall order — but artificial intelligence may be able to help.

In a new report out this week, researchers show how they trained a deep learning algorithm to automatically identify, count, and characterize animals in images. The system used photographs captured by motion-sensing camera traps, which snap pictures of the animals without seriously disturbing them.
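
The team’s code isn’t reproduced here, but the general recipe, fine-tuning a pretrained convolutional network on labeled camera-trap photos, can be sketched in a few lines of PyTorch. The folder layout, model choice, and hyperparameters below are illustrative assumptions, not the researchers’ actual setup:

```python
# Illustrative sketch only: fine-tuning a pretrained CNN to label
# camera-trap images by species. Paths, model, and settings are
# hypothetical, not taken from the paper.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Standard ImageNet-style preprocessing for the pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Assumes one folder per species, e.g. data/train/zebra/*.jpg.
train_set = datasets.ImageFolder("data/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Start from ImageNet weights; swap in a head sized to the species list.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):  # a handful of epochs, just for the demo
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```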

“We have shown that we can use computers to automatically extract information from wildlife photos, such as species, number of animals, and what the animals are doing,” Margaret Kosmala, a research associate at Harvard University, told Digital Trends. “What’s novel is that this is the first time it’s been shown that it’s possible to do this as accurately as humans. Artificial intelligence has been getting good at recognizing things in the human domain — human faces, interior spaces, specific objects if well-positioned, streets, and so forth. But nature is messy and in this set of photos, the animals are often only partially in the photo or very close or far away or overlapping. As an ecologist, I find this very exciting because it gives us a new way to use technology to study wildlife over broad areas and long time spans.”

The researchers used images captured and collected by Snapshot Serengeti, a citizen science project with stealth wildlife cameras spread throughout Tanzania. From elephants to cheetahs, Snapshot Serengeti has gathered millions of wildlife photographs. But the images themselves aren’t as valuable as the data contained within the frame, including details like the number and type of animals.

Automated identification and description have a lot of benefits for ecologists. For years, Snapshot Serengeti crowdsourced the task of describing wildlife images. With the help of some 50,000 volunteers, the group labeled over three million images. It was this treasure trove of labeled imagery that the researchers used to train their algorithm.

Now, rather than turn to citizen scientists, researchers may be able to assign the laborious task to an algorithm, which can quickly process the photographs and label their key details.

“Any scientific research group or conservation group that is trying to understand and protect a species or ecosystem can deploy motion-sensor cameras in that ecosystem,” Jeff Clune, a professor of computer science at the University of Wyoming, said. “For example, if you are studying jaguars in a forest, you can put out a network of motion-sensor cameras along trails. The system will then automatically take pictures of the animals when they move in front of the cameras, and then the A.I. technology will count the number of animals that have been seen, and automatically delete all the images that were taken that do not have animals in them, which turns out to be a lot because motion-sensor cameras are triggered by wind, leaves falling, etcetera.”
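
Clune’s description of discarding empty frames boils down to running a binary animal-versus-no-animal classifier over every photo and deleting the ones that score below a threshold. Here is a minimal sketch of that filtering step, assuming such a classifier has already been trained and saved; the file names, folder, and cutoff are hypothetical, not the team’s actual system:

```python
# Illustrative sketch only: culling empty camera-trap frames with a
# trained animal/no-animal classifier. File names and the 0.5 cutoff
# are placeholders, not the researchers' actual pipeline.
from pathlib import Path

import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

# Rebuild the two-class network and load weights trained elsewhere;
# output index 1 is taken to mean "contains an animal".
model = models.resnet18()
model.fc = torch.nn.Linear(model.fc.in_features, 2)
model.load_state_dict(torch.load("animal_vs_empty.pt"))
model.eval()

kept, dropped = 0, 0
for path in Path("camera_trap_images").glob("*.jpg"):
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        prob_animal = torch.softmax(model(image), dim=1)[0, 1].item()
    if prob_animal < 0.5:  # likely wind, falling leaves, and so on
        path.unlink()      # delete the blank frame
        dropped += 1
    else:
        kept += 1

print(f"Kept {kept} images, deleted {dropped} empty frames.")
```

In practice the cutoff would be set conservatively, since deleting a frame that actually contains an animal is a worse mistake than keeping a blank one.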

A paper detailing the research was published this week in the journal Proceedings of the National Academy of Sciences.
