A.I. can remove distortions from underwater photos, streamlining ocean research

Light behaves differently in water than it does in air, and that behavior creates the blur and green tint common in underwater photographs, along with the haze that blocks out vital details. But thanks to research from an oceanographer and engineer, and a new artificial intelligence program called Sea-thru, that haze and those distorted colors could soon disappear.

Besides putting a downer on the photos from that snorkeling trip, the inability to get an accurately colored photo underwater hinders scientific research at a time when concern for coral and ocean health is growing. That’s why oceanographer and engineer Derya Akkaynak, along with Tali Treibitz of the University of Haifa, devoted their research to developing an artificial intelligence that can recover scientifically accurate colors while removing the haze in underwater photos.

As Akkaynak points out in her research, imaging A.I. has exploded in recent years. Algorithms have been developed that can tackle everything from turning an apple into an orange to reversing manipulated photos. Yet, she says, underwater imaging algorithms still lag behind, because water obscures many of the visual cues in a scene that such algorithms rely on.

When light travels through water, it is both absorbed and scattered. The scattering creates what’s called backscatter, a haze that prevents the camera from seeing the scene in full detail, while the absorption prevents colors from reproducing accurately underwater.
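The article doesn’t spell out the math, but the physics it describes is commonly written as an image formation model in which the camera records an attenuated version of the true scene color plus depth-dependent backscatter. Here is a minimal Python sketch of that model; the function name, the single per-channel attenuation coefficient, and the parameters are illustrative assumptions, not the researchers’ code:

```python
import numpy as np

def simulate_underwater(J, z, beta, B_inf):
    """Apply a simplified underwater image formation model.

    J     -- true scene color, array of shape (H, W, 3), values in [0, 1]
    z     -- per-pixel camera-to-scene distance in meters, shape (H, W)
    beta  -- per-channel attenuation coefficients, shape (3,)
    B_inf -- per-channel veiling light (water color at infinite distance), shape (3,)
    """
    # Fraction of the direct signal that survives the water column
    transmission = np.exp(-z[..., None] * beta)  # shape (H, W, 3)
    # The direct signal is dimmed, while backscatter (haze) grows toward B_inf with distance
    return J * transmission + B_inf * (1.0 - transmission)
```

Because attenuation is stronger for red light than for blue or green, distant objects lose their reds first, which is what produces the familiar blue-green cast.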

To tackle the problem, Akkaynak trained the software on sets of underwater images the team shot themselves using readily available gear: a consumer camera, underwater housing, and a color card. First, she would find a subject, looking in particular for coral with a lot of depth and dimension, since the farther away objects are underwater, the more they are obscured. She would then place the color card near the coral and photograph it from multiple distances and angles.

Using those images as a data set, the researchers then trained the program to analyze images mathematically, removing the backscatter and adjusting the color pixel by pixel. The resulting program, called Sea-thru, corrects the haze and restores color detail automatically. The software still requires multiple images of the same subject, because the process uses a known range map, an estimate of each pixel’s distance from the camera, to estimate and correct the backscatter. The researchers say, however, that the color card is no longer a necessity.
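The published Sea-thru method is more involved than this (it estimates separate, depth-dependent coefficients for the direct and backscattered signals), but the basic inversion the paragraph describes can be sketched as follows, continuing the hypothetical model above; this is an illustration, not the authors’ implementation:

```python
def recover_scene(I, z, beta, B_inf):
    """Invert the simplified model above, pixel by pixel, using the range map z.

    I -- the raw underwater photo, shape (H, W, 3), values in [0, 1]
    Illustrative single-coefficient inversion for demonstration only.
    """
    transmission = np.exp(-z[..., None] * beta)
    # Step 1: subtract the estimated depth-dependent haze (backscatter)
    direct = I - B_inf * (1.0 - transmission)
    # Step 2: divide out the attenuation to restore the original colors
    J = direct / np.clip(transmission, 1e-6, None)
    return np.clip(J, 0.0, 1.0)
```

This is also why the range map matters: without a distance estimate for every pixel, there is no way to know how much haze to subtract or how much color to restore.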

The resulting photos aren’t the same as what could be achieved with tools like Lightroom’s dehaze slider and color-correction controls. “This method is not Photoshopping an image,” Akkaynak told Scientific American. “It’s not enhancing or pumping up the colors in an image. It’s a physically accurate correction, rather than a visually pleasing modification.”

The team’s goal is to open up large volumes of underwater image data for research; without the program, they explain, much of the work that depends on accurate color and detail must be done manually, since too much is obscured in the raw photographs. “Sea-thru is a significant step towards opening up large underwater datasets to powerful computer vision and machine learning algorithms, and will help boost underwater research at a time when our oceans are [under] increasing stress from pollution, overfishing, and climate change,” the research paper concludes.
