
Using this A.I.-based healing brush, repairing an image is no biggie

Research at NVIDIA: AI Reconstructs Photos with Realistic Results

Photoshop’s healing brush can use surrounding pixels to repair an image or remove an object, but what if the surrounding pixels don’t hold enough data to fill in those holes? Researchers from Nvidia recently used artificial intelligence to fill in those gaps, creating a tool similar to the healing brush that can intelligently reconstruct missing pieces. The technique is called image inpainting for irregular holes using partial convolutions; here’s hoping Nvidia comes up with a nickname before making the tool widely accessible.


Nvidia isn’t the first to try to reinvent the healing brush using A.I., but the researchers say that earlier attempts left artifacts and blur. Rather than applying full convolutional filters across those holes, Nvidia created a partial convolution method: each layer conditions only on the valid pixels around the hole and renormalizes its output, producing a more natural-looking image without those artifacts. The program also updates the hole mask automatically from layer to layer.
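To make the idea concrete, here is a minimal sketch of a partial convolution layer in PyTorch, written from the description above rather than from Nvidia's released code: the convolution only sees pixels marked valid, the result is rescaled by how many valid pixels each window contained, and the mask shrinks for the next layer. The class name, layer sizes, and other details are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialConv2d(nn.Module):
    """Sketch of a partial convolution: convolve only valid pixels,
    renormalize by the valid-pixel count, and update the hole mask."""

    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding, bias=True)
        # Fixed all-ones kernel used only to count valid pixels per window.
        self.register_buffer(
            "ones_kernel", torch.ones(out_ch, in_ch, kernel_size, kernel_size)
        )
        self.window_size = in_ch * kernel_size * kernel_size
        self.padding = padding

    def forward(self, x, mask):
        # x: (B, in_ch, H, W) features; mask: same shape, 1 = valid, 0 = hole.
        raw = self.conv(x * mask)  # features computed from valid pixels only
        with torch.no_grad():
            valid = F.conv2d(mask, self.ones_kernel, padding=self.padding)
        # Renormalize by how many valid pixels each window contained, so
        # windows that straddle a hole are not dimmed by the zeros inside it.
        scale = self.window_size / valid.clamp(min=1)
        bias = self.conv.bias.view(1, -1, 1, 1)
        mask_out = (valid > 0).to(x.dtype)
        out = ((raw - bias) * scale + bias) * mask_out
        # A pixel counts as valid for the next layer once any valid input
        # pixel fell under its window, so the hole shrinks layer by layer.
        return out, mask_out
```

Stacking layers like this lets the valid region grow with depth, which is why even fairly large holes can eventually be filled with plausible content.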


Unlike the traditional healing brush, which only uses surrounding pixels to decide what to fill the gap with, Nvidia trained its tool on three different sets of images, amounting to thousands of training photos. The researchers randomly applied masks to those photos to intentionally remove sections of each image. By showing the computer both the damaged version and the original, the program could learn how to fill in those gaps. “Our model can robustly handle holes of any shape, size, location or distance from the image borders,” the researchers wrote. “Further, our performance does not deteriorate catastrophically as holes increase in size.”
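For illustration, here is a hedged sketch of how such training pairs might be assembled, assuming OpenCV and NumPy: random thick strokes are cut out of clean photos so the network sees both the damaged input and the original it should reconstruct. The stroke counts, thicknesses, and function names are assumptions for the example, not the paper's exact mask recipe.

```python
import cv2
import numpy as np

def random_irregular_mask(height, width, max_strokes=8, max_thickness=25, rng=None):
    """Carve random thick strokes out of an all-valid mask (1 = keep, 0 = hole)."""
    rng = rng or np.random.default_rng()
    mask = np.full((height, width), 255, dtype=np.uint8)
    for _ in range(int(rng.integers(1, max_strokes + 1))):
        # Each stroke is a thick line between two random points.
        x0, y0 = int(rng.integers(0, width)), int(rng.integers(0, height))
        x1, y1 = int(rng.integers(0, width)), int(rng.integers(0, height))
        thickness = int(rng.integers(5, max_thickness))
        cv2.line(mask, (x0, y0), (x1, y1), color=0, thickness=thickness)
    return (mask > 0).astype(np.float32)

def make_training_pair(image):
    """image: float32 (H, W, 3) array in [0, 1]; returns (damaged, mask, target)."""
    h, w, _ = image.shape
    mask = random_irregular_mask(h, w)[..., None]  # (H, W, 1), broadcasts over RGB
    damaged = image * mask                         # zero out the holes
    return damaged, mask, image                    # network input, mask, ground truth
```

Training then amounts to asking the network to reconstruct the original image from the damaged version and its mask, and penalizing the difference.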

While more robust than previous attempts, the researchers said that the tool struggles with the largest holes and images without a lot of structure.

The research could bring some significant changes to photo editing if the tool makes its way into an image editor. Nvidia’s program, for example, could replace an eye in an old, damaged portrait, even without the other eye to replicate. The resulting eye, of course, isn’t the same eye — in one example, the replaced eye is an entirely different color. In the demonstration video, the program also appears to give a young woman and an older gentleman the same pair of eyes. 

While giving a person different features drawn from a database of other images raises ethical considerations for portraits, the concept could speed up the process of removing distractions from other types of images. The tool could potentially remove objects from photographs, such as power lines or signs in the background, with more accuracy than current tools.

Hillary K. Grigonis