Like the Neural Photo Editor project from researchers at the University of Edinburgh we covered earlier this week, the Adobe/Berkeley project uses adversarial networks to train the system to make realistic edits from minimal user input. Three brush tools — coloring, sketching, and warping — allow different aspects of the image to be manipulated. The demonstration video above shows how a user can simply paint a stroke of blue over a black shoe and the system automatically applies a blue shade to the entire shoe.
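The core idea behind this kind of editing is that the system never paints pixels directly: it searches the generative network's latent space for a code whose output satisfies the user's strokes, so the result stays on the "manifold" of realistic images. A minimal sketch of that idea, with a stand-in linear "generator" and finite-difference gradient descent (all names here are illustrative, not from the researchers' code):

```python
# Toy illustration of constraint-guided latent optimization, the idea behind
# GAN-based editing: rather than modifying pixels, search for a latent code z
# whose generated image matches the user's sparse brush strokes.
# toy_generator stands in for a trained generative network.

def toy_generator(z):
    """Stand-in 'generator': maps a 2-D latent code to a 4-'pixel' image."""
    return [z[0] + z[1], z[0] - z[1], 2 * z[0], 0.5 * z[1]]

def edit_loss(z, targets):
    """Squared error on only the pixels the user painted (sparse constraints)."""
    img = toy_generator(z)
    return sum((img[i] - t) ** 2 for i, t in targets.items())

def optimize_latent(targets, z=(0.0, 0.0), lr=0.05, steps=500, eps=1e-4):
    """Nudge z by finite-difference gradient descent until the generated
    image matches the user's strokes as closely as possible."""
    z = list(z)
    for _ in range(steps):
        grad = []
        for j in range(len(z)):
            zp = z[:]
            zp[j] += eps
            grad.append((edit_loss(zp, targets) - edit_loss(z, targets)) / eps)
        z = [zj - lr * g for zj, g in zip(z, grad)]
    return z

# The "user" paints pixel 0 toward 1.0 and pixel 2 toward 2.0; the untouched
# pixels are filled in by the generator, keeping the whole image coherent.
strokes = {0: 1.0, 2: 2.0}
edited = toy_generator(optimize_latent(strokes))
```

Because every candidate image comes from the generator, the unpainted pixels change coherently along with the painted ones, which is why a single blue stroke can recolor the whole shoe.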
The warping tool lets a user alter the shape of an object in ways subtle or extreme, while keeping it within the bounds of realism. In the video, it was used to make a shoe slimmer and to change a handbag into a completely different shape. Fine details can be added with the sketching tool.
One of the tool’s most impressive features, however, is its ability to generate new images on the fly. A user simply draws an outline of a shape (in this case, another shoe) and the system generates images that fit the mold. But it can also generate full scenes: The presenter in the video created a mountain landscape painting with just a few brush strokes of different shapes and colors.
As the researchers explain in their paper, they envision this tool speeding up workflows for image retouchers and opening up the world of image manipulation to people without advanced artistic skills. Still, it will likely be some time before the technology makes its way into an Adobe Creative Cloud update: as we've seen with AI-based image generators before, output resolution appears to be quite limited. One day, however, maybe we'll all be using this or a similar system to design our own sneakers from our laptops.