In what has become a popular tradition at the annual Adobe MAX conference, Adobe showed off several sneak previews of in-development technologies that may make their way into future versions of Photoshop, Premiere Pro, Illustrator, and more. For example, Character Animator, which recently came out of beta, was unveiled during a previous MAX show. While all of the projects are interesting in their own right as examples of cutting-edge software tech, a few stand out for photographers and video editors.
Powered by the Adobe Sensei AI engine, Project Scribbler can take a black-and-white photograph and automatically colorize it with surprisingly realistic results. The program was trained on tens of thousands of images so it can recognize facial features in a monochrome image and apply appropriate colors to different regions of a face, from the hair to the skin to the lips and teeth.
Although Project Scribbler is currently limited to faces — it can’t colorize full-body portraits — it is not limited to photos; it can colorize sketches, as well. In a live demonstration, Adobe showed how it can help an artist ideate a character or do a quick mockup to show a client before diving in and finishing the color by hand.
Sensei was definitely a running theme at MAX this year, and two additional projects are using it to provide a much more robust alternative to Photoshop’s Content Aware Fill option for removing and replacing objects in a scene. Project Scene Stitch draws on deep learning and semantic cues to replace a photo’s foreground with one built from Adobe Stock images, while Project Deep Fill applies similar technology to replace smaller objects within an image. Deep Fill can also reshape objects based on user input, which Adobe demonstrated by sketching a heart-shaped line beneath a rock arch, causing the arch to conform to the shape of the sketch.
For video editors, Project Cloak is essentially Content Aware Fill applied to video. It automatically removes objects from a video shot without the user needing to clone out the object on a frame-by-frame basis, and it does so in a way that is much more accurate than per-frame editing.
In a series of examples, Adobe demonstrated the impressive range Project Cloak offers, from removing a lamppost to erasing two people from a shot where both the people and the camera were moving. If this technology makes its way into a shipping product (our guess is it would end up in After Effects), it would undoubtedly be a game changer for many editors.
For immersive video creators, Adobe also showed off two projects for working in 360-degree space. Project Sidewinder builds a depth map from stereoscopic 360 video, which creates a convincingly real three-dimensional effect and allows the viewer to change perspective, moving from side to side or up and down rather than simply rotating in place. When it comes to audio, Project SonicScape offers a visual way to see and reposition audio sources within the spherical space.
Adobe showed off 11 development projects in total that ran the gamut from photography and video to design, 3D modeling, and even data visualization. As with past Adobe sneaks, none of the technology demonstrated at MAX is guaranteed to be incorporated into a commercially available product, but the projects do offer a very real glimpse of what Adobe is exploring and the types of tools we can expect to see in the not-too-distant future.