Adobe’s artificial intelligence is already introducing features that streamline the creative process, but a look at what’s next for Adobe Sensei could push the technology beyond just time-saving tools. During Adobe Max 2019, Adobe unveiled several sneak peeks at what its software engineers are developing next with the company’s artificial intelligence, Adobe Sensei. From creating a moving photo in one click, to converting a recorded voice into a musical instrument, to designing an animation that reacts to real-time tweets, creatives could soon have some crazy new tools inside the Creative Cloud.
Project Moving Stills
Project Moving Stills converts a still image into an animation, but unlike a cinemagraph, Moving Stills creates realistic 3D camera movements in one click. Demonstrated on stage, the software uses A.I. to move into the scene, creating an effect that looks more like a videographer moving through the image than a boring 2D slideshow transition.
Adobe Sensei understands how objects are arranged in a 3D space, Adobe explained, and uses that information to not only create a 3D-motion effect, but to also determine the best effect for that scene. The setting can be applied with one click, but the software also includes a handful of tools to change the 3D movement, from moving into the scene to panning up and down. The software also allows for custom camera movement by selecting a starting and ending point.
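Adobe hasn’t described how the feature is implemented, but the custom start-and-end camera movement can be pictured as interpolating a virtual camera between two user-chosen positions across the clip’s frames. A minimal sketch, assuming simple linear interpolation over hypothetical 3D coordinates:

```python
# Toy sketch of moving a virtual camera from a user-chosen start point
# to an end point over a clip -- an illustration, not Adobe's code.

def camera_path(start, end, frames):
    """Linearly interpolate a 3D camera position over `frames` steps."""
    return [
        tuple(s + (e - s) * t / (frames - 1) for s, e in zip(start, end))
        for t in range(frames)
    ]

# Dolly the camera 2 units forward along z in five equal steps.
path = camera_path((0.0, 0.0, 0.0), (0.0, 0.0, 2.0), 5)
# → [(0.0, 0.0, 0.0), (0.0, 0.0, 0.5), (0.0, 0.0, 1.0),
#    (0.0, 0.0, 1.5), (0.0, 0.0, 2.0)]
```

In the real tool the per-frame rendering would also need the scene’s depth information, which is what Sensei’s understanding of the 3D arrangement would supply.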
Moving Stills can also apply effects to multiple images at once, creating a photo slideshow that makes the objects in the images pop as if shot with a 3D camera instead of the boring traditional zoom-pans on stills. (Sorry, Ken Burns.)
The popularity of cinemagraphs and GIFs has inspired several new tools like Plotagraph, but instead of animating an object in the photo, Moving Stills appears to take a step into the photo. As a sneak peek, Adobe hasn’t yet shared when Moving Stills will actually launch, or whether it will launch as a stand-alone app or as part of existing software.
Project Kazoo
Don’t know how to play an instrument or sing, but want to create your own audio? A.I. may soon be able to help with that. Project Kazoo is a program that turns recordings of your voice into notes on an instrument — or even notes from a soprano singer.
Demonstrated on stage at Max by Adobe’s Zeyu Jin, the software first takes a recording and arranges that audio as notes on a scale, recording each note’s pitch and duration. Users then select an instrument from a drop-down menu and the program replays those notes in that instrument. Besides working with voice, the program can also turn a recording of one instrument into another. A transpose slider also allows adjusting the audio to a higher or lower pitch.
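Adobe hasn’t explained Kazoo’s internals, but the two on-stage steps — snapping detected pitches to notes and shifting them with the transpose slider — map to standard music math. A hedged sketch, assuming an equal-tempered scale and MIDI note numbering (A4 = 440 Hz = MIDI 69):

```python
import math

A4 = 440.0  # reference pitch in Hz (standard concert tuning, assumed here)
NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def nearest_note(freq):
    """Snap a detected frequency to the nearest equal-tempered note name."""
    midi = round(69 + 12 * math.log2(freq / A4))  # 69 = MIDI number of A4
    return NAMES[midi % 12] + str(midi // 12 - 1)

def transpose(freq, semitones):
    """Shift a frequency up or down by a number of semitones."""
    return freq * 2 ** (semitones / 12)

nearest_note(440.0)   # → 'A4'
nearest_note(261.63)  # → 'C4' (middle C)
transpose(440.0, 12)  # → 880.0, one octave up
```

The actual prototype of course also has to extract pitch from a noisy recording and resynthesize the result in the chosen instrument’s timbre, which is where the A.I. does the heavy lifting.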
Besides helping the non-musically inclined create their own audio, Adobe says the prototype can also be used to create effects for cartoon characters, like imagining what a cartoon violin’s laugh would sound like.
Brush Bounty
What if Photoshop brushes could paint motion? That’s the idea behind Brush Bounty, a tool that allows animators to paint in effects that would otherwise be time-consuming or even impossible to animate. Adobe’s Fabin Rasheed demonstrated Project Brush Bounty at Max, including brushes that could paint in rain, wind-blown hair, a sparkling night sky, or a glowing orb.
Besides just saving animators from individually creating each drop of rain and each strand of hair, Brush Bounty can also tie those animations into locations or even tweets. Adding a hashtag to the project allows the animation to react in real time to any tweets using that hashtag. For example, on stage, Rasheed created a superhero with a glowing orb that increased in size and intensity with each tweet of #BrushBounty.
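Adobe didn’t detail how the tweet hookup works, but the reactive behavior can be imagined as a simple mapping from an event count to animation parameters. A toy sketch with entirely hypothetical names and scaling values:

```python
def orb_state(tweet_count, base_size=10.0, max_size=100.0):
    """Grow the orb's size and glow intensity with each matching tweet,
    saturating at a cap so the animation stays on screen.
    All constants here are made up for illustration."""
    size = min(base_size + 2.0 * tweet_count, max_size)
    intensity = min(0.2 + 0.01 * tweet_count, 1.0)
    return size, intensity

orb_state(0)     # → (10.0, 0.2): the idle orb
orb_state(5)     # size 20.0, intensity 0.25 after five tweets
orb_state(1000)  # → (100.0, 1.0): fully saturated
```

In the real tool, `tweet_count` would be fed by a live stream of tweets matching the project’s hashtag, with the renderer re-reading the state each frame.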
The animation can also be tied to the viewer’s location, such as matching the weather in the animation to the local conditions. Another sneak peek showed the tool changing the direction of the wind in the animation based on how the viewer tilted the smartphone.
The files can be exported as videos or GIFs, along with web elements for the interactive animations. Like the other sneak peeks, Adobe hasn’t shared just when the tool will launch.
Project Fastmask
For video editors, creating masks is a time-consuming task, particularly in scenarios with so much movement that the auto mask options don’t work. Project Fastmask is an A.I.-powered tool that masks out moving subjects — and even works after they leave the frame.
The person (or animal) is masked out by placing a handful of boundary points in the first frame. Clicking propagate will then adjust those boundary points for the next frame, continuing through the end of the clip and leaving a well-masked character for further adjustments.
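Adobe hasn’t said how the propagation step works internally; one common approach is to shift each boundary point by a motion estimate between frames. A minimal sketch, assuming the per-frame motion vector is already known (in a real tracker it would come from optical flow or a learned model):

```python
def propagate(points, motion):
    """Shift each boundary point by an estimated per-frame motion vector.
    `motion` is a given (dx, dy) here purely for illustration."""
    dx, dy = motion
    return [(x + dx, y + dy) for x, y in points]

# Four boundary points roughly outlining a subject in frame 1.
frame1 = [(10, 10), (30, 10), (30, 40), (10, 40)]
frame2 = propagate(frame1, (5, -2))  # subject moved right and slightly up
# → [(15, 8), (35, 8), (35, 38), (15, 38)]
```

The hard parts Fastmask’s A.I. would handle — estimating per-point motion, deforming the boundary as the subject changes shape, and continuing the mask after the subject leaves the frame — are exactly what this sketch leaves out.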
Project Smooth Operator
Vertical videos are a headache for creatives shooting a video for multiple platforms (we’re looking at you, IGTV). Project Smooth Operator uses A.I. to automatically crop horizontal videos to vertical ones — without leaving the subject behind.
Smooth Operator uses A.I. to analyze the video and determine the most important parts. The tool will then keep those elements in the frame using the selected aspect ratio. If the subject moves, the crop will follow in a manner that feels similar to real panning. Besides converting from horizontal to vertical, the tool can also crop to other, less drastic aspect ratios.
Even more impressive, the demonstration included a video with two different subjects, a dog and its owner playing fetch. Smooth Operator panned between the two subjects, deciding where to point the crop based on the action in the video.
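The “feels like real panning” behavior can be illustrated with a standard trick: instead of snapping the crop window to the subject each frame, ease it toward the subject a fraction at a time (exponential smoothing). A hedged sketch, with a made-up one-dimensional subject track:

```python
def smooth_pan(subject_x, alpha=0.3):
    """Ease the crop window's horizontal center toward the subject each
    frame, so the crop moves like a deliberate pan instead of jumping.
    `alpha` controls how aggressively the crop catches up (assumed value)."""
    center = subject_x[0]
    path = [center]
    for x in subject_x[1:]:
        center += alpha * (x - center)  # move a fraction toward the subject
        path.append(center)
    return path

# Subject jumps from x=100 to x=200; the crop eases over several frames
# rather than cutting instantly.
smooth_pan([100, 200, 200, 200])
```

Smooth Operator’s A.I. would additionally have to decide *which* subject to follow, as in the dog-and-owner demo; this sketch only shows why the resulting motion reads as a pan.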
Project Fontphoria
Getting just the right font is often a struggle for designers, but A.I. will soon be able to generate a font from an image — including characters that weren’t actually in the image. Fontphoria generates a font based on an image of text, such as a photo of hand-lettering or the font from a vintage poster. The program can similarly fill in missing fonts when you open a document that uses typefaces you don’t have installed, applying the characteristics of the existing characters to generate a complete font.
A lens mode will allow Fontphoria to preview the font on existing text, using augmented reality to replace the existing text with the new font. Another feature allows custom modifications to be applied to all the characters at once, instead of manually applying special effects to each letter.
Adobe’s list of sneak peeks also included Fantastic Fold, a design program for packaging, Project Model Morph for manipulating 3D objects inside (eventually) Adobe Dimension, and Project Good Bones, a tool allowing for shape-aware editing of vector graphics. Adobe also offered a sneak at Project Waltz, which uses a smartphone to take photos or video from inside a 3D project.
As sneak peeks, Adobe hasn’t shared when or even how the tools will arrive; some will likely be stand-alone programs while others may be integrated into existing software.