
A.I. will cause a tectonic shift in human creativity, but don’t be scared yet

Earlier this month, an autonomous test vehicle veered out of its lane to avoid a merging car, only to hit a motorcycle in the lane it moved into. If this is all you know about the story, it sounds like the kind of moral choice conundrum that often comes up in discussions about artificial intelligence behavior: between two bad outcomes, how does the machine decide which course to pursue? Do you swerve to avoid hitting the pedestrian crossing the street if it means running over the cyclist in the bike lane?

Only, that’s not at all what happened. The vehicle, operated by Waymo, had a human “safety driver” behind the wheel, who had taken manual control of the car moments before the accident occurred. Reacting to the sudden movement of the merging car, the driver steered to avoid it and simply did not see the motorcyclist in the adjacent lane. Had the driver done nothing, it is likely the autonomous vehicle would have avoided both potential accidents.


That artificial intelligence and automation will eventually make our roads safer is now widely accepted, even if many of us find it difficult to fully embrace the idea of driverless cars. Translating language, buying and selling stocks, forecasting the weather — these are all areas where A.I.’s contributions are of obvious benefit. But the rise of A.I. isn’t happening only where human error is common; far from it. A.I. has already been trained to do, or at least mimic, one of the very things that make us human: our creativity.

A helping, robotic hand

In October, an A.I. painting sold at auction for over $400,000. The A.I., used by French art collective Obvious, was trained on 15,000 portraits made between the 14th and 20th centuries, studying their styles and blending them into one of its own. This is creativity by brute force. A machine can’t feel, but it also doesn’t sleep; give it enough data to crunch, and it can give you back something that genuinely appears to be creative.

The A.I. painting at Christie’s auction. Timothy A. Clary/Getty Images

Understandably, this type of A.I. may worry working creatives, but even as machines write their own scripts and produce entire albums, they won’t actually be able to replace human artists anytime soon. And as the novelty wanes, it’s doubtful that other A.I.-produced artworks will earn the same acclaim the Obvious piece did.

Fortunately, “friendlier” A.I.s are already here, not to do our work for us, but to make our jobs easier. Artificial intelligence has taken center stage for the past two years at Adobe MAX, the annual show and conference put on by the company behind Photoshop, Lightroom, After Effects, and many other creative applications. Adobe’s A.I. engine is named Sensei, and it now powers a number of tools throughout the Creative Cloud suite.


In an interview with Digital Trends at MAX this year, Tom Hogarty, senior director of Digital Imaging at Adobe, likened the arrival of A.I. to the move from PC to mobile devices as creative tools. Adobe had just shown off Photoshop for iPad and Premiere Rush, a multi-device video editing app.

“That was a seismic refocusing of resources and priorities,” Hogarty told Digital Trends. “I think the shift to A.I. and ML [machine learning] is an equal magnitude tectonic shift in the industry right now.”

Adobe envisions Sensei as filling the gap between human and machine, making a task that is conceptually simple but mechanically difficult as easy to pull off as it is to think about.

People are innately good at certain tasks where machines traditionally perform poorly, such as recognizing objects in a photograph. Computers, on the other hand, are great at cataloging, altering, or removing and replacing those objects — but a human must clearly define them first, either by adding keywords in the case of cataloging, or by establishing the boundaries of an object through the selecting and masking process.

These mechanical tasks often require painstaking attention to detail and take a frustratingly long time for a human to complete. Adding keywords, a prerequisite for organizing images based on their content, is such a daunting process that few photographers reliably do it. Lightroom Product Manager Josh Haftel knows the struggle well.

“You as a human being will probably be a lot better [than a computer] at being able to say, ‘chair, camera, phone, sunglasses, laptop,’ but it’s going to take you forever,” Haftel told Digital Trends.


But what if the computer were as good at adding keywords to photos as you are? Or what if removing an object required little more than clicking on it? With the A.I.-enabled search in Adobe Lightroom CC and the Select Subject tool in Photoshop CC, that dream is just about a reality. Those tools are not yet 100-percent accurate, but they already give creatives a fast head start on otherwise time-consuming tasks.
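For the technically curious, here is a rough idea of how automatic keywording can work. This is a minimal sketch using an off-the-shelf image classifier in Python (PyTorch and torchvision assumed), not a look inside Sensei; the function and file names are made up for illustration.

```python
# A minimal sketch of automatic keywording with a pretrained classifier.
# Illustrative only; this is not how Adobe Sensei works internally.
from PIL import Image
import torch
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.IMAGENET1K_V2
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()
labels = weights.meta["categories"]

def suggest_keywords(path, top_k=5):
    """Return the top-k predicted labels for one photo as keyword suggestions."""
    img = Image.open(path).convert("RGB")
    batch = preprocess(img).unsqueeze(0)            # shape: (1, 3, 224, 224)
    with torch.no_grad():
        probs = model(batch).softmax(dim=1)[0]
    top = probs.topk(top_k)
    return [(labels[int(i)], float(p)) for p, i in zip(top.values, top.indices)]

# e.g. suggest_keywords("beach.jpg") might return
# [("seashore", 0.62), ("sandbar", 0.11), ...]
```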

While Sensei-powered search was in Lightroom CC from the start, it made a big leap forward this year with the addition of facial recognition. Sensei can detect and organize people — and other objects — at a rate of tens of thousands of images per second. The potential time savings are massive.

Adobe’s A.I. efforts also go beyond still images. The upcoming Content-Aware Fill tool in After Effects will actually remove an object from every frame of video and seamlessly fill in the background with minimal user input. Seeing this in action is a bit mind-boggling, like watching a magician pull a rabbit out of a hat — you know there’s an explanation for it, but you can’t figure it out.
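To get a feel for the underlying problem, here is a crude, classical approximation in Python with OpenCV: given a mask of the unwanted object, fill the masked pixels from their surroundings in every frame. Adobe’s tool is far more sophisticated and reasons across frames; the file names below are hypothetical.

```python
# A crude per-frame object removal sketch using classical inpainting.
# Not Adobe's algorithm; it only shows the basic "fill from surroundings" idea.
import cv2

cap = cv2.VideoCapture("input.mp4")                           # hypothetical clip
mask = cv2.imread("object_mask.png", cv2.IMREAD_GRAYSCALE)    # white = remove

fps = cap.get(cv2.CAP_PROP_FPS)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter("output.mp4", cv2.VideoWriter_fourcc(*"mp4v"),
                      fps, (width, height))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # fill the masked region using nearby image content (Telea's method)
    filled = cv2.inpaint(frame, mask, 3, cv2.INPAINT_TELEA)
    out.write(filled)

cap.release()
out.release()
```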

The tip of the iceberg

A.I. isn’t magic; it’s science. Even so, nobody really knows exactly how A.I. does what it does, and that’s a potentially terrifying premise. The computer trains itself; a human is merely required to give it the initial training data set. What’s interesting, if not exactly surprising, is that it’s in that training data that problems often arise, in the form of human bias leaking into the machine. If the data is biased, the resulting algorithm could make decisions that are inaccurate, or even sexist or racist.
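One simple, if incomplete, safeguard is to audit the training data before a model ever sees it. The sketch below just counts how often each group appears in a hypothetical data set; the record format and the 20-percent threshold are made up for illustration.

```python
# A tiny, illustrative audit of a training set for representation bias.
# The dataset structure and field names are hypothetical; the point is
# simply to count how often each group appears before training.
from collections import Counter

# hypothetical training records: (image_path, label, group) tuples
training_data = [
    ("img_001.jpg", "portrait", "group_a"),
    ("img_002.jpg", "portrait", "group_a"),
    ("img_003.jpg", "portrait", "group_b"),
    # ... thousands more in a real data set
]

group_counts = Counter(group for _, _, group in training_data)
total = sum(group_counts.values())

for group, count in group_counts.most_common():
    share = count / total
    flag = "  <-- underrepresented?" if share < 0.2 else ""
    print(f"{group}: {count} samples ({share:.0%}){flag}")
```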


“It’s going to be really difficult to not have bias, it’s always going to be there,” Adobe VP of Experience Design Jamie Myrold told the press at MAX 2018. “But it’s something that we do definitely focus on, and it’s another skill that designers are going to have to consider as something that they definitely own, and not just sort of allow the black box of the algorithm to frighten them.”

As frightening as it can be, this unknowable nature of A.I. is also what makes it so exciting. The potential for A.I. to solve problems is nearly limitless, and so long as we have measures in place to identify and correct for biases, it can do a lot of good while making our lives easier.

For creatives, particularly those making their money from creative pursuits, concern arises when an A.I. can adequately imitate a human. Why hire a graphic artist to design you a new logo if you can just plug some parameters into a computer and let it spit one out that’s perfectly fine?

We are still a ways away from that reality, but it’s not hard to look at where we are now and extrapolate that scenario as an eventuality. Already, A.I. in Adobe Lightroom can automatically enhance photos — lifting shadows, recovering highlights, adjusting exposure and saturation — with surprising adeptness.
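For a sense of what “auto” adjustments involve, here is a rough, non-ML stand-in using Pillow in Python: global contrast, exposure, and saturation tweaks. Lightroom’s Auto is driven by machine learning and is considerably smarter; the file names here are hypothetical.

```python
# A very rough stand-in for automatic photo enhancement using classical,
# non-ML global adjustments. Illustrative only, not Lightroom's method.
from PIL import Image, ImageOps, ImageEnhance

img = Image.open("photo.jpg")                     # hypothetical input

img = ImageOps.autocontrast(img, cutoff=1)        # clip 1% extremes, stretch tones
img = ImageEnhance.Brightness(img).enhance(1.05)  # nudge overall exposure
img = ImageEnhance.Color(img).enhance(1.10)       # mild saturation boost

img.save("photo_auto.jpg")
```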


While Adobe’s implementation of this is aimed at giving photographers a solid starting point for additional editing, other developers have already gone beyond that. Skylum, previously Macphun, has multiple A.I.-powered adjustments in its Luminar photo editing software to do everything from automatically enhancing skies to adding realistic sun rays to an image. You can now dramatically alter a photograph with a bare minimum of photo retouching knowledge.

This doesn’t necessarily signal trouble for working creatives. We can draw comparisons here to the rise of digital photography, home PC video editing, smartphones, and any number of other technologies that drastically lowered the bar for entry to professional content production. While creative industries have been remixed time and time again, they have always survived. Whether they are better or worse for it depends on who you ask, but the quantity and diversity of creative content have never been higher than they are now.

And despite its successes, A.I. still faces some significant challenges. Currently in technology preview, Adobe Lightroom’s Best Photos feature is another Sensei-powered technology with huge time-saving potential. It analyzes your images across a variety of parameters to automatically show you the best ones, while also taking into account your manual ratings. Even in its early state, it works impressively well, but it also displays the current limits of A.I.


“Where machine learning, at least today, fails, is understanding emotional context,” Haftel explained. “So the machine doesn’t know that that really dark, grainy picture is a picture of your grandma and it’s the last photo you have of her. And it won’t ever be able to tell that.”

Haftel was quick to add, “I shouldn’t say never — never say never — but at least it can’t do that today.”

Does A.I. have limits? Will we ever reach a plateau in A.I. development? Nvidia’s Andrew Page, product manager in the company’s Media and Entertainment Technologies division, doesn’t think so. Nvidia servers power all of the Adobe Sensei training, and the company’s latest RTX graphics cards include tensor cores built specifically for accelerating A.I. commands. Nvidia clearly sees A.I. playing a huge role in its future.

“We’re still in the infancy of [A.I.],” Page told Digital Trends. “Since the computer is kind of teaching itself how to do stuff, there’s really never a measure of done. Just like us as humans, we’re never done learning. I think we’re seeing just the tip of the iceberg of what A.I. can do for creatives, or for other industries, as well.”

One potential shift is the move from server-trained A.I. to locally trained A.I., which would be better able to respond to an individual user’s unique needs or artistic style. When machine learning can be done on a home PC rather than requiring a data center, it will open up new avenues for A.I. development. For now, the computational requirements and sheer size of the training data sets make local training difficult for all but the simplest tasks, but this will likely change in time.
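What might that personalization look like in practice? Here is a minimal sketch, assuming photo embeddings have already been extracted by a pretrained network: train a small model on a single user’s own picks, entirely on their machine. The data below is random stand-in data, not Adobe’s approach.

```python
# A minimal sketch of "local" personalization: a small model trained on one
# user's own choices, on their own computer. Embeddings and labels here are
# random placeholders purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# hypothetical stand-ins: 500 photos, 512-dim embeddings, 1 = user starred it
embeddings = rng.normal(size=(500, 512))
user_liked = rng.integers(0, 2, size=500)

taste_model = LogisticRegression(max_iter=1000)
taste_model.fit(embeddings, user_liked)           # trains in seconds on a laptop

# score a new batch of photos by how well they match this user's past choices
new_embeddings = rng.normal(size=(10, 512))
scores = taste_model.predict_proba(new_embeddings)[:, 1]
print(scores.round(2))
```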

How this all plays out may end up changing our very definition of creativity. As Adobe’s Haftel put it, without the requisite grind of using software manually to make art, “We can focus on the next level of creativity. We don’t know what that’s going to be, but our job at Adobe is to continuously support it and empower that.”

Daven Mathies
Former Digital Trends Contributor
Daven is a contributing writer to the photography section. He has been with Digital Trends since 2016 and has been writing…