
A.I.-powered website creates freakishly lifelike faces of people who don’t exist


Have you seen this man? No, he’s not a missing person. Or at least not in any conventional sense. The face in question was generated by an artificial intelligence on the new website ThisPersonDoesNotExist.com. While it’s been clear for quite some time that modern A.I. is getting pretty darn good at generating convincing human faces, the site is a reminder of just how far we’ve come from the uncanny valley effect seen in movies like 2004’s The Polar Express.

The site is the work of Philip Wang, a software engineer at Uber. Wang says the idea for the project started in 2014 with a conversation with Ian Goodfellow, a deep-learning research scientist, currently at Google Brain, who introduced the concept of a generative adversarial network (GAN). They discussed the notion of pitting two neural networks against one another: one designed to generate new images, and the other to figure out which images are computer-generated and which are real. Over time, the “generator” network becomes adept at creating images so realistic they can fool the “discriminator.” (If that sounds familiar, it’s because the same technology was behind last year’s art-generating A.I., which created a painting that sold for big bucks at a Christie’s auction.)
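The adversarial setup described above can be sketched in a deliberately tiny toy example. This is not StyleGAN or Wang’s code, just the two-network idea in miniature: the “real data” here is a 1-D Gaussian, the “generator” is a single learned shift applied to noise, and the “discriminator” is a logistic classifier. All names and numbers are illustrative.

```python
import numpy as np

# Toy 1-D GAN sketch. Real data ~ N(4, 1); the generator g(z) = z + b
# learns a shift b, and the discriminator D(x) = sigmoid(w*x + c) learns
# to tell real samples from generated ones. Updates are hand-derived SGD
# on the standard GAN losses.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

b = 0.0          # generator parameter (starts far from the real mean of 4)
w, c = 0.1, 0.0  # discriminator parameters
lr = 0.05

for step in range(1000):
    real = rng.normal(4.0, 1.0, size=64)   # batch of real samples
    z = rng.normal(0.0, 1.0, size=64)      # batch of noise
    fake = z + b                           # generator output

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    w += lr * np.mean((1 - d_real) * real - d_fake * fake)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator step: move b so that D(fake) moves toward 1 (fool D).
    d_fake = sigmoid(w * (z + b) + c)
    b += lr * np.mean(1 - d_fake) * w

print(f"generator shift b = {b:.2f} (real data mean is 4.0)")
```

Over training, the generator’s output distribution drifts toward the real one until the discriminator can no longer reliably separate them, which is the equilibrium the article’s generator/discriminator description is pointing at.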


Using the latest state-of-the-art GAN, devised by Nvidia A.I. Labs, Wang trained his deep-learning algorithm to generate faces, based on a dataset of 70,000 high-resolution images. The results … well, you can see them for yourself by checking out the website. Hitting refresh generates an entirely new face.
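Conceptually, each refresh corresponds to drawing a fresh random latent vector and mapping it through the trained generator. The sketch below uses a placeholder function in place of the real network (StyleGAN’s actual generator is a deep network; the 512-dimensional latent size matches its published design, but everything else here is made up for illustration):

```python
import numpy as np

LATENT_DIM = 512  # StyleGAN samples latents of this size

def new_face(generator, rng):
    """One 'page refresh': sample a fresh latent, map it to a face."""
    z = rng.standard_normal(LATENT_DIM)
    return generator(z)

# Placeholder generator: any deterministic map from latent to "image".
# Here we just reshape a squashed latent into a pretend 16x32 image.
fake_generator = lambda z: np.tanh(z).reshape(16, 32)

rng = np.random.default_rng()
face_a = new_face(fake_generator, rng)
face_b = new_face(fake_generator, rng)
# Two refreshes draw two independent latents, so the "faces" differ.
print(np.allclose(face_a, face_b))  # almost surely False
```

The point is that the trained network is fixed; the endless variety comes entirely from resampling the latent input.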

“[The faces you see] are entirely original, by our definitions of ‘original,’” Wang told Digital Trends. “No human being is truly ‘original’ [since] we are all bounded by the data we are exposed to since birth. Even an artist, when asked to draw up anything, would only be able to draw things they have seen in their lifetimes. These neural networks are quickly approaching originality on the same level as we would. If given the same amount of data and enough training, it learns to break down the data into its most fundamental features, and then reconstitute them into new believable forms. It isn’t just a program that cuts and pastes memorized parts of the training set into a new image.”

Luke Dormehl
Former Digital Trends Contributor
I'm a UK-based tech writer covering Cool Tech at Digital Trends. I've also written for Fast Company, Wired, the Guardian…
BYD’s cheap EVs might remain out of Canada too

With Chinese-made electric vehicles facing stiff tariffs in both Europe and America, a pressing question has begun to arise for EV drivers: Can the push to make EVs more affordable continue if the world leader is kept out of the race?

China’s BYD, recognized as a global leader in terms of affordability, had to backtrack on plans to reach the U.S. market after the Biden administration in May imposed 100% tariffs on EVs made in China.

Tesla posts exaggerate self-driving capacity, safety regulators say

The National Highway Traffic Safety Administration (NHTSA) is concerned that Tesla’s use of social media and its website makes false promises about the automaker’s Full Self-Driving (FSD) software.
The warning dates back to May, but was made public in an email to Tesla released on November 8.
The NHTSA opened an investigation in October into 2.4 million Tesla vehicles equipped with the FSD software, following three reported collisions and a fatal crash. The investigation centers on FSD’s ability to perform in “relatively common” reduced visibility conditions, such as sun glare, fog, and airborne dust.
In these instances, it appears that “the driver may not be aware that he or she is responsible” to make appropriate operational selections, or “fully understand” the nuances of the system, NHTSA said.
Meanwhile, “Tesla’s X (Twitter) account has reposted or endorsed postings that exhibit disengaged driver behavior,” Gregory Magno, the NHTSA’s vehicle defects chief investigator, wrote to Tesla in an email.
The postings, which included reposted YouTube videos, may encourage viewers to see FSD-supervised as a “Robotaxi” instead of a partially automated, driver-assist system that requires “persistent attention and intermittent intervention by the driver,” Magno said.
In one of a number of Tesla posts on X, the social media platform owned by Tesla CEO Elon Musk, a driver was seen using FSD to reach a hospital while suffering a heart attack. In another post, a driver said he had used FSD for a 50-minute ride home. Meanwhile, third-party comments on the posts promoted the advantages of using FSD while under the influence of alcohol or when tired, NHTSA said.
Tesla’s official website also promotes conflicting messaging on the capabilities of the FSD software, the regulator said.
NHTSA has requested that Tesla revisit its communications to ensure its messaging remains consistent with FSD’s approved instructions, namely that the software provides only a driver assist/support system requiring drivers to remain vigilant and maintain constant readiness to intervene in driving.
Tesla last month unveiled the Cybercab, an autonomous-driving EV with no steering wheel or pedals. The vehicle has been promoted as a robotaxi, a self-driving vehicle operated as part of a ride-paying service, such as the one already offered by Alphabet-owned Waymo.
But Tesla’s self-driving technology has remained under the scrutiny of regulators. FSD relies on multiple onboard cameras to feed machine-learning models that, in turn, help the car make decisions based on what it sees.
Meanwhile, Waymo’s technology relies on premapped roads, sensors, cameras, radar, and lidar (light detection and ranging), which can be very costly but has met the approval of safety regulators.

Waymo, Nexar present AI-based study to protect ‘vulnerable’ road users

Robotaxi operator Waymo says its partnership with Nexar, a machine-learning tech firm dedicated to improving road safety, has yielded the largest dataset of its kind in the U.S., which will help inform the driving of its own automated vehicles.

As part of its latest research with Nexar, Waymo has reconstructed hundreds of crashes involving what it calls ‘vulnerable road users’ (VRUs), such as pedestrians walking through crosswalks, bicyclists in city streets, or high-speed motorcycle riders on highways.
