Judgmental A.I. mirror rates how trustworthy you are based on your looks

Holding a mirror to artificial intelligence

As the success of the iPhone X’s Face ID confirms, lots of us are thrilled to bits at the idea of a machine that can identify us based on our facial features. But how happy would you be if a computer used your facial features to start making judgments about your age, your gender, your race, your attractiveness, your trustworthiness, or even how kind you are?

Chances are that, somewhere down the line, you’d start to get a bit freaked out. Especially if the A.I. in question was using this information in a way that controlled the opportunities or options that are made available to you.

Exploring this tricky (and somewhat unsettling) side of artificial intelligence is a new project from researchers at the University of Melbourne in Australia. Taking the form of a smart biometric mirror, their device uses facial-recognition technology to analyze users’ faces, and then presents an assessment in the form of 14 different characteristics it has “learned” from what it’s seen.

“Initially, the system is quite secretive about what to expect,” Dr. Niels Wouters, one of the researchers who worked on the project, told Digital Trends. “Nothing more than, ‘hey, do you want to see what computers know about you?’ is what lures people in. But as they give consent to proceed and their photo is taken, it gradually shows how personal the feedback can get.”

As Wouters points out, problematic elements are present from the beginning, although not all users may immediately realize it. For example, the system only allows binary genders and can recognize just five ethnicities — meaning that an Asian student might be recognized as Hispanic, or an Indigenous Australian as African. Later assessments, such as a person’s level of responsibility or emotional stability, will likely prompt a response from everyone who uses the device.

The idea is to show the dangers of biased data sets, and the way that problematic or discriminatory behavior can become encoded in machine learning systems. This is something that Dr. Safiya Umoja Noble did a great job of discussing in her recent book Algorithms of Oppression.

“[At present, the discussion surrounding these kinds of issues in A.I.] is mostly led by ethicists, academics, and technologists,” Wouters continued. “But with an increasing number of A.I. deployments in society, people need to be made more aware of what A.I. is, what it can do, how it can go wrong, and whether it’s even the next logical step in evolution we want to embrace.”

With artificial intelligence increasingly used to make judgements about everything from whether we’ll make a good employee to our levels of aggression, devices such as the Biometric Mirror will only become more relevant.

Luke Dormehl
Former Digital Trends Contributor
I'm a UK-based tech writer covering Cool Tech at Digital Trends. I've also written for Fast Company, Wired, the Guardian…