
This new software from the University of Michigan is 75% accurate at catching a liar

Think you’re good at catching people in a lie? You might be, but you’re probably not as good as a computer. Especially not one running this lie-detecting software developed by researchers at the University of Michigan, who studied hours and hours of footage from high-stakes court cases in order to build a tool that can determine whether or not someone’s pants should be combusting.

Unlike a polygraph, the new software doesn't need to measure a subject's pulse or breathing rate, or touch them at all, to catch a liar in the act. Instead, it analyzes the speaker's words and gestures to gauge their truthfulness (or the lack thereof).


In initial experiments, the University of Michigan software was significantly more accurate in identifying deception than were humans — in fact, it was 75 percent accurate in finding the liars, whereas humans were right only 50 percent of the time.

So how did they do it?


After poring over 120 video clips from real trials, the researchers found that the people who were lying had a number of distinctive tells. They moved their hands more, scowled or grimaced, said “um” more frequently, and attempted to create a sense of distance between themselves and their alleged crime or civil misbehavior by using words like “he” or “she” rather than “I” or “we.” Even more interesting, liars tended to make a greater effort at sounding sure of themselves — not only would they feign confidence, but they would also look the questioner in the eye, perhaps attempting to establish believability.
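The article doesn't describe the team's actual model, but the tells above lend themselves to a simple feature-scoring sketch. This is a minimal, purely illustrative example; the feature names and weights are assumptions for demonstration, not the University of Michigan researchers' method, which would learn such patterns from the annotated courtroom footage.

```python
def deception_score(features):
    """Return a score in [0, 1]; higher means more lie-like behavior.

    `features` maps tell names to counts or flags, loosely based on the
    tells reported in the study: hand movement, grimacing, filler words,
    distancing pronouns, and (counterintuitively) steady eye contact.
    Weights are illustrative placeholders, not learned values.
    """
    weights = {
        "hand_movements": 0.2,
        "grimaces": 0.2,
        "um_count": 0.2,
        "distancing_pronouns": 0.2,   # "he"/"she" instead of "I"/"we"
        "sustained_eye_contact": 0.2,  # the study's counterintuitive tell
    }
    # Treat each feature as present/absent and sum the weighted tells.
    return sum(w * min(features.get(name, 0), 1)
               for name, w in weights.items())

# A speaker exhibiting several reported tells scores higher than one
# exhibiting none.
calm = {"hand_movements": 0, "um_count": 0}
evasive = {"hand_movements": 3, "grimaces": 2, "um_count": 5,
           "distancing_pronouns": 4, "sustained_eye_contact": 1}
```

A real classifier would be trained on labeled examples rather than hand-set weights, but the sketch shows the basic idea: convert observed behaviors into a feature vector and combine them into a single deception estimate.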

These findings run somewhat contrary to previous claims (and even common sense) suggesting that liars tend to look away or appear uncertain.

The researchers attribute their novel results to the real-world nature of their data: actual footage from genuine courtroom settings rather than a recreated, artificial environment. “In laboratory experiments, it’s difficult to create a setting that motivates people to truly lie. The stakes are not high enough,” said Rada Mihalcea, a professor of computer science and engineering at UM-Flint. “We can offer a reward if people can lie well — pay them to convince another person that something false is true. But in the real world there is true motivation to deceive.”

Of course, it should be noted that the researchers determined truthfulness by comparing testimony to the ultimate verdict in the trials (so any erroneous verdicts would correspondingly weaken the study results).

Still, the research offers a genuinely new way of assessing truthfulness.

To further improve the software, Mihai Burzo, an assistant professor of mechanical engineering at UM-Flint who co-led the study, noted that the team would be “integrating physiological parameters such as heart rate, respiration rate and body temperature fluctuations, all gathered with non-invasive thermal imaging.” The team hopes this will have significant implications for industries beyond law — mental health, security, and other fields could all benefit from a better ability to determine honesty.

“People are poor lie detectors,” Mihalcea said. But just maybe, we can create better ones.

Lulu Chang
Former Digital Trends Contributor