Insurmountable Shortcoming? AI Cannot Detect Lies

By Dawn Allcot

You might remember that the android Data, from Star Trek: The Next Generation, couldn’t recognize a joke. He also struggled to read fellow crewmembers’ emotions, although he never stopped trying. Not surprisingly, these are also shortcomings of today’s artificial intelligence algorithms.

There’s another thing today’s AI can’t do very well: detect lies. Researchers at the University of Southern California’s Institute for Creative Technologies (ICT) discovered that AI algorithms fail basic tests of discerning truth from lies.

The research team completed a pair of studies whose findings undermine both popular psychology and common AI expression-recognition techniques, each of which assumes that facial expressions reveal what people are thinking.

“Both people and so-called ‘emotion reading’ algorithms rely on folk wisdom that our emotions are written on our face,” said Jonathan Gratch, director for virtual human research at ICT and a professor of computer science at the USC Viterbi School of Engineering. “This is far from the truth.”

Poker players bluff. People smile when they’re upset. Detecting lies actually has little to do with reading facial expressions, yet that’s what most truth-detecting algorithms are based on.

Gratch and fellow researchers Su Lei and Rens Hoegen at ICT, along with Brian Parkinson and Danielle Shore at the University of Oxford, examined spontaneous facial expressions in social situations. In one study, they developed a game that 700 people played for money and then captured how people’s expressions affected their decisions and how much they earned. Next, they allowed subjects to review their behavior and describe how they were using expressions to gain an advantage and whether their expressions matched their feelings.

Using several novel approaches, the team examined the relationships between spontaneous facial expressions and key events during the game. They adopted a technique from psychophysiology called “event-related potentials” to cope with the extreme variability of facial expressions, and used computer vision techniques to analyze those expressions. To represent facial movements, they used a recently proposed method called facial factors, which captures many nuances of facial expressions without the difficulties that other modern analysis techniques present.
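To give a rough sense of the event-related idea (this is a minimal illustrative sketch, not the team’s actual code), the snippet below aligns a per-frame facial signal, such as a smile-related action unit intensity produced by a computer-vision tool, to the moments when a game event occurs and averages across occurrences. The function name, data layout, and numbers are hypothetical.

```python
import numpy as np

def event_aligned_average(signal, event_frames, pre=15, post=45):
    """Average a facial signal around repeated events.

    Analogous to event-related potentials in psychophysiology: rather than
    inspecting single, highly variable expressions, align many occurrences
    of the same event (e.g., an unfair offer) and average the signal so a
    consistent, event-locked response can emerge.

    signal       : 1-D array of per-frame expression intensity
                   (e.g., a smile-related action unit)
    event_frames : frame indices at which the event occurred
    pre, post    : frames kept before/after each event
    """
    windows = []
    for f in event_frames:
        if f - pre >= 0 and f + post < len(signal):
            window = signal[f - pre : f + post]
            # Baseline-correct against the pre-event mean so only the
            # change time-locked to the event remains.
            windows.append(window - window[:pre].mean())
    return np.mean(windows, axis=0) if windows else None

# Hypothetical usage: compare the averaged smile response to two event types.
rng = np.random.default_rng(0)
smile = rng.random(10_000)                       # stand-in per-frame signal
fair = rng.integers(100, 9_900, size=40)         # stand-in event frames
unfair = rng.integers(100, 9_900, size=40)
print(event_aligned_average(smile, fair))
print(event_aligned_average(smile, unfair))
```

Averaging over many aligned windows is what lets this kind of analysis separate systematic, event-driven expression changes from the noise of moment-to-moment facial movement.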

The scientists found that smiles were the only expressions consistently provoked, regardless of the reward or fairness of outcomes. Additionally, participants were fairly inaccurate in perceiving facial emotion and particularly poor at recognizing when expressions were being regulated. The findings show that people smile for many reasons, not just happiness, a context that matters when evaluating facial expressions.

“These discoveries emphasize the limits of technology use to predict feelings and intentions,” Gratch said.

The bottom line? Evaluating human emotion and intention, including detecting lies for purposes such as hiring job candidates or evaluating loan applicants, may not be a job for AI just yet—if ever.
