Will we ever have a fool-proof lie detector?

By Ed Yong | April 9, 2012 8:57 am

Here’s the fourth piece from my new BBC column

In The Truth Machine, a science-fiction novel published in 1996, scientists invent a device that can detect lies with perfect accuracy. It abolishes crime, changes the world, and generally saves humanity from self-destruction. Which is nice.

Could such a machine ever be a reality? Not if our current technology is anything to go by. The polygraph has been around for almost a century, with wired-up offenders and twitching needles becoming a staple of criminal investigations. But there is no solid evidence that the signs it looks for – faster heart rates, shallower breaths and moist skin – can accurately indicate whether someone is telling a lie. Underpinned by fluffy theory and backed by a weak and stagnant evidence base, this lie-detection device is unlikely to get any better.

Inside the brain

Abandoning the polygraph, some scientists have turned to brain scanners. Two technologies have dominated the field. The first uses electronic sensors on a person’s scalp to measure an electrical signal, or “brainwave”, called the P300, which appears when we recognise something. By looking for this signal, you could potentially tell if someone is hiding knowledge about something they are already familiar with, like a murder weapon. This is certainly useful, but it is a long way from an all-purpose lie-detection method, and two of the key figures in the field have spent many years arguing about how effective it is.

The second technique is functional magnetic resonance imaging (fMRI), affectionately known as blobology for the colourful pictures it produces. It shows the location of firing neurons in an indirect manner, by tracking the blood flow that supplies them with nutrients and oxygen. Several fMRI studies have shown that some parts of the brain are consistently more active when people tell untruths rather than truths, particularly areas at the very front that help us to suppress unwanted actions. Successful lying, it seems, is mainly about repressing the urge to be honest.

But there is no “centre for dishonesty” in the brain, as such. The areas illuminated in fMRI scans have many functions. They can even be more active when people tell the truth, especially if they are trying to decide whether to be honest or not.

So, how accurate are the scans? In simple lab experiments, they can detect lies around 78 to 85% of the time. “We’re not that close to a perfect lie detector,” says Giorgio Ganis from the University of Plymouth, who uses fMRI to study deception. “There’s also a 15-20% chance of an innocent person being wrongly determined to be a liar.”
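To put that error rate in concrete terms, here is a minimal sketch. The pool of 1,000 truthful suspects is a hypothetical assumption; the 15–20% false-positive rate is the figure quoted above:

```python
# Hypothetical illustration: how many innocent people would a scan
# wrongly flag as liars, given the false-positive rates quoted above?
def expected_false_positives(n_innocent, false_positive_rate):
    """Expected number of truthful people wrongly labelled as lying."""
    return n_innocent * false_positive_rate

# Assumed pool of 1,000 truthful suspects (illustrative, not from the article):
print(expected_false_positives(1000, 0.15))  # 150.0
print(expected_false_positives(1000, 0.20))  # 200.0
```

Even at the better end of the range, a scan-based test applied at scale would mislabel a substantial minority of honest people.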

Tell me lies

What is particularly troubling is that these limitations crop up in simplified and artificial conditions, like volunteers lying about a playing card they have been given. So we know very little about how fMRI would fare at detecting lies in more realistic settings – for example, not a single study has scanned people’s brains when they lie during conversations.

There are also different types of lie. If you have been pulled over for speeding, you would need to come up with a tall tale spontaneously. If you were on trial for a crime, you would have more time to rehearse your story. Ganis found that these brands of lie produce different patterns of brain activity: rehearsed ones are accompanied by a weaker buzz in so-called action-repression areas, and a stronger one in memory centres.

Lie detection also depends on a person’s memories, which are subjective and fallible. In 2008, Jesse Rissman from Stanford University showed that fMRI scans can reveal if volunteers thought they had seen a face before, but not if they had actually done so. If people are convinced of their lies, or if they have simply forgotten crucial information, the scans will not pick that up.

Finally, there are ways of fooling a brain scanner, just as there are countermeasures for other lie-detection techniques. Ganis says, “I’ve done a study showing that you can play mental tricks with fMRI. You mentally associate the important events of your life to items that are shown during the test.” By bringing those events to mind at the right time, volunteers could bamboozle the scans, and slash their accuracy from 100% to just 33%. “If you ever want to apply a technique like this in real cases, where people have motivation to beat the test, that’ll become an important issue,” says Ganis.

Doubts and concerns

FMRI scanners will undoubtedly improve, but the problems of countermeasures and the subjectivity of memory may be harder to solve. A report from the Royal Society on neuroscience and the law said that these problems were “seemingly insuperable”. Ganis agrees: “If you want a general lie detector, that’s definitely science fiction right now.”

That hasn’t stopped fMRI from being marketed as a tool for lie detection – two companies called Cephos and No Lie MRI currently offer such services, the latter under a tagline of “New truth verification technology”. Nor has it deterred brain scans from being presented in courtrooms, with varying success. In recent years, two US judges have dismissed fMRI-based evidence, but a murder suspect in India was sentenced to life imprisonment after brain scans supposedly revealed that she had knowledge about a crime that only the killer could have possessed.

Possible misuse of this developing technology has raised ethical concerns about the future of brain-based lie detection. Daniel Langleben from the University of Pennsylvania, who did much of the pioneering work in this field, recognises the limitations of the technique, but thinks that it could be improved to the point where it could be usefully applied in practical settings. But he worries that the current doubts will stifle the research necessary to improve the technology.

“It would be nice if for every new review and commentary, and I include myself here, there was new data,” he says. “Every time you have a negative critical review, it has a chilling effect on people who want to do this research. As we speak, I’m sitting on a data set that I haven’t submitted because I just don’t have the energy to deal with [the reactions].”

For now, we know there are broad differences between an honest brain and a dishonest one. To turn that knowledge into a practical test, “you need a lot of boring validation work,” says Langleben. “We need clinical trials, just as for every medical device or test.” Such trials would try to work out how accurate the scans are in more realistic settings, and how often they make errors. They would assess the effects of age, motivation, mental disorders, medication, countermeasures and more. They would likely cost tens of millions of dollars, and would need to include thousands of people – far more than the dozens who take part in typical fMRI lab studies.

For now, a foolproof lie detector is a far-away goal, but it will be even more distant if no one can afford to do the necessary research. That, at least, is no lie.


Comments (9)

  1. Eric T

    I recall reading an essay at school in which a businessman upset the Devil who cursed him by making it impossible for him to tell lies. Does anyone have the reference? [The book may have been called something like “More essays by modern masters”.]

  2. JMW

    There is a perfect lie detector. It is called a political campaign.

  3. How about those people who lie to themselves and end up believing their own lie?

  4. Jon Hoekstra

    Nice work. Use of “lie detectors” just reinforces a big problem, to my mind, in how people evaluate statements. People like to think in terms of just two categories: truth and lies. They forget about another category: honest mistakes.

    Faulty eyewitness testimony will only seem more convincing if a “lie detector” validates it.

    Too many people will make the following assumption:
    person believes what he or she says is true = person is telling “the truth.”

    Thanks for making note of this issue in your article.

  5. Han

    This ^^ is something that I don’t see any future ‘lie detectors’ overcoming.
    Especially after some time, people’s narratives take on a life of their own.
    There’s been some research on where people were during momentous occasions like ‘when the towers fell’, and those memories were found to be untrustworthy – pliable, if you will. Each retelling re-shapes the memory.
    If I recall correctly, there was a blog post dealing with this and the proteins involved.

    For those in the US, the recovered memories of abuse victims spring to mind. People traumatised by abuse that never happened.

  6. Pippa

    Will we ever have a perfect lie detector? No one nose!

  7. Ron Kaminsky

    Perhaps you meant:

    Will we ever have a perfect lie detector? Only Pinocchio nose!

  8. @old_chap

    These measure a reaction, but we cannot readily identify the cause. Maybe I stole some money as a youngster. When asked “Did you take the money?” I may well react – but who knows to which incident? The existing polygraph is useful for screening out people in a large-scale inquiry, but it can never be trusted as a true-or-false arbiter.

  9. Euan

    So if lie detection is mainly about looking for signs that someone is suppressing the urge to be honest, does that mean it would be unlikely to work on sociopaths who don’t have that urge in the first place? If it only works on nice people, that’s really going to limit its usefulness.

