Predicting Suicide: Return of a Scandal (Part 1)

By Neuroskeptic | November 6, 2017 2:19 pm

I recently decided to revisit a 2014 case that regular readers might remember.


Back in 2014, I posted about a terrible piece of statistical ‘spin’ that somehow made it into the peer-reviewed Journal of Psychiatric Research. The offending authors, led by Swedish psychiatrist Lars H. Thorell, had run a study to determine whether an electrodermal hyporeactivity test was able to predict suicide and suicidal behaviour in depressed inpatients.

Now, the standard way to evaluate the performance of a predictive test is with the two metrics sensitivity and specificity. Each of these can range from 0 to 1 (alternatively written as 0 to 100%), but on their own, neither of them means much: you have to consider them together. For a test which is completely uninformative (like flipping a proverbial coin), sensitivity + specificity will total 1 (100%). For a perfect test, they’ll total 2 (200%). Any introductory stats textbook will tell you this.
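The arithmetic is simple enough to sketch in a few lines of Python. The counts below are invented for illustration (they are not from any study): a coin-flip test splits cases and non-cases evenly, while a perfect test makes no errors.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Coin flip: half of each group gets labelled "positive"
sens, spec = sensitivity_specificity(tp=50, fn=50, tn=50, fp=50)
print(sens + spec)  # 1.0 -- a completely uninformative test

# Perfect test: no false negatives, no false positives
sens, spec = sensitivity_specificity(tp=100, fn=0, tn=100, fp=0)
print(sens + spec)  # 2.0
```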

In their 2013 paper, Thorell et al. reported the “sensitivity” and “specificity” of their test, and the numbers looked very good. Check out for example Table 3:

But what Thorell et al. called “specificity” – or “raw specificity”, a term unknown to statistics before that point – was actually a different metric, the negative predictive value (NPV). The true specificity of the electrodermal test in predicting suicide and suicide attempts was poor (around 33%).
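This switch matters because NPV and specificity can diverge wildly when the outcome is rare. Here is a sketch with invented counts (not the study’s actual data): when very few of the “negative” test results are wrong simply because the outcome itself is rare, NPV looks excellent even though the test misclassifies two-thirds of the non-cases.

```python
# Hypothetical confusion-matrix counts for a rare outcome (NOT the study's data):
tp, fn = 25, 5      # 30 actual cases, most of them flagged by the test
tn, fp = 250, 503   # but the test also flags most of the 753 non-cases

specificity = tn / (tn + fp)  # true negatives among all actual non-cases
npv = tn / (tn + fn)          # actual non-cases among all negative test results

print(f"specificity = {specificity:.2f}")  # 0.33 -- poor
print(f"NPV         = {npv:.2f}")          # 0.98 -- looks impressive
```

Reporting the second number under the first number’s name makes a weak test look strong, which is exactly the complaint the letters to the editor raised.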

Thorell et al.’s specificity switcheroo was so outrageous that it led to two letters to the editor (1, 2) in complaint. However, the Thorell et al. paper was never retracted.

Now, I decided to revisit “raw specificity” four years later. It turns out that Thorell and his colleagues have continued to cite the 2013 paper in subsequent publications, repeating the claim that the electrodermal hyporeactivity test has good “sensitivity and raw specificity”. The most recent such reference was earlier this year in the journal BMC Psychiatry.

The survival of ‘raw specificity’ is no surprise. Once the concept had entered (or contaminated) the literature in the 2013 paper, the damage was done. Subsequent peer reviewers can hardly be blamed for allowing an author to quote the conclusions of their previous, peer-reviewed paper. Perhaps more should have been done to push for the retraction of the 2013 paper, but that ship has sailed.

There could still be icebergs in the ship’s path, however. It turns out that Thorell, and a company he directs called Emotra AB, have run a new trial of electrodermal testing, called EUDOR. EUDOR is big: 1573 patients were recruited. And it could mean big money for Emotra AB, who in June raised 13.8 million Swedish kronor ($1.63m) from investors.

Stay tuned for Part 2 where we’ll see whether this was money well spent.

  • Erik Bosma

    I didn’t notice Impulsive Behaviour as an indicator. I would think that impulsiveness is a strong factor in suicide. Ruminating and planning a suicide for a long period of time might make one reconsider.

    • wosniac

I would say killing yourself on a pure impulse is rare. Suicidal ideation, obsession, and frequent conversation concerning actionable steps, plans, reasons or rationales, and desires are the actual warning signs, and they often precipitate suicide.

Just a small thing to consider if you’d like: I pegged psychiatry as a very easy target in the greater medical sciences when I was 7 years old. Anything that can potentially help, no matter how much or little, is worth looking at and not dismissing. I’m a statistician and I understand the semantic distaste the writer has, but unless he has a better idea in the pipeline – one that is financially and logistically endorsed and approved for study – it seems unrealistic to so fervently suggest scrapping everything because it’s not perfect.

      But like I said. Psychiatric study, practice and pharmacology are all easy, easy targets.

  • Bernard Carroll

The reply by Thorell et al to the two critiques is a classic instance of begging the question, which trails off into existential speculation. When the known prevalence of the index condition (completed suicide) is low (36 of 783 cases, or 4.6%), the positive predictive value is likely to be low as well, as in this case (30 of 533, or 5.6%). PPV is a more salient measure than NPV for clinicians. Thank you for calling attention to this claim. Looking forward to reading your Part 2 as well!


  • Lars-Håkan Thorell

    To whome it is!

    It is a Scandal that the education in statistics or biochemistry or other topics regarding the estimation of the accuracy of a diagnostic test when the interval between the test and the outcome allows interfering influence from unwanted or wanted factors, such as, for example, suicide prevention measures, which has the goal of 100 % false positives – “the zero vision” -all detected risk persons must survive.

I will not participate more than with this statement in this forum, since we are planning publish an article in the subject.

    revise your education!

    Lars-Hakan Thorell
    Associate Professor in Experimental Psychiatry
    Director of Research of Emotra AB(publ), Sweden

    • Sys Best

      I see other ppl smarter than me completely ignored you. Good for them, not so much for you.
Who writes your articles in English? Or is it your logic at fault?





About Neuroskeptic

Neuroskeptic is a British neuroscientist who takes a skeptical look at his own field, and beyond. His blog offers a look at the latest developments in neuroscience, psychiatry and psychology through a critical lens.



@Neuro_Skeptic on Twitter

