No Need To Worry About False Positives in fMRI?

By Neuroskeptic | December 31, 2016 9:52 am

Earlier this year, neuroscience was shaken by the publication in PNAS of “Cluster failure: Why fMRI inferences for spatial extent have inflated false-positive rates”. In this paper, Anders Eklund, Thomas E. Nichols and Hans Knutsson reported that commonly used software for analysing fMRI data produces many false positives.


But now, Boston College neuroscientist Scott D. Slotnick has criticized Eklund et al.’s alarming conclusions in a new piece in Cognitive Neuroscience.

In my view, while Slotnick makes some valid points, he falls short of debunking Eklund et al.’s worrying findings.

Slotnick argues that Eklund et al.’s main approach was flawed, because they used resting state fMRI data to estimate false-positive rates. This, Slotnick says, is a mistake because

Resting-state periods produce fMRI activity in the default network… [so] it is not surprising that the “false clusters” reported by Eklund et al. occurred in default network regions. These findings indicate that Eklund et al.’s assumption that resting state fMRI data reflects null data is incorrect. As such, the large majority of “false clusters” reported by Eklund et al. likely reflected true activations (i.e. true clusters) that inflated familywise error.

Hmm. While the default mode network (DMN) surely was active in the resting state participants, I don’t see how this activation could have produced any false positives.

As in their previous work, Eklund et al. compared (arbitrary) “task” segments of the resting state time-series to a “baseline” consisting of the rest of the same time-series. DMN activity should not have differed systematically between the two conditions. Therefore the “task-related” activation clusters that Eklund et al. found can’t have been true clusters, because there was no task.
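To make that logic concrete, here is a minimal sketch in Python (my own toy illustration, not Eklund et al.'s code; their real analyses fit GLMs voxel-wise to actual resting-state scans and then ran group-level tests with cluster correction):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Stand-in for one voxel's resting-state time-series: 300 volumes at TR = 2 s.
# (In the real analyses this is measured resting data, not simulated noise.)
n_vols, tr = 300, 2.0
voxel_ts = rng.standard_normal(n_vols)

# Arbitrary "task" design: 10 s on, 10 s off (Eklund et al.'s B1 paradigm).
t = np.arange(n_vols) * tr
on = (t % 20) < 10

# Compare the arbitrary "task" volumes against the remaining "baseline"
# volumes. Since no task exists, any significant result is a false positive.
t_stat, p_val = stats.ttest_ind(voxel_ts[on], voxel_ts[~on])
print(f"t = {t_stat:.2f}, p = {p_val:.3f}")
```

By construction there is nothing task-like in the data, so over many such analyses the fraction of significant results directly measures the false-positive rate.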

It is true that if the DMN happened to be more active during the “task” periods for a given individual, this would manifest as DMN activations, and these would correspond to ‘real’ brain activity (not just noise). However, they wouldn’t truly be task-related, so they’d still be false positives.

Also, in the Appendix to their PNAS paper, Eklund et al. present additional analyses using task-based, not resting state, fMRI data. They found high false-positive rates here as well, suggesting that the problem is not limited to resting-state data. Slotnick doesn’t discuss these results.

Slotnick goes on to conduct a new analysis, using his own task fMRI data, in which he reports finding no false positives. He concludes that in task fMRI, false-positive rates are not a problem. However, I wasn’t reassured by this for a number of reasons.

First, Slotnick used a cluster-defining threshold (CDT) of p<0.001. Yet Eklund et al. report that p<0.001 generally leads to only slightly elevated false-positive rates; the really serious problems arise when a CDT of p<0.01 is used, as Eklund et al.'s chart of false-positive rates shows:

[Figure: Eklund et al.'s Figure 1, charting familywise false-positive rates by software package and cluster-defining threshold]

Second, for his analysis Slotnick used his own custom script (here). Eklund et al., on the other hand, tested the three most popular fMRI software packages (FSL, SPM and AFNI). So even if Slotnick's script is free of false positives, this doesn't prove that the widely used packages are.
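For readers who haven't met this kind of correction: Slotnick's script, like AFNI's 3dClustSim (see Thomas Nichols's comment below), derives a minimum cluster size by Monte Carlo simulation: generate many smooth noise images, threshold each at the CDT, and take the 95th percentile of the largest surviving cluster as the extent cutoff. A minimal sketch of the idea (my own toy Python, in 2D; real tools work in 3D, with smoothness estimated from the data):

```python
import numpy as np
from scipy import ndimage, stats

rng = np.random.default_rng(1)

def max_cluster_size(shape, fwhm_vox, cdt_p, rng):
    """Largest cluster surviving the CDT in one smooth Gaussian noise image."""
    noise = rng.standard_normal(shape)
    sigma = fwhm_vox / 2.355                 # convert FWHM to Gaussian sigma
    smooth = ndimage.gaussian_filter(noise, sigma)
    smooth /= smooth.std()                   # re-standardise to unit variance
    z_thresh = stats.norm.isf(cdt_p)         # one-sided CDT as a z-cutoff
    labels, n = ndimage.label(smooth > z_thresh)
    if n == 0:
        return 0
    return np.bincount(labels.ravel())[1:].max()

# Null distribution of the maximum cluster size, for two CDTs.
for cdt in (0.001, 0.01):
    sizes = [max_cluster_size((64, 64), fwhm_vox=3.0, cdt_p=cdt, rng=rng)
             for _ in range(1000)]
    print(f"CDT p<{cdt}: cluster-extent threshold ~ {np.percentile(sizes, 95):.0f} voxels")
```

Eklund et al.'s central finding was that the assumptions baked into such simulations don't hold for real fMRI data, largely because the spatial autocorrelation of real data has heavier tails than the Gaussian form assumed, which is why the resulting thresholds can be too lenient.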

Finally, Slotnick only performed one analysis, while Eklund et al. performed many thousands of them, repeatedly drawing random groups of participants from a large dataset. Slotnick may have simply got lucky in that his analysis happened not to produce false positives. After all, even under the worst conditions, Eklund et al. found that false-positive rates never reached 100%.
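The underlying point is that a familywise error rate is a probability, and a single analysis estimates a probability very badly. A toy illustration (the 70% figure is the worst-case ballpark Eklund et al. report; the exact numbers here are mine):

```python
import numpy as np

rng = np.random.default_rng(2)

# Suppose the true chance that one group analysis yields at least one false
# cluster is 70% (roughly Eklund et al.'s worst case; illustrative only).
true_fwe = 0.7

# A single analysis, as in Slotnick's check: ~30% of the time it comes up
# clean even though the method is badly miscalibrated.
one_run = rng.random() < true_fwe
print("single analysis found a false positive:", one_run)

# Eklund et al.'s approach: thousands of analyses on random subgroups,
# giving a tight estimate of the underlying rate.
n_analyses = 3000
estimate = (rng.random(n_analyses) < true_fwe).mean()
print(f"empirical false-positive rate over {n_analyses} analyses: {estimate:.2f}")
```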

So overall, I think the problem of ‘cluster failure’ is still a serious one.

Slotnick SD (2016). Resting-state fMRI data reflects default network activity rather than null data: A defense of commonly employed methods to correct for multiple comparisons. Cognitive Neuroscience. PMID: 28002981

  • http://www.mazepath.com/uncleal/qz4.htm Uncle Al

    “false-positive rates never reached 100%” A presidential election was thus universally analyzed for two years running before 08 November 2016. Were there any preening academic intellectual vacuity-excluded empirical sequelae of merit?

    https://www.youtube.com/watch?v=zT0Rjc6jKCg
    https://www.youtube.com/watch?v=O7Bkh9Wo2vE

  • Nils Kroemer

    I think one aspect of the Eklund et al. analysis that might have inflated false-positive rates in the resting state data was pointed out by Flandin & Friston (2016) already:

    “The effect of one versus two-sample t-tests is slightly more difficult to interpret. This is because the authors used the same regressor for all subjects. Arguably, this was a mistake because any systematic fluctuation in resting state timeseries that correlates with the regressor will lead to significant one-sample t-tests against the null hypothesis of zero.”

    Maybe Slotnick got rid of this problem?

    • http://blogs.discovermagazine.com/neuroskeptic/ Neuroskeptic

      Hmm, thanks for the comment! But I don’t think Flandin & Friston’s point is entirely accurate. Eklund et al. didn’t just use one regressor; rather, they tried four of them:

      “Block activity paradigms: B1 (10-s on off), B2 (30-s on off)

      Event activity paradigms: E1 (2-s activation, 6-s rest), E2 (1- to 4-s activation, 3- to 6-s rest, randomized)”

      And in previous papers they’ve used still more.

      The regressor used (colored bars) made little difference to the false-positive rate. It seems unlikely that the resting state timeseries would just by chance contain activity that was correlated with all of the different regressors; a quick sketch below illustrates the point.

      https://uploads.disquscdn.com/images/22a532cef0f7b5e11c1162579ec7efaad716534c5aee736b3ee1096f9a179787.jpg
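      Here's roughly what chance correlations with those four designs look like (a toy sketch of my own; the TR, run length and crude un-convolved regressors are all assumptions, not Eklund et al.'s actual settings):

      ```python
      import numpy as np
      from scipy import ndimage

      rng = np.random.default_rng(3)

      # Sampling grid: TR = 2 s, 200 volumes (my assumptions, not from the paper).
      tr, n_vols = 2.0, 200
      t = np.arange(n_vols) * tr

      # Crude versions of the four designs (haemodynamic convolution omitted).
      designs = {
          "B1": ((t % 20) < 10).astype(float),            # 10-s on, 10-s off
          "B2": ((t % 60) < 30).astype(float),            # 30-s on, 30-s off
          "E1": ((t % 8) < 2).astype(float),              # 2-s events, 6-s rest
          "E2": (rng.random(n_vols) < 0.3).astype(float), # stand-in for the
                                                          # randomised event design
      }

      # A smooth stand-in for one subject's resting-state time-series.
      resting = ndimage.gaussian_filter1d(rng.standard_normal(n_vols), sigma=3)

      # Chance correlations are small and inconsistent in sign across designs,
      # so systematic 'activation' under all four would be quite a coincidence.
      for name, reg in designs.items():
          r = np.corrcoef(resting, reg)[0, 1]
          print(f"corr(resting, {name}) = {r:+.2f}")
      ```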

  • http://www.conxz.net/blog/aboutme/ Xiangzhen Kong

    Agreed. But there are publications showing that the resting state is actually dynamic, i.e. it contains multiple states. So do you think some or many of these “false positives” might actually reflect meaningful differences?

  • Thomas Nichols

    We (the Cluster Failure authors) have now replied to Slotnick’s paper: https://arxiv.org/abs/1701.02942

    In short: (1) We feel that there is no better source of null data for task fMRI than resting state fMRI; we specifically defended this in the paper and re-state the case for this. (2) The evaluation in Slotnick was based on a single dataset; evaluation of whether a *probability* of false positives is controlled requires many, many instances. It’s like flipping a coin once, finding ‘heads’, and then asserting this tells you it’s a fair coin. (3) The critique suggests that Slotnick has a fundamentally different approach to cluster inference that is better, when in fact it is simply a standard Monte Carlo approach, pretty much identical to the original AFNI 3dClustSim program (i.e. possibly with the edge effect bug).

    • http://blogs.discovermagazine.com/neuroskeptic/ Neuroskeptic

      Thanks for the comment!
