Earlier this year, neuroscience was shaken by the publication in PNAS of "Cluster failure: Why fMRI inferences for spatial extent have inflated false-positive rates". In this paper, Anders Eklund, Thomas E. Nichols and Hans Knutsson reported that commonly used software for analysing fMRI data can produce many false positives.
But now, Boston College neuroscientist Scott D. Slotnick has criticized Eklund et al.’s alarming conclusions in a new piece in Cognitive Neuroscience.
In my view, while Slotnick makes some valid points, he falls short of debunking Eklund et al.’s worrying findings.
Slotnick argues that Eklund et al.’s main approach was flawed, because they used resting state fMRI data to estimate false-positive rates. This, Slotnick says, is a mistake because
Resting-state periods produce fMRI activity in the default network… [so] it is not surprising that the “false clusters” reported by Eklund et al. occurred in default network regions. These findings indicate that Eklund et al.’s assumption that resting state fMRI data reflects null data is incorrect. As such, the large majority of “false clusters” reported by Eklund et al. likely reflected true activations (i.e. true clusters) that inflated familywise error.
Hmm. While the default mode network (DMN) was surely active in these resting-state participants, I don't see how this activity could have produced any false positives.
As in their previous work, Eklund et al. compared (arbitrary) “task” segments of the resting state time-series to a “baseline” consisting of the rest of the same time-series. DMN activity should not have differed systematically between the two conditions. Therefore the “task-related” activation clusters that Eklund et al. found can’t have been true clusters, because there was no task.
It is true that if the DMN happened to be more active during the “task” periods for a given individual, this would manifest as DMN activations, and these would correspond to ‘real’ brain activity (not just noise). However, they wouldn’t truly be task-related, so they’d still be false positives.
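To see why this kind of null design guarantees that any "activation" is spurious, here's a toy simulation (my own sketch, not Eklund et al.'s actual pipeline, and with made-up parameters): fit an arbitrary on/off block design to pure noise and count how often at least one voxel comes out "significant". By construction there is no task, so every hit is a false positive.

```python
# Illustrative sketch of a null "task vs. baseline" analysis on noise data.
# Any significant voxel is a false positive by construction, since the
# on/off labelling of scans is arbitrary. Parameters are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_scans, n_voxels, n_sims = 200, 1000, 200

# Arbitrary 10-scans-on / 10-scans-off "task" regressor, as in a blocked design
design = np.tile(np.r_[np.ones(10), np.zeros(10)], n_scans // 20)

false_positive_runs = 0
for _ in range(n_sims):
    data = rng.standard_normal((n_scans, n_voxels))  # null data: white noise
    t, p = stats.ttest_ind(data[design == 1], data[design == 0])
    # Uncorrected voxelwise test at p < 0.001: any hit is a false positive
    if np.any(p < 0.001):
        false_positive_runs += 1

print(f"Runs with at least one false positive: {false_positive_runs}/{n_sims}")
```

With independent white noise, the familywise rate here lands near the theoretical value (about 1 − 0.999¹⁰⁰⁰ ≈ 63% of runs, absent any multiple-comparisons correction). Eklund et al.'s point was subtler: real fMRI noise is spatially autocorrelated in ways the standard cluster-extent corrections mis-model, so even the *corrected* rates come out inflated.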
Also, in the Appendix to their PNAS paper, Eklund et al. present additional analyses using task-based, not resting state, fMRI data. They found high false-positive rates here as well, suggesting that the problem is not limited to resting-state data. Slotnick doesn’t discuss these results.
Slotnick goes on to conduct a new analysis, using his own task fMRI data, in which he reports finding no false positives. He concludes that in task fMRI, false-positive rates are not a problem. However, I wasn’t reassured by this for a number of reasons.
Firstly, Slotnick used a cluster-defining threshold (CDT) of p<0.001. Yet Eklund et al. report that a CDT of p<0.001 generally leads to only slightly elevated false-positive rates; the really serious problems arise when a CDT of p<0.01 is used, as Eklund et al.'s chart of false-positive rates shows.
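A toy illustration of why the CDT matters so much (my own sketch, with arbitrary smoothing and grid parameters, not Eklund et al.'s analysis): in a spatially smooth pure-noise field, a lenient CDT of p<0.01 produces much larger noise-only clusters than a strict CDT of p<0.001, which is exactly the regime where a mis-calibrated cluster-extent correction has the most room to fail.

```python
# Compare the largest noise-only cluster at a lenient vs. strict
# cluster-defining threshold, in a smoothed 2D Gaussian noise field.
# All parameters (grid size, smoothing sigma) are made up for illustration.
import numpy as np
from scipy import ndimage, stats

rng = np.random.default_rng(42)
noise = rng.standard_normal((200, 200))
smooth = ndimage.gaussian_filter(noise, sigma=3)   # spatial smoothing
z = (smooth - smooth.mean()) / smooth.std()        # z-score the field

def max_cluster_size(zmap, p):
    """Largest connected suprathreshold cluster at one-sided threshold p."""
    labels, n = ndimage.label(zmap > stats.norm.isf(p))
    return 0 if n == 0 else np.bincount(labels.ravel())[1:].max()

print("max null cluster at CDT p<.01: ", max_cluster_size(z, 0.01))
print("max null cluster at CDT p<.001:", max_cluster_size(z, 0.001))
```

Because the p<0.001 suprathreshold region is always a subset of the p<0.01 one, the lenient CDT can never yield a smaller maximum cluster; in smooth noise it is typically much larger.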
Second, for his analysis Slotnick used his own custom script (here). Eklund et al. on the other hand tested the three most popular fMRI software packages (FSL, SPM and AFNI). So even if Slotnick’s script is false-positive free, this doesn’t prove that the widely used packages are.
Finally, Slotnick only performed one analysis, while Eklund et al. performed many thousands of them, repeatedly drawing random groups of participants from a large dataset. Slotnick may have simply got lucky in that his analysis happened not to produce false positives. After all, even under the worst conditions, Eklund et al. found that false-positive rates never reached 100%.
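A quick back-of-envelope calculation makes the "lucky analysis" point concrete. The 70% figure below is a hypothetical stand-in for the worst-case familywise error rates Eklund et al. report, not an exact number from their paper:

```python
# If the true familywise error rate were 70%, a single clean analysis
# is unremarkable, but many clean analyses in a row would be telling.
# The 0.70 rate is a hypothetical illustration, not a figure from the paper.
fwe_rate = 0.70
p_one_clean = 1 - fwe_rate        # one analysis shows no false positive
p_ten_clean = p_one_clean ** 10   # ten independent analyses all come up clean
print(f"P(one clean analysis) = {p_one_clean:.2f}")
print(f"P(ten clean analyses) = {p_ten_clean:.6f}")
```

In other words, a single clean result would happen about 30% of the time even under a badly inflated error rate, which is why Eklund et al.'s thousands of resampled analyses carry far more evidential weight than any one analysis can.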
So overall, I think the problem of ‘cluster failure’ is still a serious one.
Slotnick SD (2016). Resting-state fMRI data reflects default network activity rather than null data: A defense of commonly employed methods to correct for multiple comparisons. Cognitive Neuroscience. PMID: 28002981