Two new papers urge scientists to make research more reproducible.
First off, Russ Poldrack and colleagues writing in Nature Reviews Neuroscience discuss how to achieve transparent and reproducible neuroimaging research. Neuroimaging techniques, such as fMRI, are enormously powerful tools for neuroscientists but, Poldrack et al. say, they are at risk of “a ‘perfect storm’ of irreproducible results”, driven by the “high dimensionality of fMRI data, the relatively low power of most fMRI studies and the great amount of flexibility in data analysis.”
Regarding sample sizes and statistical power, for instance, Poldrack et al. warn that despite a trend for increasing sample sizes in fMRI studies in recent years, “in 2015 the median study was only sufficiently powered to detect relatively large effects”, as their graph shows.
The average modern fMRI study has 80% statistical power to detect an effect with a Cohen’s d of about 0.75. Poldrack et al. show that a typical task-evoked fMRI effect size is smaller than this, suggesting that “the average fMRI study remains poorly powered for capturing realistic effects.” This in turn suggests that many of the positive findings reported may not be genuine.
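To make the power arithmetic concrete, here is a minimal sketch of the kind of calculation involved, using the standard normal approximation for a two-sided one-sample test. This is my own illustration, not the method Poldrack et al. used (proper fMRI power analyses account for the noncentral t distribution and the spatial structure of the data, via dedicated tools such as NeuroPower); the d ≈ 0.75 figure is the one quoted above.

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_one_sample(n, d, z_crit=1.959964):
    """Approximate power of a two-sided one-sample test (alpha = 0.05)
    for effect size d with n subjects, using the normal approximation."""
    return norm_cdf(d * math.sqrt(n) - z_crit)

def n_for_power(d, target=0.80):
    """Smallest n giving at least `target` power for effect size d."""
    n = 2
    while power_one_sample(n, d) < target:
        n += 1
    return n

# A large effect (d = 0.75) needs only a modest sample for 80% power,
# but a more realistic smaller effect (d = 0.3) needs far more subjects.
print(n_for_power(0.75))  # roughly 14 subjects
print(n_for_power(0.30))  # closer to 90 subjects
```

The point of the illustration: power falls off quickly as the true effect shrinks, so a study sized to detect d = 0.75 is badly underpowered if realistic effects are nearer d = 0.3.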
Potentially an even bigger problem is the undisclosed flexibility in fMRI data analysis, which creates the potential for p-hacking, as first highlighted by Joshua Carp in 2012 (I also raised the issue myself.) Poldrack et al. say that the solution to this problem is to adopt the “preregistration of methods and analysis plans” so that readers can know which analyses were only conceived of once the data had been collected.
Neuroskeptic readers will know that I’ve long been an advocate of preregistration in neuroscience and elsewhere.
Poldrack et al. discuss several other issues facing neuroscience, many of which I’ve blogged about down the years, such as the flaws in certain fMRI analysis tools and researchers failing to use multiple-comparison correction. They make many sensible recommendations as to how to fix these problems.
They conclude that
It is likely that the reproducibility of neuroimaging research is no better than that of many other fields in which it has been shown to be surprisingly low. Given the substantial amount of research funds that are currently invested in neuroimaging research, we believe that it is essential that the field address the issues raised here
Meanwhile, in a paper published yesterday in Nature Human Behaviour, Marcus R. Munafò and colleagues present A manifesto for reproducible science. Munafò was also an author on the Poldrack et al. paper.
Munafò et al. address many of the same issues as Poldrack et al., such as flexible methods, p-hacking, and publication bias in favor of positive results, although they take a broader perspective, considering the problems with science as a whole rather than specifically neuroimaging research. Like Poldrack et al. they recommend preregistration as a solution to many of these problems. Munafò et al. also discuss open access, data sharing and adherence to reporting guidelines such as the TOP guidelines.
Munafò et al. conclude that
Reproducible research practices are at the heart of sound research and integral to the scientific method. How best to achieve rigorous and efficient knowledge accumulation is a scientific question; the most effective solutions will be identified by a combination of brilliant hypothesizing and blind luck, by iterative examination of the effectiveness of each change, and by a winnowing of many possibilities to the broadly enacted few.
Both of these papers offer a comprehensive guide to the problems plaguing the modern scientific process. Ten years ago, hardly anyone was talking about these issues, and five years ago, they were only beginning to be discussed. It’s fantastic that so much attention is now being given to these problems and to how to solve them.
Yet it remains to be seen whether it will be possible to actually implement the necessary reforms on a large scale. There are many very exciting practical initiatives such as the Open Science Framework (OSF) and Registered Reports, which show that a better scientific process is possible, but to date only a minority of scientists have participated in these programs.
Also, it’s notable that both of these pro-reproducibility papers appeared in journals owned by the prestigious Nature Publishing Group (NPG). Does this signal that NPG is going to lend its weight to the cause?
I hope it does, but some scientists are skeptical of this kind of thing. In response to Nature’s recent promises, e.g. to promote replication studies, one PubPeer commenter warned that the journal “cannot and will not keep those promises, because of editorial and corporate conflicts of interest.” Another PubPeer-ite asked whether reformist editorials from journals such as Nature are “a disingenuous marketing ploy to absolve themselves of any responsibility” for causing the problems in the first place.
I am not so cynical, but it’s true that implementing these reforms will be easier said than done.
Poldrack RA, Baker CI, Durnez J, Gorgolewski KJ, Matthews PM, Munafò MR, Nichols TE, Poline JB, Vul E, & Yarkoni T (2017). Scanning the horizon: towards transparent and reproducible neuroimaging research. Nature Reviews Neuroscience. PMID: 28053326
Munafò MR, Nosek BA, Bishop DVM, Button KS, Chambers CD, Percie du Sert N, Simonsohn U, Wagenmakers EJ, Ware JJ, & Ioannidis JPA (2017). A manifesto for reproducible science. Nature Human Behaviour