Neuroskeptic readers will know that there’s been a lot of concern lately over unreproducible results and false positives in psychology and neuroscience.
In response to these worries, there have been growing calls for reform of the way psychology is researched and published. We’ve seen several initiatives promoting replication and, to my mind even more importantly, registration of studies to prevent bad scientific practice in future.
But the problem is not limited to psychology. Concern is growing too in cancer biology, as revealed in a new study from the MD Anderson Cancer Center in Texas: A Survey on Data Reproducibility in Cancer Research.
The researchers polled all of the nearly 3000 staff at the center. Unfortunately, just 15% responded, but of those who did, 55% reported having been unable to reproduce a published result, and only a third of those went on to publish their failed replication.
This joins other reports into the poor reproducibility of preclinical cancer research. A 2011 article surveyed pharmaceutical industry scientists who’d attempted to reproduce published findings (with a view to making drugs out of them):
In almost two-thirds of the projects, there were inconsistencies between published data and in-house data that either considerably prolonged the duration of the target validation process or, in most cases, resulted in termination of the projects…
I’d be amazed if that problem is limited to cancer drug discovery, either.
The problem is formal and systemic. In a nutshell, false-positive results will plague any field where results are classed as either ‘positive’ or ‘negative’ and scientists are rewarded more for publishing positive ones, unless researchers are much more careful than we are today.
Poor reproducibility has little to do with the subject of the science. Whether you study minds, mice, or molecules, if you measure them and analyze them statistically, you are all in the same boat. It’s the structure of the incentives.
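To see why this is structural rather than field-specific, here’s a toy simulation of publication bias. The numbers (base rate of true hypotheses, power, significance threshold) are illustrative assumptions of mine, not figures from the Mobley et al. paper, but the qualitative point holds for any plausible values:

```python
import random

random.seed(0)

# Illustrative assumptions (not from the paper): 10% of tested hypotheses
# are actually true, tests have 80% power and a 5% false-positive rate,
# and only "positive" results get published.
N = 100_000          # hypotheses tested across a field
BASE_RATE = 0.10     # P(hypothesis is actually true)
POWER = 0.80         # P(positive result | hypothesis true)
ALPHA = 0.05         # P(positive result | hypothesis false)

published_true = 0
published_false = 0
for _ in range(N):
    is_true = random.random() < BASE_RATE
    positive = random.random() < (POWER if is_true else ALPHA)
    if positive:  # only positive results reach the journals
        if is_true:
            published_true += 1
        else:
            published_false += 1

total = published_true + published_false
print(f"Fraction of published findings that are false: {published_false / total:.0%}")
```

With these assumptions, roughly a third of the published ‘positive’ literature is false, before any fraud or sloppy practice enters the picture. That’s just the arithmetic of filtering for positive results, and it applies equally to psychology labs and cancer centers.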
So we shouldn’t single anyone out for blame.
However, this doesn’t mean that those at whom fingers are pointed should get defensive. That’s a natural, but regrettable, response.
Yes, it’s rather unfair that at the moment, people seem to be demanding more of, say, social psychologists than of cognitive psychologists. We ought to demand the highest standards of both of them.
But raising the standard has to start somewhere, so why not with social psychology? “We’re no worse than anyone else” is a comforting mantra, but a poor defence. And any field or speciality that puts its house in order will soon be in a position to say “We’re better than you”.
Mobley, A., Linder, S., Braeuer, R., Ellis, L., & Zwelling, L. (2013). A Survey on Data Reproducibility in Cancer Research Provides Insights into Our Limited Ability to Translate Findings from the Laboratory to the Clinic. PLoS ONE, 8(5). DOI: 10.1371/journal.pone.0063221