Recently, cognitive science postdoc Sebastiaan Mathôt wrote two pieces that raise questions about the idea of reforming scientific communication to involve preregistration of experiments: The Pros and Cons of Preregistration in Fundamental Research and The Black Swan.
Registration has long been a favorite topic of mine; I've been advocating it since my very first post. Now it's starting to become a reality, which I think is great. Yet many researchers are wary of the idea, and Mathôt makes some important points.
My answer in a nutshell is that preregistration does seem scary in the context of science's current culture – but that's a problem with the current culture.
Mathôt’s core argument, as I understand it, is this (from the first article, emphasis mine):
My colleagues and I recently conducted an experiment in which we recorded eye movements of participants while they viewed photos of natural scenes. On half of the trials we manipulated the scene based on where participants were looking. The other half of the trials served as a control condition…
[Our manipulation] turned out not to have the predicted effect. According to the rules of preregistration, this means that our study was worthless: We made a prediction, it didn’t come out, and any attempt to use this dataset for another purpose borders on scientific fraud.
However, we stumbled across an unexpected, but interesting and statistically highly reliable phenomenon in the control trials. So what now? Are we not allowed to look at this effect, because we did not predict it in advance? Should we run a new study, in which we predict what we have already found, and use only the data from the new experiment?
Your intuition, no doubt, screams ‘no’, or at least mine does. However, the logic behind pre-registration says ‘yes’. The essential conflict here is that pre-registration discourages exploratory research, and assumes that a finding is not a real finding unless it was predicted – a questionable assumption at best.
In this example, the authors have made two discoveries: 1) the originally predicted phenomenon didn’t happen (‘negative’); and 2) a different, unpredicted phenomenon was observed (‘positive’).
Both of these are interesting findings, and both ought to be published. Number 1) is interesting, because the authors surely had good reasons to predict that the effect would happen. So the fact that it didn’t is a discovery; it tells us about the world, if only by narrowing down the possibilities. It contributes to science. Under the current publishing system, however, this interesting finding might never be made public – and even worse, might be regarded as deserving to remain unpublished.
Then there’s 2), the incidental positive observation. This should also be made public – and there’d be no barriers to doing so under a system of preregistration, albeit ‘only’ if it’s clearly marked as an incidental observation. Being incidental is not a bad thing – but you do need to be honest about it.
If that sounds bad to scientists today, it's because we've been disguising our incidental findings for so long. We write papers to make 'positive' results seem predicted even when they weren't – just as we make 'negative' findings disappear.
By making such manipulation impossible, preregistration would liberate both the unexpected finding, and the negative finding. There would be a lot more of both kinds of result out there, if nothing else; I suspect their status would rise accordingly.
I'll return to a sentence of Mathôt's that I think is a very clear description of a common worry: “According to the rules of preregistration, [not finding the predicted effect] means that our study was worthless.”
The worry here is that a good experiment would be ‘wasted’ if the primary prediction turns out to be false. But the truth is that it’s the current system that measures a study’s worth by its p-values.
Preregistration is the dream that one day, studies will be judged, not by the significance of their Results, but by the content of their Methods.
P.S. Mathôt is also the creator of OpenSesame, a free psychological experiment development toolkit. I haven’t used it yet, but the various commercial ones certainly leave a lot to be desired…