Blinded Analysis For Better Science?

By Neuroskeptic | November 3, 2015 8:40 am


In an interesting Nature comment piece, Robert MacCoun and Saul Perlmutter say that “more fields should, like particle physics, adopt blind analysis to thwart bias”: Blind analysis: Hide results to seek the truth

As they put it,

Decades ago, physicists including Richard Feynman noticed something worrying. New estimates of basic physical constants were often closer to published values than would be expected given standard errors of measurement.

They realized that researchers were more likely to ‘confirm’ past results than refute them — results that did not conform to their expectation were more often systematically discarded or revised.

To minimize this problem, teams of particle physicists and cosmologists developed methods of blind analysis: temporarily and judiciously removing data labels and altering data values…

Blind analysis ensures that all analytical decisions have been completed, and all programmes and procedures debugged, before relevant results are revealed to the experimenter.

One investigator – or a computer program – methodically perturbs data values, data labels or both, often with several alternative versions of perturbation.

The rest of the team then conducts as much analysis as possible ‘in the dark’. Before unblinding, investigators should agree that they are sufficiently confident of their analysis to publish whatever the result turns out to be, without further rounds of debugging or rethinking.
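To make the procedure they describe a little more concrete, here is a minimal sketch in Python of one common form of blinding: replacing group labels with anonymous codes under a secret key, so analysts can debug and finalise the pipeline without knowing which group is which. Every name, seed and value below is a hypothetical illustration, not code from MacCoun and Perlmutter:

```python
import random

# Hypothetical secret seed, held only by the "blinding" investigator
# (or generated and sealed by a script) until the analysis is frozen.
SECRET_SEED = 42

def blind_labels(labels):
    """Replace group names with anonymous codes ('A', 'B', ...) using a
    secret random mapping. Analysts receive only the coded labels; the
    key stays sealed until everyone agrees the analysis is final."""
    groups = sorted(set(labels))
    codes = [chr(ord("A") + i) for i in range(len(groups))]
    rng = random.Random(SECRET_SEED)
    rng.shuffle(codes)  # which group gets which code is now secret
    mapping = dict(zip(groups, codes))
    return [mapping[g] for g in labels], mapping

def unblind(mapping):
    """Reveal the code -> group key, to be called only after the team
    has committed to publishing whatever the result turns out to be."""
    return {code: group for group, code in mapping.items()}

labels = ["treatment", "control", "control", "treatment"]
blinded, key = blind_labels(labels)
# Analysts work with `blinded` (e.g. ['B', 'A', 'A', 'B']);
# `key` stays with the blinding investigator until unblinding.
```

The same idea extends to the "altering data values" variant the authors mention: a script adds a secret offset to every measurement, preserving relative effects while making the headline number meaningless until unblinding.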

As a procedure, blind analysis has much in common with preregistration. Both involve the creation of a “Chinese wall” that prevents knowledge of the results from affecting decisions about the analysis. Both require a hypothesis of interest to be framed in advance. Both are intended to prevent p-hacking and other conscious and unconscious biases.

Blind analysis shares many of the same limitations as preregistration, too. MacCoun and Perlmutter discuss concerns such as “won’t people just peek at the raw data?”. This is analogous to a criticism commonly raised against preregistration, “won’t people just preregister retrospectively?” Both methods ultimately rest on trust.

MacCoun and Perlmutter discuss preregistration, but they argue that blind analysis is better because it offers flexibility: “preregistration requires that data-crunching plans are determined before analysis… but many analytical decisions (and computer programming bugs) cannot be anticipated.”

However, I think that preregistration and blind analysis could work together. Each brings important benefits.

For instance, preregistration ensures that negative results don’t just disappear unpublished. Indeed, with pre-peer review, where a journal agrees to publish a paper on the strength of its methods before the results (negative or otherwise) are collected, preregistration can actually help negative results get published. Blinded analysis alone doesn’t achieve that.

But blinded analysis could help to make preregistration more useful. A prespecified analysis plan could incorporate a blinded phase: rather than having to decide at the outset how (say) outliers will be treated, researchers could leave the question open, and then decide based on a blinded look at the final data. It would be the best of both worlds.


Neuroskeptic

No brain. No gain.

About Neuroskeptic

Neuroskeptic is a British neuroscientist who takes a skeptical look at his own field, and beyond. His blog offers a look at the latest developments in neuroscience, psychiatry and psychology through a critical lens.
