Last week, I wrote about a social psychology paper which was retracted after the data turned out to be fraudulent. The sole author on that paper, William Hart, blamed an unnamed graduate student for the misconduct.
Now, more details have emerged about the case. On Tuesday, psychologist Rolf Zwaan blogged about how he was the one who first discovered a problem with Hart’s data, in relation to a different paper. Back in 2015, Zwaan had co-authored a paper reporting a failure to replicate a 2011 study by Hart & Albarracín. During the peer review process, Hart and his colleagues were asked to write a commentary that would appear alongside the paper.
Zwaan reports that Hart’s team submitted a commentary which presented their own successful replication of the finding in question. However, Zwaan was suspicious of this convenient “replication” and decided to take a look at the raw data. He noticed anomalies and, after some discussion, Hart’s “replication” was removed from the commentary. When the commentary was eventually published, it contained no reference to the problematic replication.
Meanwhile, following an investigation, Hart’s nameless student confessed to manipulating the data in the “replication” and also in other previous studies – Hart’s retracted paper being one of them.
There are a number of lessons we can take from this story but to me, it serves as a reminder that scientists should not be replicating their own work. Replication is a crucial part of science, but “auto-replications” put researchers under great pressure to find a certain result.
For a career-minded scientist, failing to replicate your own work is worse than never attempting the replication at all. First, because replications are less sexy than original studies and usually end up in low-ranking journals. But it gets worse – if you publish an effect and then later fail to replicate it, an observer (e.g. someone deciding whether to award you a grant, fellowship, or job) might conclude that you don’t know what you’re doing.
In order to succeed, researchers today are expected to craft and project a “career narrative” in which all of their experiments and papers constitute a beautiful upward arc of progress. It’s very difficult to fit a negative auto-replication into such a tidy and optimistic story. This is why “failed” studies, especially replications, tend to end up unpublished. Or, as in the Hart case, worse happens.
Here’s another way of looking at it: a replication attempt has much in common with peer review, in that both are evaluations of the validity of a scientific claim. Who would want scientists to peer review their own work?
So I wonder if we should “discount” apparently successful auto-replications: perhaps when performing a meta-analysis, we should include the largest study from each research group and ignore the others. I think we certainly shouldn’t expect scientists to replicate their own work before they can publish it. Rather, we should encourage scientists to perform more independent replications of other people’s studies.
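As a minimal sketch of the filtering rule suggested above – keeping only the largest study from each research group before pooling – the following Python snippet shows one way it might work. The study entries, group names, and field names are invented purely for illustration, not taken from any real meta-analysis:

```python
# Hypothetical illustration of the "largest study per research group" rule.
# All data below is made up for the sake of the example.
studies = [
    {"group": "Lab A", "n": 80,  "effect": 0.45},
    {"group": "Lab A", "n": 120, "effect": 0.40},
    {"group": "Lab B", "n": 200, "effect": 0.02},
]

def largest_per_group(studies):
    """Keep only the study with the largest sample size (n) from each group."""
    best = {}
    for study in studies:
        group = study["group"]
        if group not in best or study["n"] > best[group]["n"]:
            best[group] = study
    return list(best.values())

filtered = largest_per_group(studies)
# Lab A's two studies collapse to the n=120 one; Lab B's single study remains.
```

The idea is simply that within-group duplicates (which may be auto-replications) contribute only one data point, so an inflated “successful” auto-replication cannot double-count in favour of the original finding.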