The hottest story in science over the past couple of weeks has been the accusations of fraud against UCLA political science PhD student Michael LaCour.
The allegations were posted online on May 19th and they concern one of LaCour’s papers, published in Science, called When contact changes minds: An experiment on transmission of support for gay equality. On May 28th the paper was retracted at the request of LaCour’s co-author, Donald Green, but LaCour stood by the data and disagreed with the retraction.
There have been many twists and turns in this case – LaCour has admitted lying about some aspects of the data collection. In this post, however, I’ll focus on the data and on LaCour’s rebuttal to the original accusations, which he posted on May 29th.
LaCour’s key data are measures of attitudes towards gay marriage, using a 0-100 scale called a ‘feeling thermometer.’ LaCour measured this at baseline and then at subsequent timepoints.
According to the accusers, led by David Broockman, LaCour’s baseline feeling thermometer data are statistically indistinguishable from a large existing gay marriage feeling thermometer dataset called CCAP. The implication is that LaCour faked his data by randomly selecting datapoints from CCAP.
The critics showed histograms of the two baseline datasets in LaCour and Green (2014) alongside the CCAP thermometer. They are virtually identical, and a statistical test confirms it (p = 0.4776, i.e. no significant difference).
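To see how such a comparison works, here is a minimal numpy sketch using invented data (the specific test the critics used isn’t given here, so I’ll use the two-sample Kolmogorov–Smirnov statistic, a standard way to compare two empirical distributions). Data drawn directly from a reference dataset is nearly indistinguishable from it, while genuinely different data is not:

```python
import numpy as np

def ks_statistic(a, b):
    # Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    # difference between the two samples' empirical CDFs.
    a, b = np.sort(np.asarray(a)), np.sort(np.asarray(b))
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return np.abs(cdf_a - cdf_b).max()

rng = np.random.default_rng(0)

# A CCAP-like reference sample of 0-100 thermometer scores (invented).
ccap = rng.integers(0, 101, size=10_000)

# 'Study' data drawn directly from the reference, mimicking the
# accusation that the published data were resampled from CCAP.
resampled = rng.choice(ccap, size=1_000, replace=True)

# Genuinely different data on the same scale, shifted towards 100.
shifted = np.clip(resampled + 20, 0, 100)

print(ks_statistic(ccap, resampled))  # small: indistinguishable
print(ks_statistic(ccap, shifted))    # large: clearly different
```

A KS statistic near zero (and a correspondingly large p-value, like the reported 0.4776) is exactly what you expect if one sample was lifted from the other.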
In his rebuttal, LaCour disputes this, and implies that the critics are themselves guilty of intentional misrepresentation. He writes that they
Selected the incorrect variable from CCAP, they then further manipulate this variable to make the distribution look more like that in LaCour and Green (2014).
When the correct variable is used, the distributions between the CCAP thermometer and the LaCour and Green (2014) thermometer are statistically distinguishable.
Selecting the incorrect variable may have been an oversight, but further manipulating that variable to make the distribution look more like LaCour and Green (2014) is a curious and possibly intentional “error.”
But to my mind, his objections are very weak. LaCour says that Broockman et al. used the CCAP variable ‘gaytherm’ whereas they should have used one called ‘pp gays t’.
The only difference between them, however, is that ‘gaytherm’ codes some missing responses as 50 (i.e. the midpoint of the scale). The ‘further manipulation’ LaCour decries likewise amounted to replacing missing data with 50s.
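The effect of that midpoint recode is easy to demonstrate. In this sketch (data invented for illustration), replacing missing responses with 50 adds mass to the 50 bin and changes nothing else in the distribution:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented example: 0-100 thermometer responses with some values
# missing (np.nan), standing in for the raw CCAP-style variable.
raw = rng.integers(0, 101, size=1_000).astype(float)
raw[rng.random(1_000) < 0.15] = np.nan   # make ~15% missing

# The 'gaytherm'-style recode: substitute the scale midpoint, 50,
# for every missing response.
imputed = np.where(np.isnan(raw), 50.0, raw)

n50_raw = int(np.sum(raw == 50))
n50_imputed = int(np.sum(imputed == 50))
print(n50_raw, n50_imputed)
# Only the count at 50 grows; every non-missing response is untouched,
# so the two histograms differ in exactly one bar.
```

This is why the two CCAP variables look so alike apart from the spike at 50 – midpoint imputation cannot alter any other part of the histogram.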
LaCour says that his data are statistically distinguishable from ‘pp gays t’ and he presents a histogram of the ‘correct’ CCAP variable:
Yet the only difference between the two CCAP versions is that this one contains fewer 50s. LaCour writes that the distributions are “quite different”, but there is only one difference: the 50s. Everything else is identical.
LaCour claims that there is “a modal spike at 100 in the CCAP data, no such spike exists in LaCour and Green (2014)” – but the very same spike at 100 is clearly visible in his data; it just looks smaller because the spike at 50 is even bigger!
Overall, I’d say that these results are fully consistent with the theory that LaCour’s data were taken from CCAP with missing items replaced with 50s. This would have been a natural approach to the missing items, because the CCAP dataset itself makes this substitution in the variable called ‘gaytherm’.
As far as I can see, LaCour has failed to refute this central criticism of Broockman et al.