There was quite the stir a few weeks back about a psychology paper claiming that rich people aren’t very nice: Higher social class predicts increased unethical behavior.
The article, in PNAS, reported that upper class individuals were more likely to lie, cheat, and break traffic laws.
However, these results have been branded “unbelievable” in a Letter to PNAS just published. Psychologist Gregory Francis notes that the paper contains the results of 7 separate experiments, and that all 7 found statistically significant socioeconomic effects on unethical behaviour.
Those 7 replications of the effect “might appear to provide strong evidence for the claim” – one study good, 7 studies better, right? – but Francis says that actually, it’s too good to be believed.
Each of the studies was fairly small, and the effects they found were modest and only just significant. So the observed power of each study – the probability that a study of that size would detect the effect it did, in fact, find – was only about 50–88%.
Think of it this way: if you took a pack of cards and discarded half of the black ones, then shuffled the remainder, a random card from the deck would most likely be red. But even so, it would be unlikely that you’d pick seven reds in a row.
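The card analogy can be checked directly. A minimal sketch, assuming the deck ends up with 26 red and 13 black cards after half the black ones are discarded: even though any single draw is most likely red, the chance of seven reds in a row (drawn without replacement) is only about 4%.

```python
from math import comb

# Deck after discarding half the black cards: 26 red + 13 black = 39 cards.
red, total, draws = 26, 39, 7

# Probability that all 7 cards drawn (without replacement) are red:
# the number of all-red 7-card hands over the number of possible 7-card hands.
p_all_red = comb(red, draws) / comb(total, draws)

print(f"{p_all_red:.3f}")  # roughly 0.043 - a run of 7 reds is unlikely
```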
By Francis’s calculations, the chance of all 7 studies finding a positive result – even assuming that the effect claimed in the paper is real – is just 2%.
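Francis’s test rests on a simple multiplication: if the experiments are independent, the probability that every one of them reaches significance is the product of their individual powers. A sketch with made-up power values from the 50–88% range quoted above (these are illustrative, not the actual figures from the Letter):

```python
from math import prod

# Hypothetical observed powers for the 7 experiments - illustrative values
# in the 50-88% range, NOT the actual numbers from Francis's Letter.
powers = [0.55, 0.60, 0.65, 0.70, 0.75, 0.80, 0.88]

# Assuming independence, the chance that all 7 experiments come out
# significant is the product of their individual powers.
p_all_significant = prod(powers)

print(f"{p_all_significant:.3f}")  # roughly 0.08, even though each study alone is more likely than not to "work"
```

With powers nearer the lower end of that range across the board, the product drops further still, towards the 2% figure Francis reports.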
He concludes: “The low probability of the experimental findings suggests that the data are contaminated with publication bias. Piff et al. may have (perhaps unwittingly) run, but not reported, additional experiments that failed to reject the null hypothesis (the file drawer problem), or they may have run the experiments in a way that improperly increased the rejection rate of the null hypothesis (4)”.
What might have happened? Maybe there were more than 7 studies and only the positive ones were published. Maybe the authors peeked at the early data before settling on the sample size, or took other outcome measures that showed no effect and went unreported. See also the 9 Circles of Scientific Hell.
Or maybe not. Piff et al. respond in their own Letter, firmly denying that they ran any other unpublished experiments, and saying that they “scrutinized our data collection procedures, coding protocols, experimental methods, and debriefing responses. In no case have we found anything untoward.” They go on to criticize the method Francis used to get his magic 2% figure, which they point out relies on some debatable assumptions.
Even if you buy the 2% figure, it doesn’t mean that the true effect is zero; it might be real, but exaggerated. Ultimately it all becomes rather murky and subjective, which is why I think we need preregistration of research, which would prevent any possibility of such data fiddling, and also remove the possibility of false accusations of it… but that’s another story.
Francis, G. (2012). Evidence that publication bias contaminated studies relating social class and unethical behavior. Proceedings of the National Academy of Sciences. DOI: 10.1073/pnas.1203591109