Ed Yong has a piece in Nature on the problems of confirmation bias and replication in psychology. Yong notes that “It has become common practice, for example, to tweak experimental designs in ways that practically guarantee positive results.” The way this has been explained to me is that you perform an experiment and get a p-value of > 0.05 (not statistically significant). You know that your hunch is warranted, so you just modulate the experiment and hope that the p-value comes in at < 0.05, and you have publishable results! Obviously this is not just a problem in psychology; John Ioannidis has famously focused on medicine. But here’s a chart which shows that positive results are particularly prevalent in psychology:
There are many angles to this story, but one which Ed did not touch upon is the political homogeneity of psychology as a discipline. The vast majority of psychologists are political liberals. The issue of false positive results being ubiquitous is pretty well known within psychology, so I’m sure that’s one reason Jonathan Haidt has emphasized the ideological blinders of scholars so much. Let’s assume that the pool of false positives available to support a wide array of hypotheses is rather large. In other words, if you have the will, you can support many alternative hypotheses. How then do you support your hypothesis? In all likelihood, consciously or unconsciously, you are guided by normative considerations. From the pot of “statistically significant” results you just peel away the ones which align with your preferences.
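Both mechanisms above are easy to see in a toy simulation. The sketch below is a deliberately simplified model, not anyone’s actual study design: under the null hypothesis a p-value is uniform on [0, 1], and each “tweak” is modeled as a fresh independent draw (real re-analyses are correlated, so this overstates the inflation somewhat). The “aligns with the researcher’s prior” rule is a purely hypothetical stand-in, assigned arbitrarily to half the studies.

```python
import random

random.seed(42)

ALPHA = 0.05
N_STUDIES = 10_000   # hypothetical studies of effects that are all null
MAX_TWEAKS = 5       # re-analyses tried before a study is abandoned

# Step 1: "tweak until significant". Under the null, each analysis
# yields a p-value uniform on [0, 1]; a tweak is a fresh draw.
significant = []
for study in range(N_STUDIES):
    for _ in range(MAX_TWEAKS):
        if random.random() < ALPHA:
            significant.append(study)
            break  # stop at the first publishable p-value

rate = len(significant) / N_STUDIES
print(f"nominal alpha: {ALPHA}")
print(f"false-positive rate after up to {MAX_TWEAKS} tweaks: {rate:.3f}")
# analytically: 1 - (1 - ALPHA)**MAX_TWEAKS, roughly 0.226

# Step 2: from the pot of spurious positives, keep only the ones that
# happen to align with the researcher's prior (here, arbitrarily, the
# even-numbered studies -- a stand-in for ideological congeniality).
reported = [s for s in significant if s % 2 == 0]
print(f"spurious positives: {len(significant)}, reported: {len(reported)}")
```

With five tries per study, the chance of at least one spurious hit is about 23 percent rather than the nominal 5 percent, and the final filtering step means that what gets reported is not even a random sample of those errors.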
All of this is one reason why I’m rather skeptical whenever I hear that a psychologist has dispassionately waded into a domain of study and come back with objective and incontrovertible evidence supporting their own position. I can go in and do that too. Or more concretely, how hard has it been for you to find “sources” which support whichever crazy opinion you want to hold on Google?
Knowledge is hard.