Neuroskeptic is a neuroscientist who takes a skeptical look at his own field and beyond at the Neuroskeptic blog.
Fraud is one of the most serious concerns in science today. Every case of fraud undermines confidence amongst researchers and the public, threatens the careers of collaborators and students of the fraudster (who are usually entirely innocent), and can represent millions of dollars in wasted funds. And although it remains rare, there is concern that the problem may be getting worse.
But now some scientists are fighting back against fraud—using the methods of science itself. The basic idea is very simple. Real data collected by scientists in experiments and observations is noisy; there’s always random variation and measurement error, whether what’s being measured is the response of a cell to a particular gene, or the death rate in cancer patients on a new drug.
When fraudsters decide to make up data, or to modify real data in a fraudulent way, they often create data which is just “too good”—with less variation than would be seen in reality. Using statistical methods, a number of researchers have successfully caught data fabrication by detecting data which is less random than real results.
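The intuition is easy to demonstrate in simulation. The sketch below (a toy illustration, not any of the actual statistical tests these researchers used; the condition means, sample size, and noise level are all made up for the example) compares honest experiments, where each condition's mean inherits sampling noise, against fabricated results nudged to sit implausibly close to the predicted values:

```python
import random
import statistics

# Hypothetical setup: three experimental conditions with predicted means,
# 20 subjects per condition, per-subject standard deviation of 10.
random.seed(0)
N_SUBJECTS = 20
SIGMA = 10.0
PREDICTED_MEANS = [50.0, 55.0, 60.0]

def run_honest_study():
    """Sample each condition; its mean carries sampling noise ~ SIGMA/sqrt(n)."""
    means = []
    for mu in PREDICTED_MEANS:
        sample = [random.gauss(mu, SIGMA) for _ in range(N_SUBJECTS)]
        means.append(statistics.mean(sample))
    return means

def fabricate_study():
    """A fabricator who nudges each condition mean very close to the
    prediction leaves far less variation than honest sampling produces."""
    return [mu + random.gauss(0.0, 0.3) for mu in PREDICTED_MEANS]

def mean_abs_deviation(observed):
    """Average distance between observed condition means and predictions."""
    return statistics.mean(abs(m - mu) for m, mu in zip(observed, PREDICTED_MEANS))

# Across many simulated studies, honest condition means scatter around the
# predictions by roughly SIGMA / sqrt(N_SUBJECTS) ~ 2.2 units; the
# fabricated means hug the predictions far more tightly than chance allows.
honest_dev = statistics.mean(mean_abs_deviation(run_honest_study()) for _ in range(2000))
faked_dev = statistics.mean(mean_abs_deviation(fabricate_study()) for _ in range(2000))
print(f"honest deviation: {honest_dev:.2f}, fabricated deviation: {faked_dev:.2f}")
```

Results that consistently land this close to prediction are the statistical fingerprint of data that is "too good": real sampling noise cannot be wished away, so its absence is itself evidence.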
Most recently, Uri Simonsohn applied this approach to his own field, social psychology. He has two “hits” to his name, and more may be on the way.
Simonsohn used a number of statistical methods, but in essence they were all based on spotting too-good-to-be-true data. In the case of the Belgian marketing psychologist Dirk Smeesters, Simonsohn noticed that the results of one experiment conducted by Smeesters were suspiciously "good": They matched his predictions almost perfectly.