In its current issue, The New Yorker has an excellent piece on the prevalence of (unconscious) bias in scientific studies that builds on this recent must-read piece in The Atlantic. And to some extent, Jonah Lehrer’s New Yorker article builds on this story he did for Wired in 2009. Anyone interested in the scientific process should read all three, for they are provocative cautionary tales.
Back to Lehrer’s story in The New Yorker. I’m going to quote from it extensively because it’s behind a paywall, but I urge people to buy a copy of the issue off the newsstand, if possible. It’s that good.
His piece is an arrow into the heart of the scientific method:
The test of replicability, as it’s known, is the foundation of modern research. Replicability is how the community enforces itself. It’s a safeguard for the creep of subjectivity. Most of the time, scientists know what results they want, and that can influence the results they get. The premise of replicability is that the scientific community can correct for these flaws.
But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable. This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology.
This is not the same as scientific fraud, Lehrer writes:
Rather the problem seems to be one of subtle omissions and unconscious misperceptions, as researchers struggle to make sense of their results.
He then describes “one of the classic examples” of selective reporting:
While acupuncture is widely accepted as a medical treatment in various Asian countries, its use is much more contested in the West. These cultural differences have profoundly influenced the results of clinical trials. Between 1966 and 1995, there were forty-seven studies of acupuncture in China, Taiwan, and Japan, and every single trial concluded that acupuncture was an effective treatment. During the same period, there were ninety-four clinical trials of acupuncture in the United States, Sweden, and the U.K., and only fifty-six percent of these studies found any therapeutic benefits. As [University of Alberta biologist Richard] Palmer notes, this wide discrepancy suggests that scientists find ways to confirm their preferred hypothesis, disregarding what they don’t want to see. Our beliefs are a form of blindness.
Lehrer then introduces Stanford epidemiologist John Ioannidis, the star of The Atlantic story. Lehrer writes:
According to Ioannidis, the main problem is that too many researchers engage in what he calls “significance chasing,” or finding ways to interpret the data so that it passes the statistical test of significance–the ninety-five percent boundary invented by Ronald Fisher. “The scientists are so eager to pass this magical test that they start playing around with the numbers, trying to find anything that seems worthy,” Ioannidis says. In recent years, Ioannidis has become increasingly blunt about the pervasiveness of the problem. One of his most cited papers has a deliberately provocative title: “Why Most Published Research Findings Are False.”
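Lehrer’s article is prose, but the mechanism behind “significance chasing” is easy to see in a quick simulation. The sketch below is my own illustration, not anything from the article: it runs many experiments on pure noise (no real effect anywhere) and counts how many clear Fisher’s p < 0.05 bar anyway. Test enough hypotheses and a “publishable” result is nearly guaranteed by chance alone.

```python
# Hypothetical illustration of "significance chasing" (not from Lehrer's piece).
# Run 1,000 two-group comparisons where both groups are pure noise, and count
# how many cross the conventional two-sided 5% significance threshold.
import math
import random

random.seed(42)

def two_sample_z(a, b):
    """Approximate two-sample z statistic for equal-size samples."""
    n = len(a)
    mean_a = sum(a) / n
    mean_b = sum(b) / n
    var_a = sum((x - mean_a) ** 2 for x in a) / (n - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (n - 1)
    return (mean_a - mean_b) / math.sqrt(var_a / n + var_b / n)

trials = 1000   # 1,000 independent "experiments", all on noise
n = 50          # 50 subjects per group
z_cutoff = 1.96 # two-sided p < 0.05 under the normal approximation

false_positives = sum(
    abs(two_sample_z([random.gauss(0, 1) for _ in range(n)],
                     [random.gauss(0, 1) for _ in range(n)])) > z_cutoff
    for _ in range(trials)
)

print(f"'Significant' results with no real effect: {false_positives}/{trials}")
```

Roughly five percent of the trials come out “significant” despite there being no effect to find, which is exactly why a researcher who keeps slicing the data until something passes the test will eventually succeed.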
The problem of selective reporting is rooted in a fundamental cognitive flaw, which is that we like proving ourselves right and hate being wrong. “It feels good to validate a hypothesis,” Ioannidis said. “It feels even better when you’ve got a financial interest in the idea or your career depends on it. And that’s why, even after a claim has been systematically disproven”–he cites, for instance, the early work on hormone replacement therapy, or claims involving various vitamins–“you still see some stubborn researchers citing the first few studies that show a strong effect. They really want to believe that it’s true.”
That’s why [UC Santa Barbara cognitive psychologist Jonathan] Schooler argues that scientists need to become more rigorous about data collection before they publish. “We’re wasting our time chasing after bad studies and underpowered experiments,” he says. The current “obsession” with replicability distracts from the real problem, which is faulty design… In a forthcoming paper, Schooler recommends the establishment of an open-source database, in which researchers are required to outline their planned investigations and document all their results. “I think this would provide a huge increase in access to scientific work and give us a much better way to judge the quality of an experiment,” Schooler says.
As I said, you really should read the whole piece if you want to learn more about this widespread but little-discussed problem with a key tenet of the scientific method. Lehrer perceptively concludes:
We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe.
UPDATE: Jonah Lehrer’s New Yorker article has spawned much discussion in the science blogosphere. See Jerry Coyne, Randy Olson, Steven Novella, John Horgan, Matthew Nisbet, Charlie Petit, David Gorski, and Judith Curry. Additionally, Lehrer, at his blog, elaborates on what his article is NOT implying.
UPDATE: Five days after hitting the send button on my post, I see that Marc Morano has linked to it. Readers coming here via Climate Depot should be aware of Jonah Lehrer’s answer to a reader who asks: “Does this mean I don’t have to believe in climate change?” Lehrer’s response:
One of the sad ironies of scientific denialism is that we tend to be skeptical of precisely the wrong kind of scientific claims. In poll after poll, Americans have dismissed two of the most robust and widely tested theories of modern science: evolution by natural selection and climate change. These are theories that have been verified in thousands of different ways by thousands of different scientists working in many different fields.
I concur with Lehrer’s assessment of the science underlying evolution and climate change.