Raw Data: How Widespread is the Problem of Irreproducible Results?

By George Johnson | January 29, 2014 4:17 pm

Last week in the first installment of “Raw Data,” my new monthly column for the New York Times, I reflected on what has become known in science as the problem of irreproducible results. The fear that the corpus of scientific knowledge is becoming polluted with questionable findings — experiments that cannot be replicated by other laboratories — has become so great that the journal Nature has promised to implement new measures “improving the consistency and quality of reporting in life-sciences articles” and has compiled an eye-opening archive called “Challenges in Reproducible Research.”

The concerns are arising not just in epidemiological studies — where some effect (a drug, food, behavior, or an environmental contaminant) is correlated positively or negatively with human health — but also in bench research. This is the science of petri dishes and chemical reagents, with subjects ranging in complexity from human cells to genetically altered mice. For me the most jarring revelation was the failure of a team of scientists to replicate the results of 47 of 53 “landmark” experiments in the biology of cancer. I couldn’t help wondering whether any of these studies are ones that I cite in my own book, The Cancer Chronicles. Because of confidentiality agreements, the information is being kept under wraps.

What I concentrated on in my Times column was not scientific fraud, which I believe is very rare, but the ease with which we humans — master detectors of patterns — can fool ourselves into seeing regularities that aren’t really there. Pictures in the clouds. With all good intentions, these findings may sometimes enter into the body of scientific literature to be cited and cited without ever being verified. As I’ve written here before, it can be maddeningly difficult to distinguish between what we see and what we think we see. That is a phenomenon that has fascinated me since I wrote my book Fire in the Mind, and I touched on it again in the inaugural post for this blog, “Lighting the Match.”

The largest share of published research — about 50 percent — involves the life sciences, and that field has become the focus of the recent controversy over reproducibility. But what about the equally important work going on in physics, chemistry, geology, and other so-called physical sciences? My colleague Faye Flam raises that question in a tough critique of my column on the Knight Science Journalism Tracker.

As far as I have been able to discover, research in these fields has not been subjected to the same kind of scrutiny that John Ioannidis and Glenn Begley (sources for my Times column) have brought to bear on the biomedical world. There have been no claims that dozens of landmark findings in, say, high-energy physics or nuclear chemistry have failed replication. One reason may be that higher statistical standards are often used to analyze data, particularly in particle accelerator research. And highly publicized discoveries like the confirmation of the top quark and the recent discovery of the Higgs boson include built-in replication — two detectors run by different teams of scientists converging on similar results.

Flam suggests another reason that the physics literature may be comparatively sound:

[Physicists] say that they can’t just make up any old hypothesis because they are tightly constrained by quantum mechanics and general relativity. And they’re constrained by umpteen measurements of the way atoms and particles and light behave in the real world. So they can’t get away with just dreaming up long-shot hypotheses without violating some aspect of reality as it’s been measured.

Another consideration is the fact that human bodies are all different, while all electrons, protons and Higgs bosons are the same, so naturally you won’t get the same kind of “truth” asking about the mass of the electron as you get asking about the risks and benefits of a daily aspirin.

Those are good points when applied to the differences between particle physics and epidemiology. But that is just a small part of biomedical science. What about the delicate and exquisitely controlled experiments that occur in laboratories? Are hypotheses involving intracellular enzyme pathways and the effects of microRNA on protein regulation so much less constrained than, say, solid-state physics and materials science?

In the current issue of Science, Robert Service describes a feud over whether a researcher in Switzerland has or has not created exotic entities: gold nanoparticles striped with organic molecules. In any kind of laboratory science there is a danger, as I wrote in the Times, of “unknowingly smuggling one’s expectations into the results, like a message coaxed from a Ouija board.”

These are the kinds of matters I will be exploring in “Raw Data” and in a final post or two for this blog.


Comments and corrections are welcome by email. For public discussion please use Twitter.

For a glimpse of my new book, The Cancer Chronicles, please see this website.

@byGeorgeJohnson

  • Matthew Slyfield

    The journals themselves are a major part of the problem.

    1. There is a bias against reporting negative results.

    2. The journals prefer novel work. Replication efforts don’t get published.

Fire in the Mind

Whether a subtle new pattern shows up in an experiment on the Higgs boson, an epidemiological report about a suspected cancer cluster, or a double-blind trial purporting to demonstrate ESP, it can be maddeningly difficult to distinguish between what we see and what we think we see. "Fire in the Mind" takes a look at the big questions behind today’s science news.

About George Johnson

George Johnson writes about science for the New York Times, National Geographic Magazine, Slate, and other publications. His nine books include The Cancer Chronicles: Unlocking Medicine's Deepest Mystery (August 2013), The Ten Most Beautiful Experiments, A Shortcut Through Time, and Fire in the Mind. He is a winner of the AAAS Science Journalism Award and has twice been a finalist for the Royal Society science book prize. Co-founder and director of the Santa Fe Science Writing Workshop, he can be found on the Web at talaya.net. Twitter @byGeorgeJohnson.
