Cap and Trade Scientific False Positives?

By Neuroskeptic | April 24, 2014 3:39 pm

In a letter to Nature, University of Miami psychologists Michael McCullough and David Kelly propose a trading scheme to reduce false results.

Neuroskeptic readers will know that concern over false-positive science is growing. Many solutions have been proposed, but McCullough and Kelly’s is quite novel:

Cap-and-trade systems have proved useful in cutting pollutants such as sulphur dioxide, nitrogen oxides and lead additives in petrol. We suggest that they could also be applied to reduce pollution of the scientific literature with irreproducible results.

[Currently], researchers do not have to face the cost of publishing their own unverifiable results (most of which could have been prevented). That cost is borne by the scientific community and the public — for instance, in subsequent research inspired by false positives, which can lead to badly designed policies.

Cap-and-trade systems force excessive polluters to purchase permits. Initially, institutions could receive 5 free permits per 100 published results, reflecting the widely accepted ideal of a 5% false-positive production rate. It would then be necessary to buy extra permits from other institutions should they ‘emit’ significantly more false positives than this (irrespective of whether these were honest or deliberate errors).

Institutions that successfully reduce false positives in their research output could then sell off their surplus permits to other institutions that have exceeded their allocation. This flexibility would create incentives for researchers to find innovative ways to reduce false positives.
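As a rough illustration (not part of the letter), the permit arithmetic works out like this; the 5-permits-per-100-results rate comes from the proposal, while the function name and figures below are hypothetical:

```python
# Sketch of the permit accounting described above: each institution
# receives 5 free permits per 100 published results (a 5% allowance),
# and must buy permits for any false positives beyond that, or may
# sell its surplus. Function and example numbers are illustrative only.

def permit_balance(published_results, false_positives, free_rate=0.05):
    """Return the permit balance: positive means surplus permits to
    sell, negative means a shortfall that must be bought in."""
    free_permits = published_results * free_rate
    return free_permits - false_positives

# An institution with 400 published results gets 20 free permits.
# Emitting 28 false positives leaves it 8 permits short to buy;
# emitting only 12 leaves it 8 surplus permits to sell.
print(permit_balance(400, 28))  # -8.0 (must buy 8 permits)
print(permit_balance(400, 12))  # 8.0 (can sell 8 permits)
```

The trade itself is just a transfer of this balance between institutions, which is what creates the financial incentive to stay under the 5% allowance.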

On his blog, McCullough expands on this theme, explaining how it might all work in practice.

I like this idea, not least because it puts the emphasis on institutions, not individual researchers. It makes little sense to focus on whether an individual researcher has a high or low false positive rate, because the sample size (the number of results published per year) is too small.

It would be unfair to favor someone with 1/5 false positives last year over someone with 2/5 – the latter probably just got unlucky. But with institutions publishing hundreds of results per year, you could draw much stronger conclusions.
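A quick calculation makes the point concrete. Here is a sketch (my own illustration, not from the letter) using a standard 95% Wilson score interval for a binomial proportion: with only 5 results the intervals for 1/5 and 2/5 overlap almost completely, while institution-sized samples of 300 results cleanly separate a 5% rate from a 15% rate.

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# Two researchers with 5 results each: the intervals overlap heavily,
# so observing 1/5 vs 2/5 false positives tells us almost nothing.
print(wilson_interval(1, 5))    # roughly (0.04, 0.62)
print(wilson_interval(2, 5))    # roughly (0.12, 0.77)

# Two institutions with 300 results each: 5% vs 15% observed rates
# give clearly separated intervals.
print(wilson_interval(15, 300))   # roughly (0.03, 0.08)
print(wilson_interval(45, 300))   # roughly (0.11, 0.19)
```

This is why auditing at the institutional level is statistically meaningful in a way that individual-level scorekeeping is not.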

But as neat as it is, this cap-and-trade idea might take a long time to set up. In the interim, I wonder if simply auditing false positives – a necessary prerequisite for the cap-and-trade to work – would be nearly as good? Institutions would still be incentivized to reduce false positive rates, even without a formal quota system – just in terms of reputation.

Being publicly known as an institution with a high false positive rate would be its own punishment. Conversely, clean institutions would reap the rewards of being known as clean. These would be intangible (at first) but sooner or later a good reputation would turn into success in concrete terms: attracting funding, collaborations, and recruits.

McCullough ME, & Kelly DL (2014). Reproducibility: A trading scheme to reduce false results. Nature, 508 (7496). PMID: 24740058

  • P.

    “Institutions would still be incentivized to reduce false positive rates,
    even without a formal quota system – just in terms of reputation.”

    How about institutions making pre-registration of studies mandatory for all their in-house scientists? Would such a thing decrease false positive rates and increase the reputation of the institution?

    • Neuroskeptic

      I think it would do both!

      • P.

        It seems so easy to improve matters. Can’t wait for the first institute to set some higher standards for its in-house scientists. The emphasis so far has been on individual scientists and journals, and institutions have been left out of the discussion; yet an institution could have a great impact by making higher standards mandatory for all its in-house scientists.

  • sbitzer

    Shouldn’t you first find a reliable way of detecting false positives before you think about what to do with their counts?

  • Tom Campbell-Ricketts

    In fact, it’s extremely easy to reduce your false-positive rate to zero: present your results not as ‘pass’ or ‘fail,’ but as ‘we found this much evidence for that.’

    This would be a very good thing. It wouldn’t sharpen up research practices as directly as the proposed scheme is presumably intended to, but it would force scientists to think more like… um, scientists, actually.

  • cosmopolite

    Scientific instruments can measure air and water pollution rather well, in an objective manner. There is no equivalent procedure for determining whether a study is or is not guilty of a false positive result. This is a human judgement call, and the making of that call will become intensely politicised, with some cases landing in court.

    McCloskey and Ziliak make a strong case that current scientific reporting is statistically naive, and that this naivete biases the entire literature. The statistical calculations scientists report are largely driven by what current statistical software automates. Any major change in how data are analysed and summarised will require major changes in statistical software; Bayesian posterior odds, for example, are more sensible than bare statistical significance.


About Neuroskeptic

Neuroskeptic is a British neuroscientist who takes a skeptical look at his own field, and beyond. His blog offers a look at the latest developments in neuroscience, psychiatry and psychology through a critical lens.

