Fixing Science – Systems and Politics

By Neuroskeptic | April 14, 2012 8:49 am

There is increasing concern that the structure of modern science is flawed and that most published research findings may be false.

Commonly cited problems with how science works today include:

  • Publication bias and the file drawer problem.
  • “Result fishing”, data dredging etc. – analyzing data in different ways to “get a finding”
  • The privileging of “positive” results over “negative” ones.

I have previously argued that, to solve these problems, we need a way to ensure that scientists publicly announce which studies they are going to run, what methods they will use, and how they will analyze the data, before running their studies.

We already have such a registration system in place for clinical trials. It’s a good system. It’s not perfect but it’s helped. I propose we extend it to all science. But how would that work in practice?

I’m not sure. So what follows is a series of ideas. These are intended to spark debate.

Here are some options for systems:

    1. There could be a central registry, free and open to the public, where protocols are pre-registered. Call this the ‘ option’ because we already have one for clinical trials. This registry could also serve as a repository of results and raw data, but it wouldn’t have to.
    2. Academic journals could require studies to be pre-registered in order to be considered for publication: you submit the Introduction and Methods, these are peer reviewed, and if accepted, the journal is bound to publish the results when they arrive; the authors, for their part, are bound to follow their protocol (secondary analyses could take place, but they would be explicitly flagged as such) and submit the results.
    3. Scientific funding bodies could make all successful scientific grant applications public via an open database. These applications already contain pre-specified methods, hypotheses, and statistical analyses, in most cases; part of this plan could be to make these more detailed.
    4. Authors could have the individual responsibility to publicly announce their methods, hypotheses and plans before starting studies on their own websites.

    How can we actually make this happen? That’s a question of politics:

    1. Governments could introduce legislation to force this. This is the most extreme option. It is probably unviable, because it would place researchers in different jurisdictions under different rules. Science is a global enterprise, and we don’t have a global legislature. (The USA did this for clinical trials, but for various reasons these are a special case and more ‘international’ than others.)
    2. A consortium of major scientific journal editors could announce that they’ll only publish research that complies with the system. Notably, this was how clinical trial registration started.
    3. A consortium of major funding bodies could refuse to finance research that doesn’t adhere to the system.
    4. Individual scientists, journals, and funding bodies could unilaterally adopt the system. This would, at least at first, place these adopters at an objective disadvantage. However, by voluntarily accepting such a disadvantage, it might be hoped that such actors would gain acclaim as more trustworthy than non-adopters.

    My own preference would be for System 1 via a combination of Politics 2 and 3. Yet any combination of these options would be better than the current system.

    Some possible objections:

    1. Pre-registration of all science would be impractical. What about pilot studies and ‘tinkering’? – I’m only proposing that any research which might be published should be publicly registered. This leaves anyone free to tinker away all they like – in private. We just need to be clear, from the outset, whether we’re tinkering or doing ‘proper’ publishable research, a line which is currently very murky.
    2. Many interesting results are unexpected. Post-hoc analysis or interpretation of data is important. – There’s nothing wrong with post-hoc analysis or interpretation, so long as everyone knows it was post-hoc. The problem is when it is passed off as being a priori. Registration doesn’t seem to have discouraged legitimate post-hoc analyses in the case of clinical trials: there are lots of excellent post-hoc analyses coming out, clearly labelled as such.
    3. It would be unfair to scientists to make them ‘tip off’ their rivals about what they’re working on in advance. It would penalize originality. – My gut instinct here is that this is not a big problem; everyone would be in the same boat, so it would be a fair system. However, if this were felt to be a concern, there’s an easy solution – just build in a delay to the publication of registered protocols. Put them in a ‘sealed envelope’ to be opened after a 12- or 24-month ‘grace period’, and that would give people a head start while ensuring that their original protocol was eventually revealed.
    4. This wouldn’t solve all of the other problems with science. – No, it wouldn’t, and it’s not intended to. However, I do feel that we’ll struggle to make progress in other areas without something like this happening. The current system of post-results publication is not the only problem, but it is a large part of it.

    On that note, here’s a sketch of how I see this relating (or not) to some other issues in science today:

    Replication – there’s been much discussion of late around ensuring the replicability of results in certain fields, e.g. neuroimaging and psychology. My view is that most published false (i.e. unreplicable) findings are a product of publication bias and positive-result fishing. Solving those problems, as outlined here, would increase the replicability of science. It wouldn’t be a panacea. There will always be dodgy results due to fraud, incompetence, and bad luck, but the current system too often rewards scientists for fiddling around until they get a positive one.

    Careers – There is a widespread complaint that the current system of science is unsatisfactory. Our jobs, promotions, funding and tenure depend on our ability to generate high impact papers – which means, in effect, novel and interesting positive results. Pre-registration of science would change the game. Scientists would be judged on their ability to design and run interesting experiments, rather than on their ability to generate ‘good papers’.

    Open Access – The issue of free open access to scientific papers is an important one. It’s a separate question to the one I’ve discussed here, but I see a spiritual overlap. In both cases, the fundamental question is: who owns science? At the moment, scientists own their work until and unless they decide to publish parts of it. When they do, they sell it to a publisher, who sells it to the world. In my view, the world should be told about science, from the beginning.


    Fundamentally, this will only happen if a critical mass of scientists want it to happen. It will not be easy, but whereas four years ago I was, deep down, skeptical that it would ever be possible, today I really think it might.

    Already we’re seeing signs of hope, from informal pre-registration to calls for pre-registration in particular fields in major journals. Ten years ago, the idea was being written off as impractical, and with the technology available at the time, it probably was. I do not think that is true today.

    Change can happen. All it needs is will.

    • Anonymous

      Political solution?

      Alex Holcombe for Commander in Chief (USA), and have scientists as cabinet members. Hollywood already screams science, and unless the House of Representatives gives a damn about science, the incremental changes will hit wall after wall, weakening the advocates until there is no energy left. I know these people. They don't want science to flourish, and that, I say, is the fundamental political truth polluting the progression of science.

    • Anonymous

      You seem to have missed the most obvious objection: under your system, scientists would simply run their experiments, then publicly announce their intent to do so, get some funding, wait a suitable amount of time and then report the results which they had been sitting on all along. The funding received would be used for the next iteration of the same trick. All your suggestion does is impose a minor difficulty to overcome on the first iteration, i.e. how to run the first experiment without money, which may backfire by making it even harder than it is to break loose from established collaborations and do something new.

      As an aside, I thought it was common knowledge that grant applications are often written for studies already performed, exactly because doing so helps establish a track record of predictable performance.

    • yoav

      A group of psychologists are working on developing a study registry, and promoting other changes to fix psychological science (e.g., replication and open access publications).

      You can read more about them at

    • N

      Wait a minute, how are you supposed to discover anything new if you have to predict what you'll do and how? Certainly this doesn't include basic science right? Most of the interesting discoveries were made while looking for something else…

    • Neuroskeptic

      N: Yes, it includes basic science. All this means is that you have to be honest about what you expected to find. “Most of the interesting discoveries were made while looking for something else” – absolutely, I agree. But all preregistration would mean is that you have to state that you were looking for something else, when you report your unexpected cool result.

    • Neuroskeptic

      Anonymous: “under your system, scientists would simply run their experiments, then publicly announce their intent to do so, get some funding, wait a suitable amount of time and then report the results which they had been sitting on all along.”

      This is a worry. A few responses:
      1) People who played by the rules would have a time advantage under the proposed system. By registering their research before doing it, they would register sooner than someone who waited until after they'd done it (all else being equal), so they could claim priority. This might help to counterbalance the incentives to cheat.

      2) Under the proposed system the behaviour you describe would be clearly “breaking the rules”. It would probably come to be regarded as academic misconduct, on a par with doing research without first getting ethical approval. (Actually, at least in fields where people need ethical approval, it could be integrated with it. Ethics committees could decide that you don't get ethical approval until you publicly register. That would be much more useful than 90% of the stuff those committees worry about.)

    • EJ

      I completely agree with all of your points. A while ago my colleagues and I conducted a confirmatory replication study of one of the Bem precognition studies — just to illustrate how you can pre-register experiments online and take away all post-hoc degrees of freedom. This was a great idea, if only because it became quite clear to me that in my field (experimental psychology) people hardly do strictly confirmatory studies at all.

    • Hal P

      This idea (variants of which are being kicked around in many quarters these days) is very interesting and in my opinion, definitely deserves to be tried out. A few comments:

      1. You make it sound like it is going to be a disagreeable medicine that scientists are forced to take. That's bad marketing, and it's not completely true. In fact, there are important positives for scientists in going with this kind of reviewing scheme. Right now, if you obtain a result that seems hard to reconcile with Joe Schmoe's favored theory, 90% of the time, Schmoe will get your paper to review. Because Schmoe won't like his theory being threatened, as sure as the sun rises in the East he will find a bunch of deficiencies in your experiments. And he will be right, because every experiment falls short of the ideal study in many ways. Even if you know that Schmoe's own work never ever lives up to the standard he is espousing for your work, you can't complain to the editor about that–it just makes you look petty and personal. So you will have to run a nasty gauntlet of biased criticism every time.

      By contrast, with the pre-approval system (and please call it pre-approval rather than pre-registration if you want anyone to go for it–pre-registration sounds like nasty bureaucracy for its own sake), the reviewers don't have the opportunity to ding the study because they don't like the results.

      2. One important little tweak: reviewers should get to specify some outcome-neutral criteria for publishing the study, e.g., that you do not have a floor effect or ceiling effect, that manipulation checks turn out OK, etc. If you don't do this, then you are asking journals to precommit to publishing studies that fail to offer real tests of hypotheses and will have zero citations (which will make them hate the idea.)

    • Jon Brock

      We had a discussion along these lines on Dorothy Bishop's blog a little while ago

      My suggestion was for papers to be clearly identified as Experiments or Observations. The former would focus on hypothesis-driven analyses and would give greater prestige to replication attempts. The latter would allow focus on post hoc analyses (as per N's comment), but would be clearly signposted as such.

      Your objection was that there would still be an incentive to pretend that you'd predicted the outcome all along, in order for your paper to be classified as an Experiment rather than an Observation.

      It seems that you've provided the solution here. For a paper to be an Experiment, it has to be preregistered. Anything else is an Observation.

    • infinidiv

      While I think the combination of such a system with the burgeoning open access journals is very interesting and may, together, improve science, I do have one issue with the open access system. While it is much better in terms of getting knowledge to a wider audience, there is the issue that individuals with less funding will be caught in between, in particular when they have unpopular opinions. The high cost of open access will often prevent them from being able to publish (as compared to their current inability to get access to a wide array of publications through subscription costs). Are we not replacing one problem with another this way?

    • Ivana Fulli MD

      Still, you will have to be careful to let people explore areas of science that you consider no priority at all, or that go against dogma.

      An Italian physicist who published “against Einstein” had to resign recently. I hope he had been dishonest or silly.

      If not, it is like punishing an explorer for having lost his way in the jungle, when he was exploring at the risk of his own life while other explorers had just drunk tea, conversing in nice gardens. So to speak.

      Another problem – and I was waiting for somebody else to expose it – is the boss stealing the intellectual property of younger academics, or even worse, obliging young people to work only on their elders' ideas.

      The president of the Eur Psy Association wrote a paper I found astonishing, in a journal freely available at the last EPA meeting in Prague 2012: he told young researchers in psychiatry that they should not try to have ideas and do research, but that they should ask and help their elders to do their research, stating that they should humbly ask to be helpful by doing a bit of stats or whatever is useful in a big study…

      I went to an interesting session for “young psychiatrists” (I am old, but the French medical gestapo says I should learn more, so it makes me young in a way) and I was astonished to hear a young psychiatrist telling a British psychiatrist with white hair that he had complied with the older man's two-year-old advice, which was to learn Chinese because the future of research was in China, and asking the old, bright and funny man what to do next!

      Not to mention the persons calling themselves researchers who just submit their clients to drug protocols designed by Big Pharma.

      Sorry to be a bore, and I hope I am not off-topic.

    • Neuroskeptic

      EJ: Yeah, the preregistration of the Bem studies is a great example of why this is important.

      Jon Brock: Right. We need a clear distinction between a priori and post hoc work, because people interpret them differently (as indeed they should do). There's absolutely nothing wrong with Observations. Many of the best papers are Observations (including “Why Most Published Research Findings Are False”!) It's all a matter of honesty.

      • ferkan

        Exactly. There should be no shame in admitting to post hoc analyses / observations. It should be easier when everyone is ‘confessing’!

        • ferkan

          I’ve just realised that this article is 3 years old… but still relevant!

          • Neuroskeptic

            I’m glad you found it useful after all this time!

            It’s great to see some of these proposals becoming reality today. Slowly but surely.

    • Neuroskeptic

      Hal P: Thanks, those are both excellent comments & raise points I hadn't considered properly.

      Re: 1) I suppose I've tended to see it as a bitter pill to swallow because clinical trial registration certainly was, for Pharma, and I've had that in mind, but you're absolutely right, in many cases it would be beneficial. Certainly in the case of “unpopular” work as you say; also I think it would be of great benefit to people whose approach is just novel.

      At the moment if you come up with a really innovative new method, and invest time & money in it, you're gambling that the results will be “good” at the end. If your awesome method ends up supporting the null hypothesis you'll struggle to publish, certainly you're unlikely to get the kind of impact your method deserves. Even if your method is really good, and provides very good evidence for the null hypothesis of whatever question you apply it to.

      Re: 2) That's a good point, although I'd worry that it could be abused. Obviously journals don't want to publish data that's just trash, but we'd need to make sure that the “non-trash” criteria aren't used to smuggle in conservative assumptions about what the truth is (e.g. I'd worry that a journal with an editor who thinks that X causes Y might say “your study about W, X, Y and Z will only be published if you replicate the 'established fact' that X and Y correlate”).

    • Ivana Fulli MD

      And what about a rule making submitted papers be evaluated for publication by randomly chosen peers, instead of letting the editors do a little bit of politics when choosing to whom to send a paper to be reviewed before publication?

      Actually, I never thought it healthy that close competitors are reviewers for each other's papers.

      I remember that once upon a time I dreamed that my editor husband (of a non-psychiatry journal) received complaints that a paper under review had been delayed until one of the reviewers – who asked for many modifications of the writing, and took his time doing so – had himself published a similar study first, making the most of the intelligent discussion in the paper he was reviewing.

      It was a dream, of course, but how do we prevent it from becoming reality?

      By having a bank of reviewers in any given “subfield”, so to speak, without too-narrow boundaries?

    • Ivana Fulli MD

      Also, what about registering projects before you find clinicians who will agree to give you information and blood samples (or whatever) from their informed and consenting clients, in order not to have your intellectual property stolen when it is time to publish the results – like putting your name in fourth position, and changing the title in order to attract attention to themselves?

      NB: I know several researchers (for some reason all female) who had their intellectual property stolen, as I did myself: a clinical statistician who was at Paris VI with me to study very little stats signed in third position for having done a Chi2 – which I decided to use – in a 1986 paper in “The Lancet” about hormones and suicide attempts!

      This couldn't have happened if I had registered my protocol beforehand.

      More importantly, I am now convinced that some clients will be able to give the idea for excellent research protocols – take petrossia for example, or other aspies – and it is not fair to steal their property just because they are not researchers.

      Beware that it can be stolen in two ways:

      1) you take the elaborate idea from an aspie and you just forget him if he doesn't thank you for publishing it without his name on it;

      2) you see a poster from client-made research at a psychiatry meeting with a gorgeous idea, poorly tested and badly named, and you then do a great paper without mentioning them, since all they will get from their research will be that poorly named poster.

    • Nitpicker

      I think there are some interesting ideas here. On the whole I think it has a lot going for it (let's leave the question for how to actually make this change happen out of the equation for now). However, I think that you may be sacrificing flexibility in science by assuming that everyone should have to follow their initial design by the letter. As is the case with grant proposals, this usually couldn't be farther from the truth – often what people do is a far cry from what they originally proposed (except for the kinds of grant proposals mentioned above where the work has already been completed – this also seems to vary a little between different countries).

      I would challenge any researcher, especially in the fields of cognitive neuroscience/experimental psychology on their claim that they conduct all their research precisely as it was laid out in their initial project proposal. Even the best minds make oversights and can think of better approaches between the time they first conceive an experiment and when they actually collect the data. Often a glaring design flaw may only occur to you when you begin. Hopefully, you will have spotted this when you were testing your design in pilot experiments or a pre-experiment test but sometimes it may be due to factors in your actual experimental population you just could not foresee. The post above by EJ confirms this suspicion that this is not how things normally work and that there are good reasons for it.
      Perhaps the system you propose would select against those researchers who make a lot of these errors and who are therefore simply bad experimentalists. That may be a good thing but I think it sets the bar too high. Everyone can make mistakes.

      There is an associated problem with this proposal also. As was already pointed out above, this system would encourage people to conduct their research and then submit their “pre-registered” design and (after an arbitrary delay) the actual results. You say this would be against the rules but how do you validate this? In a lot of cases, you will probably not even be able to confirm that the data were in fact collected precisely in the manner that was outlined in the design.

      In summary, I agree that it sounds appealing and may fix some of the problems of the current system, but it is itself rife with potential problems – and that's not even taking into account the problems of actually making this a reality. In my mind, it would be far more efficient to improve the current system of peer review and post-publication evaluation of research so that it focuses more on substance than on impact factors and h-indices. I concede I don't know how to do that either, but it at least seems more realistic.

      • ferkan

        That’s what pilot studies are for… But in the real world, yes, I think the best way forward is that you should be made to preregister, and then any amendments should also be registered. This should not be too hard to do.

        Clearly however, a good and flexible system needs to be set up to meet the needs of different researchers. But it should not be beyond the wit of wo/man.

    • Martin Larsson

      Great post! I really dig this.

      However, I think some of the text has disappeared at “rather than on their ability to generate…”. Generate what? :)

    • Neuroskeptic

      Martin: Whoops. Fixed. I was drafting and redrafting this post for a while, it ended up a bit fragmented.

    • Neuroskeptic

      Nitpicker: In terms of not following your original protocol, I foresee this being handled as it is on existing clinical trial registries, i.e. you can amend your protocol, but all changes are tracked and your previous versions remain publicly accessible.

      Much like how on a Wiki you can see changes.

      As you say, protocol changes are inevitable & are not always a bad thing, but they're often used in such a way as to help generate desired results rather than to genuinely improve the experiment, and I think by publicly tracking them, readers would be able to judge which it is.
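      The append-only, wiki-style version tracking described above can be sketched as a simple data model. This is purely illustrative – the class and field names below are hypothetical, not the schema of any real registry:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List


@dataclass(frozen=True)
class ProtocolVersion:
    """One immutable snapshot of a registered study protocol."""
    registered_at: datetime
    hypotheses: str
    methods: str
    analysis_plan: str


@dataclass
class RegisteredProtocol:
    """A pre-registered study: amendments append a new version,
    never overwrite an old one, so every change stays visible."""
    study_id: str
    versions: List[ProtocolVersion] = field(default_factory=list)

    def register(self, hypotheses: str, methods: str, analysis_plan: str) -> None:
        # Each amendment is timestamped and appended to the public record.
        self.versions.append(ProtocolVersion(
            registered_at=datetime.now(timezone.utc),
            hypotheses=hypotheses,
            methods=methods,
            analysis_plan=analysis_plan,
        ))

    @property
    def current(self) -> ProtocolVersion:
        """The protocol the study is now bound to follow."""
        return self.versions[-1]

    def history(self) -> List[ProtocolVersion]:
        """All earlier versions remain publicly readable, wiki-style."""
        return list(self.versions)
```

      A reader could then compare `current` against the first entry in `history()` and judge for themselves whether an amendment genuinely improved the experiment or merely chased a desired result.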

      Re: “You say this would be against the rules but how do you validate this? In a lot of cases, you will probably not even be able to confirm that the data were in fact collected precisely in the manner that was outlined in the design.”

      I would say that this is already a problem. Anyone's data could just be made up or their methods could be described wrongly.

      I see the advantage of my proposal as follows: it removes the “middle ground” between outright fraud and best practice.

      Of course it would be possible to misbehave in a preapproval system, but doing so would involve outright lying (claiming you will do something in future when in fact it's already done, changing methods and not telling people, etc.)

      Currently, yes there is outright lying but a much bigger problem (in my view) is bending the truth without lying, through selective publication, data dredging etc. A paper could be entirely “true” in the sense that nothing in it is a lie, but still very bad science.

      Under preapproval, there's no middle ground (or much less) – you're either honest or a liar.

      And I think faced with that choice most people would be honest. But faced with shades of grey, people's sense of honesty gets murky.

    • Nitpicker

      @Neuroskeptic: I see what you're saying. I think the tracked changes to the protocol is certainly a good idea. As long as people don't get too nitpicky (kind of ironic ;->) about every little divergence from the preapproved protocol, it should probably be fine to do it this way.

      As far as the lying is concerned, however, I do think there is likely to be a grey area. You are probably right that most people wouldn't have completed entire projects already when they submit. However, it's much more likely that people will have collected some substantial “preliminary data” or that they will already be collecting data when they submit.

      This is perhaps also dishonest but at the same time you couldn't blame them. It can take a long time to write a paper, especially if there are many coauthors, and so if you simply wait for the protocol to be published before you even begin your experiment this causes considerable delay.

      Another aspect to consider is that some kind of pre-approval system, as you imagine it, already exists on a smaller scale. In many departments it is standard practice that new projects have to first be presented, discussed, and approved by a panel of researchers in the department. I don't know how common this is, but it should be encouraged everywhere. Naturally, this alone does not allow external researchers to verify that the investigators didn't diverge from their original proposal.

    • sigihale

      Here’s another analysis of problems/fixes … super important topic. We need to put our heads together and fix this!

    • Jon Wong

      Neuroskeptic, I think your article is very interesting. Personally, I like your Systems 2 solution combined with Politics 2 and 4. And I very much agree that Post-Hoc Analysis should be separate from the initial study and presented as such.




    About Neuroskeptic

    Neuroskeptic is a British neuroscientist who takes a skeptical look at his own field, and beyond. His blog offers a look at the latest developments in neuroscience, psychiatry and psychology through a critical lens.

