How To Fix Science

By Neuroskeptic | May 24, 2011 7:51 am

Over at Bad Science, Ben Goldacre discusses a big problem with modern science – the published literature is all very well and good, but we don’t know what people are finding that goes unpublished:

The scale of the academic universe is dizzying, after all. Our most recent estimate is that there are over 24,000 academic journals in existence, 1.3 million academic papers published every year, and over 50 million papers published since scholarship began.
And for every one of these 50 million papers there will be unknowable quantities of blind alleys, abandoned experiments, conference presentations, work in progress seminars, and more. Look at the vast number of undergraduate and masters dissertations that had an interesting finding, and got turned into finished academic papers: and then think about the even vaster number that don’t…
We are living in the age of information, and vast tracts of data are being generated around the world on every continent and every question. A £200 laptop will let you run endless statistical analyses. The most interesting questions aren’t around individual nuggets of data, but rather how we can corral it to create an information architecture which serves up the whole picture.

I agree with all of this. It is a problem. In fact I’d say it’s the single biggest problem with science today. Scientists are required to publish ever-increasing numbers of high-impact papers, in order to get grants and promotions, with the “best” papers, usually meaning the ones with the most interesting positive results, being favored.

Findings showing that nothing especially interesting is going on all too often get swept under the carpet, or re-analyzed again and again until a positive result falls out. If you do a study of a certain gene and its association with brain function, say, and find it has no association: that's bad news for you. That will make a low-impact paper, if it makes a paper at all. But maybe the gene has an association with brain structure? Or personality?
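The arithmetic behind this is easy to check with a toy simulation (my illustration, not from the post): if the gene truly has no effect but you test five independent outcome measures at p < 0.05, the chance of at least one spurious "hit" is 1 − 0.95^5 ≈ 23%, not 5%.

```python
import random

random.seed(1)

def null_study(n_outcomes, alpha=0.05, trials=2000):
    """Simulate studies where the true effect is zero, each testing
    n_outcomes different measures (function, structure, personality...).
    Returns the fraction of studies with at least one 'significant' result."""
    hits = 0
    for _ in range(trials):
        # under the null, each outcome's p-value is uniform on (0, 1)
        pvals = [random.random() for _ in range(n_outcomes)]
        if min(pvals) < alpha:
            hits += 1
    return hits / trials

print(null_study(1))  # ~0.05: one pre-planned test, nominal error rate
print(null_study(5))  # ~0.23: five outcomes tried, 1 - 0.95**5
```

Pre-registration doesn't forbid the extra analyses; it just makes the difference between the first number and the second visible to the reader.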

Anyway, that’s the problem. What to do about it? Goldacre notes that in medicine, there are mechanisms in place to deal with this:

In medicine, where the stakes are tangible, systems have grown up to try and cope with this problem: trials are supposed to be registered before they begin, so we can notice the results that get left unpublished. But even here, the systems are imperfect; and pre-registration is very rarely done, even in medical research, for anything other than trials.

Clinical trial pre-registration is a fantastic idea. The systems are certainly imperfect, but they're getting better, and they're much better than nothing. Back in 2008 I proposed that all scientific studies, not just clinical trials, should be publicly pre-registered. That way everyone could know what science was going unpublished, and could tell when authors were doing analyses they hadn't originally planned to do (which is fine, so long as you admit to it).

I still think that would be a good idea. But how would it work in practice? Here’s what I’ve come up with:

Scientific papers should be submitted to journals for publication before the research has started. The Introduction and the Methods section, detailing what you plan to do and why, would then get peer reviewed. The rest of the paper would obviously be a blank at this stage. Anonymous experts would have a chance to critique the methods and rationale.

If the paper's accepted, you then do the research, get the results, and write the Results and Discussion sections of the paper. The journal is then required to publish the final paper, assuming that you kept to the original plan. The Introduction and primary Methods would be fixed – you can't change them once the data come in.

You can do additional stuff and run additional analyses all you like, but they’ll be marked as secondary, which of course is what they are. Publication would therefore be based on the scientific merits of the experiment, the importance of the question and the quality of the methods, not the “interestingness” of the results. If you want a paper in Nature, it needs to be a great idea, not a lucky shot.

This would be a radical change from the current system. Too radical, almost certainly, to ever happen in one go. So here's another idea: a kind of stepping-stone on the way.

Already, scientists have to spell out their original rationale and original methods before they do any work – when they apply for funding from a grant awarding body. These grant applications are often very detailed, but at the moment, they’re private. And people don’t always stick to them.

Why not make the full publication of the grant application a condition of being awarded the money? This would be rather like preregistration of the Introduction and Methods, although less elegant, but it would do the job. And given that most grants consist of public cash, the public really have a right to know this. These applications are usually just PDF files. It would be trivial to put them online – after redacting personal information like applicant résumés, if desired.

  • petrossa

    The main problem imo is that too many scientists vie for too little money.

    This inexorably leads to goal-driven studies being favored. I don't see how any system is going to change that. It'll only get worse.

    It's all fine and dandy that people can get educated to high levels, but in opening up this system you are inevitably forced to lower the criteria for qualifying for a degree.

    Educational institutions need to have a good overall success rate.

    Nowadays the educational system is more than saturated, leading to more and more scientists with a good memory but hardly any talent coming onto the market.

    As a result, the real talent gets swamped by the sheer number of mediocre scientists.

    This isn't going to be fixed. Idiocracy (2006) is becoming reality as we speak.

    In another thread I presented a symbol of idiocracy: the female science writer who masturbated in an fMRI scanner and concluded, based on the increased blood flow, that doing so opened the mind (along with a whole lot more excruciatingly silly conclusions).

    Granted, this is a very extreme example, but the AGW hoax is a real-life scary example of how mediocre scientists take common sense hostage through sheer information overload.

    We have to learn to live with it.

  • Anonymous

    In science, unexpected results are often the rule rather than the exception. Does that mean that they are less interesting? I don't think so.

    Removing the possibility of publishing those unexpected results or at least forcing the authors to present them as secondary will not improve science.

    Sometimes, it is more the intelligent analysis of the results rather than the intelligent design of the study that makes the study interesting. You might also learn a new analysis technique after you performed an experiment and applied this analysis to your data. Why not? Does that make the analysis less interesting?

    It's a good idea to pre-publish methods but the value of the results should not be judged with respect to how planned they were.

  • Anonymous

    I think your 2nd proposal is more likely to fly. It may also have the additional benefit of reducing the number of rushed grant applications submitted based on half-baked ideas.

    However, most grant applications have strict word limits and have to be written for “a diverse audience of different scientific backgrounds”. Thus, the description of the methods tends to be VERY non-specific anyway. This would still give unscrupulous scientists plenty of wiggle room. Plus, no such system can prevent outright data fudging.

    The only effective protection is independent replication. We need to incentivize replication and accept that we have to be patient.

    I am intrigued by Ioannidis' (doi:10.1158/1055-9965.EPI-05-0921) suggestion that journal editors NOT publish novel findings in print (only on the web) until they have been independently replicated, as new findings are often wrong or grossly overestimate whatever “true” effect there may be. Again, this requires patience.

  • Neuroskeptic

    Anonymous #1 – I agree that unexpected results are very important. I think my proposal would mean more of them get out there. At the moment, if you find some unexpected results, you can just not publish them, or you can write the Introduction to make it look as if you did expect them.

    As for secondary methods and analyses, I'm not saying they're bad. They are fine, but they have to be flagged up as such. If you do five analyses in a row and find one positive result at the end, that's an interesting suggestive finding, and deserves to be published, but not in such a way that it looks like you did the study focussing on that one thing.

  • Neuroskeptic

    Anonymous #2: My feeling is that that would be unworkable. There would be endless debates over whether the replication was really an adequate replication or not. Given that everyone's reputation and career would be at stake, these could be very vicious, and you'd create perverse incentives to replicate or not replicate other people's work based on their relationship to you…

    Whereas simply changing the order in which publication happens would avoid all that.

    In other fields, you publish the proposal first and then you go away and get the data. In government, for example, you announce that you are going to do a review of some area of policy, then you do it. You don't do it in secret and then suddenly call a press conference saying that you've decided that a certain law is rubbish. OK, this isn't an ideal example because government reviews are subject to so many other pressures, but you see the point. Science is rather unique in its publication model. Even with books, you have to pitch your book to a literary agent & publisher before you write it. Science is just about the only place where you do everything in private and then just submit the final manuscript.

  • Andrew Oh-Willeke

    Seems to me that a less intrusive way to deal with the problem of underpublication of uninteresting results is a quasi-adversarial process.

    Negative results aren't published because they aren't interesting to read. A scientific journal that promised to publish results whether or not they found anything worthwhile would be boring as sin.

    But, while negative results are usually boring, they become interesting when they contradict a big published paper. The key then is not to publish every result, something that is clearly impracticable and will become more so, but to create strong incentives to publish older, unpublished research that contradicts a result once that result is published.

    If Nature routinely rewarded scholars who had unpublished results that contradicted a recent article with their own publication berths, academic ambition would root out negative findings that we care about quite effectively.

    The key is to change how editors at publications like Nature see their reputation for authoritativeness. There is a perception that they don't want to routinely publish papers that flat out contradict their prior publications, because it makes it look as if what gets published in the journal isn't trustworthy. To fix the sociological bias towards positive results in science, that attitude at the top of the leading journals needs to change.

  • Anonymous

    I have a feeling that NIH, at least, is moving in the direction you're suggesting. They're adamant about research being hypothesis driven, and making grant applications public seems a natural next step, since they are clearly funding a lot of junk science. I think such transparency would do wonders for science.

  • Anonymous

    I disagree with your suggestion that publishing grant applications will yield publications which include a more accurate sample of the science being performed, for a couple of reasons: 1) many grant applications are near-duplicates of applications to other funding bodies, with the “spin” adjusted to each organization's criteria; and 2) your suggestion would encourage the annual horror show of lobbyists, congressmen, and advocates of one stripe or another pulling out examples of obscure-sounding or incremental research plans and holding them up as examples of “wasteful spending on frivolous research”, simply because the advocate opposes research in a particular field, or the existence of the field itself.

    If you're serious about reducing the drive to publish only positive findings and to avoid needed replications of published studies, there are at least two revolutionary actions which could achieve those aims: 1) Force institutions to accept the burden of salaries for their researchers, rather than perpetuating a system where the PI pays salaries with one grant, then has to get another grant to pay for the work. 2) Enforce a meaningful open source publishing policy, with institutional (tenure) credit given to studies that are published in forums apart from the for-profit academic journals.

    If the work is funded publicly, then essentially paying journals for the right to publish it, and subsequently giving the journal the copyright, is the true definition of wasteful. I have yet to hear an argument from the editorial board of any journal that makes a strong, relevant, non-self-serving case for the continuation of their monopoly.

    If every scientist could devote 100% of external funds to actually performing experiments, rather than paying salaries that should be paid by the employing institution, and could ensure rapid, peer-reviewed publication of experimental findings without the months-long and expensive cycle of journal copy editing, formatting, and page charges, they would have the opportunity to conduct and publish studies that are necessary, but not necessarily 'sexy'. Replication can take months of time and effort, and done carefully it is just as difficult and demanding as performing new studies. If I know a new study or analysis yielding a positive finding will advance my career, while replication of someone else's Nature paper will result in a no-impact release in a backwater journal, it's a no-brainer which course of action I will pursue.

  • Pseudonymoniae

    It almost seems like you want to tack a grant-review process onto every paper that ever gets published. Not only would this be a massive undertaking on the part of journals, but it would also worsen the current situation, where many researchers already embellish their grant applications. I don't think we need journals publishing a bunch of fiction as well. Also, as with many grant review committees, journals won't want brilliant ideas that are unlikely to work, because then they will be forced to publish a lot of negative results. Instead, they will primarily accept a large number of very safe and relatively interesting findings, but very little groundbreaking research.

    I think the only legitimate solution is to produce strong incentives for the publication of null results. It's been pointed out that PloS One already does this–the question is whether this is a good enough incentive to drag all of the null results out of the woodwork… which I highly doubt.

  • Neuroskeptic

    Pseudonymoniae: On your first point, that it would be too much of an undertaking for journals, I don't think it would be any more difficult than post-research peer review as we have now.

    I'm not saying we double the amount of peer review: just that we move it to before the work's been done.

    This would also help to avoid “Do Yet Another Experiment” peer reviews, which can delay publication for years. By putting peer review up front, you'd at least know what was being asked of you before you started.

    On the second point, that journals wouldn't want research that might give a null result… that's possible, but I don't think many journals would pass up the chance to publish an excellent study for which even a negative finding would be important (e.g. the biggest-ever genome-wide association study of some disorder).

    It might mean that good journals would shy away from experiments that are real long shots which might be a complete bust. But we already have that problem, in the sense that no-one funds those anyway. So I don't think it would make the situation much worse.

  • Pharmacologica

    You are a scientist, right? Then how could you possibly think real experimental science could be “pre-registered”? It's like you've bought into the myth that the programmatic science process beloved of grant applications, journal formats, and ethics applications is how experimental scientists really work. Come on, you know it's all tinkering. Re-read Medawar if confused on this point.

  • Neuroskeptic

    I'd make an exception for work where true findings can be readily replicated. In other words, work where it's not so much your data, as your conclusions, that are important (because the data are so easily reproduced, once you know what to look for).

    Obviously you can't pre-register “looking down the microscope” or “having a think about it”.

    However, when you're doing anything where it's your data (or statistical analyses of data) that will end up being important, not just the ideas you generate, ideally that would be preregistered. Because as it stands you can generate data and analyze it all day until you find what you want.

  • Anonymous

    Well, there is another problem with making research grants public in the interests of pre-registration (and one that makes the funding/application process also broken in some ways).

    Grant applications these days typically consist of *finished* research with positive findings, dressed up as preliminary data — anything less is unlikely to get funded. The stuff promised is typically either already done or not actually meant as a research plan.

  • TheCellularScale

    This is an interesting idea, for sure, though its implementation could be pretty problematic. One question I have is about the preliminary data required in grants. Preliminary data is a HUGE part of the grant score, because they don't want to fund something that is not going to 'work'. So would these pre-registered papers contain any preliminary data? and if so, of what nature? I mean what if you want to use an old technique for a new purpose, shouldn't you have to show that the technique can be used in this new way? And then, what if you show some preliminary data in a grant or a pre-paper, but later cannot replicate the preliminary data. Is that in itself a result?

  • Anonymous

    Yes, what the last two commenters said.

    Anyway, I believe funded grants are “public” – you just need to submit a FOIA request to get them…

  • Neuroskeptic

    This is the Google age. “Public but really inconvenient to access” is the new “private”.

    The point about grant applications often being retrospective is a good one though.

    I'm currently working on a follow-up to this post which has some further ideas on this.

  • deevybee

    Just to say that, in contrast to most of your other commenters, I liked your proposals. This paper describes the problems, which affect science beyond psychology: Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology. Psychological Science, 22(11), 1359-1366. doi: 10.1177/0956797611417632
    Things are worse than they were 30 years ago because it has become much easier to run data analyses very rapidly, so numerous alternative analyses get run until something 'interesting' pops up.

  • John M. Nardo MD

    I think your idea is so good that I'm envious I didn't think of it myself. Great thinking…

  • Mickey Nardo

    Brilliant idea. I wish I'd thought of it!

  • LF Velez

    One interesting side effect of making grant applications public information will be getting to see the amount of overhead universities are charging. Another will be the arguments about whether it's appropriate for the grants offices at universities to be writing key parts of the text [it's the same situation as papers written by professional writers in other areas of scientific research — outsourcing the writing allows the scientists to stick to what they do best, but someone has to foot the bill for those services, and maybe we don't trust who usually provides that money?]

  • Anonymous

    The problem with these proposals is that they neglect the reality of how scientists (at least in the US) are evaluated. In the publish-or-perish culture, tenure committees would have no sympathy for a scientist who makes many “journal proposals” or grant proposals but does not have many publications. Such an open system would have to have the acceptance of an entire university/academic community, or else it would just result in many good scientists losing their jobs.

  • Neuroskeptic

    Anonymous: True, there are going to be institutional barriers to this. I have explored some ways of getting around them in this post.




About Neuroskeptic

Neuroskeptic is a British neuroscientist who takes a skeptical look at his own field, and beyond. His blog offers a look at the latest developments in neuroscience, psychiatry and psychology through a critical lens.
