For Preregistration in Fundamental Research

By Neuroskeptic | April 25, 2013 3:02 pm

Recently, cognitive science postdoc Sebastiaan Mathôt wrote two pieces that raise questions about the idea of reforming scientific communication to involve preregistration of experiments: The Pros and Cons of Preregistration in Fundamental Research and also The Black Swan.

Registration has long been a favorite topic of mine; it’s something I’ve been advocating since my very first post. Now it’s starting to become a reality, which I think is great. Yet many researchers are wary of the idea, and Mathôt makes some important points.

My answer in a nutshell is that preregistration does seem scary, in the context of science’s current culture – but that’s a problem with the current culture.

Mathôt’s core argument, as I understand it, is this (from the first article, emphasis mine):

My colleagues and I recently conducted an experiment in which we recorded eye movements of participants while they viewed photos of natural scenes. On half of the trials we manipulated the scene based on where participants were looking. The other half of the trials served as a control condition…

[Our manipulation] turned out not to have the predicted effect. According to the rules of preregistration, this means that our study was worthless: We made a prediction, it didn’t come out, and any attempt to use this dataset for another purpose borders on scientific fraud.

However, we stumbled across an unexpected, but interesting and statistically highly reliable phenomenon in the control trials. So what now? Are we not allowed to look at this effect, because we did not predict it in advance? Should we run a new study, in which we predict what we have already found, and use only the data from the new experiment?

Your intuition, no doubt, screams ‘no’, or at least mine does. However, the logic behind pre-registration says ‘yes’. The essential conflict here is that pre-registration discourages exploratory research, and assumes that a finding is not a real finding unless it was predicted – a questionable assumption at best.

In this example, the authors have made two discoveries: 1) the originally predicted phenomenon didn’t happen (‘negative’); and 2) a different, unpredicted phenomenon was observed (‘positive’).

Both of these are interesting findings, and both ought to be published. Number 1) is interesting, because the authors surely had good reasons to predict that the effect would happen. So the fact that it didn’t is a discovery; it tells us about the world, if only by narrowing down the possibilities. It contributes to science. Under the current publishing system, however, this interesting finding might never be made public – and even worse, might be regarded as deserving to remain unpublished.

Then there’s 2), the incidental positive observation. This should also be made public – and there’d be no barriers to doing so under a system of preregistration, albeit ‘only’ if it’s clearly marked as an incidental observation. Being incidental is not a bad thing – but you do need to be honest about it.

If that sounds bad to scientists today, it’s because we’ve been disguising our incidental findings for so long. We write papers to make ‘positive’ results seem predicted even when they weren’t – just as we make ‘negative’ findings disappear.

By making such manipulation impossible, preregistration would liberate both the unexpected finding, and the negative finding. There would be a lot more of both kinds of result out there, if nothing else; I suspect their status would rise accordingly.

I’ll return to this sentence of Mathôt’s, which I think is a very clear description of a common worry: “According to the rules of preregistration, [not finding the predicted effect] means that our study was worthless.”

The worry here is that a good experiment would be ‘wasted’ if the primary prediction turns out to be false. But the truth is that it’s the current system that measures a study’s worth by its p-values.

Preregistration is the dream that one day, studies will be judged, not by the significance of their Results, but by the content of their Methods.

P.S. Mathôt is also the creator of OpenSesame, a free psychological experiment development toolkit. I haven’t used it yet, but the various commercial ones certainly leave a lot to be desired…

  • http://twitter.com/jdottan Joseph Tan

    Then I guess the question is: If you change your behavior before the incentive structure changes (which is reasonable and admirable), will you suffer professionally? Your integrity would definitely be judged highly (as it should be), and I think if everyone is waiting for the incentives to change first, change will be slow – but I still wonder how exactly we get to the dream you articulate.

    I suppose that if it is easy to preregister (hello Open Science Framework), behavior change will be easier.

    • http://blogs.discovermagazine.com/neuroskeptic/ Neuroskeptic

      That’s a great question… or *the* question. I see it happening as a gradual building of momentum, with more and more initiatives and journals and individuals implementing registration, for a few years perhaps, until eventually we’ll reach an inflection point and it’ll become “the norm”. We’re at an advanced stage of this process for open access; registration is several years behind that, but I think the evolution will be similar.

      • http://blogs.discovermagazine.com/neuroskeptic/ Neuroskeptic

        P.S. and what this means for individuals is – don’t panic. There will be early adopters, but most people would rather not be one, and that’s fine – change will come organically rather than being imposed top-down (although I’ve previously said that top-down would be justifiable, I don’t think it’ll be necessary).

    • http://twitter.com/ceptional Alex Holcombe

      As you say, one’s integrity will be perceived as higher, and I think the benefits of that outweigh any potential for suffering, as I argue here: http://alexholcombe.wordpress.com/2012/08/29/protect-yourself-during-the-replicability-crisis-of-science/

  • http://twitter.com/ceptional Alex Holcombe

    We also have a preregistration initiative, specifically for replication studies, over at Perspectives on Psychological Science. Briefly: a lab proposes a study designed to replicate a previously published study, specifying in advance the exact methods and analysis plan; we send it to reviewers, including an author of the to-be-replicated publication, to refine the protocol; and we then invite multiple labs to participate in the replication effort. We (Dan Simons and I) have recently accepted the first replication plan, which will be posted soon on the webpage as an invitation for labs to join in. We think this system reduces much of the uncertainty and worry about bias in publishing that currently plagues the question of whether particular studies are replicable.
    http://www.psychologicalscience.org/index.php/replication
    http://alexholcombe.wordpress.com/2013/03/03/registered-replication-reports-are-open-for-submissions/

  • JonFrum

    ” the authors surely had good reasons to predict that the effect would happen. So the fact that it didn’t is a discovery; it tells us about the world, if only by narrowing down the possibilities. It contributes to science.”

    So is it going to become standard practice to submit (and have published) ten papers describing failed hypotheses – because each failure narrows the possibilities? Isn’t the object to rule out possibilities – all of them – and then deliver the goods? I remember seminars in graduate school in which the molecular geneticists shook their heads at this sort of thinking when they heard it from evolutionary biologists.

    • http://blogs.discovermagazine.com/neuroskeptic/ Neuroskeptic

      Sure, ultimately we want to work out what is going on. But important steps on the way to that include ruling out other possibilities, and if you rule one out, you should publish that (and get credit for it).

      I shake my head at those who’d shake their head at it.

      Of course, not all negative results are interesting. Sometimes the hypothesis is just stupid and the results are a foregone conclusion. But this is pretty rare, and under (some kinds of) preregistration those would hopefully get filtered out at the registration stage (if the registered protocol was peer-reviewed in advance).

  • http://twitter.com/deevybee Dorothy Bishop

    Yesterday Chris Chambers gave a great talk in Oxford, where he described the new Registered Reports initiative at Cortex http://www.ncbi.nlm.nih.gov/pubmed/23347556. Many people had the same reaction as Mathôt: extreme unease at the idea that this approach would devalue interesting incidental observations.

    Chris was reassuring on the point that an unexpected finding could certainly be reported, but it would be clearly flagged up as exploratory.

    I was surprised, though, by the extent to which people don’t seem to get the assumptions of statistical testing – and the fact that p-values are no longer meaningful if stats are applied after data-snooping to find a ‘significant’ result. There was a lot of talk about the waste involved if you do a study and don’t report unexpected findings – but a relative lack of concern about the waste that occurs when spurious findings are treated as meaningful. Maybe it would help if all scientists were given training with randomly generated data-sets, to make them more aware of the difference between a priori and post hoc analyses.

    I started to get seriously worried about this a few years ago when I realised that electrophysiological studies in neuropsychology were mostly generating meaningless results, because people could select from a large number of electrodes, time windows, analysis methods so that they were bound to find something ‘significant’ if they looked hard enough. Yes, there are reliable phenomena in ERPs, but I suspect that the majority of findings involving comparisons between clinical groups aren’t replicable. And when I reviewed one part of the literature, it was striking that you could not do a meta-analysis, because every study used different analytic methods (Bishop, D. V. M. 2007. Using mismatch negativity to study central auditory processing in developmental language and literacy impairments: where are we, and where should we be going? Psychological Bulletin, 133, 651-672. doi: 10.1037/0033-2909.133.4.651).
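    To see how quickly this goes wrong, here is a toy simulation (purely illustrative – the numbers of electrodes, time windows and subjects are invented) in which both groups are pure noise, yet snooping across electrode × time-window comparisons almost always turns up something ‘significant’:

    ```python
    # Toy demonstration: with pure-noise data and free choice over electrodes
    # and time windows, data-snooping almost always finds a "significant"
    # group difference somewhere. All numbers here are invented for illustration.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_sims, n_per_group = 1000, 20
    n_electrodes, n_windows = 32, 10   # 320 implicit comparisons per "study"

    false_positives = 0
    for _ in range(n_sims):
        # The null is true everywhere: both "groups" come from the same distribution.
        a = rng.normal(size=(n_per_group, n_electrodes, n_windows))
        b = rng.normal(size=(n_per_group, n_electrodes, n_windows))
        p = stats.ttest_ind(a, b, axis=0).pvalue   # one t-test per electrode x window
        if p.min() < 0.05:   # the snooper reports whichever comparison "worked"
            false_positives += 1

    print(f"'Significant' finding in {false_positives / n_sims:.0%} of null studies")
    # With 320 comparisons on offer, this is essentially 100%, versus the nominal 5%.
    ```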

    So roll on pre-registration. Can’t come soon enough for me.

    • http://blogs.discovermagazine.com/neuroskeptic/ Neuroskeptic

      Thanks for the comment – I wanted to come along to Chris’s talk but had other plans. From what you say about the audience reaction, concern over unexpected findings is the #1 worry and that’s quite understandable. I think we need to better explain how, with registration, incidental findings would be published and indeed routinely so – that’s been a failure of communication hitherto.

      • http://www.facebook.com/jona.sassenhagen Jona Sassenhagen

        Especially since it’s a feature, not a bug.

    • Ruben

      I thought about giving students randomly generated datasets with meaningful variable names and seeing if they figure it out.

      Sort of like Anscombe’s quartet but for confirmation bias?

      But it’s probably too mean, right?
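      A minimal sketch of what such a teaching dataset could look like – every variable name is invented, and every column is independent noise:

      ```python
      # Pure-noise data dressed up with plausible psychological variable names
      # (all invented). Any 'finding' in it is confirmation bias at work.
      import numpy as np
      import pandas as pd

      rng = np.random.default_rng(42)
      cols = ["extraversion", "working_memory", "screen_time_hrs",
              "caffeine_mg", "sleep_quality", "exam_score"]
      df = pd.DataFrame(rng.normal(size=(120, len(cols))), columns=cols)
      df.to_csv("seminar_dataset.csv", index=False)

      # Among the 15 pairwise correlations, students hunting for a story should
      # expect roughly one spurious 'significant' correlation by chance alone.
      print(df.corr().round(2))
      ```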

  • anon

    Couldn’t people easily get around pre-registration by doing the study first and then pre-registering and then publishing their positive results? This seems like a major flaw.

    • http://www.facebook.com/jona.sassenhagen Jona Sassenhagen

      That would however be clear outright fraud. People are already committing clear outright fraud all the time; we can hardly stop that. But I am sure the majority of researchers are good people and would produce honest research.
      You would also have to fake time-stamps and lab protocols.

    • http://www.cogsci.nl/smathot Sebastiaan Mathôt

      In Chris Chambers’ proposal you have to explicitly prove that the data was collected after the proposal. But as Jona says, everything is fakeable, of course.

    • http://blogs.discovermagazine.com/neuroskeptic/ Neuroskeptic

      As Jona says, you could do it, but it would be fraud. I see that as a strength of registration – it means that in order to mislead, you have to outright lie, whereas currently you can pull all kinds of tricks while never doing anything “wrong”.

      I don’t think many people would choose to be fraudulent in this way – especially since it would be quite easy to catch by checking registration dates against e.g. informed consent records and so forth.

  • http://www.cogsci.nl/smathot Sebastiaan Mathôt

    Thanks for the post! I appreciate that you’re not dismissing my points, even though we disagree (at least on the surface).

    First, just to clarify the point that I didn’t want to make: I’m not opposed to pre-registration per se – not even in a rigid form, and certainly not in the very liberal form that you’re describing here (i.e. with room for exploration, as long as it’s labeled as such). I think pre-registration is a useful tool, especially when the reliability of an effect is disputed. It also facilitates publication of null results, even though there are other ways to achieve that as well, of course.

    However, my point is that pre-registration (in a rigid form) is just that: a tool, and not the one true path. I think the ideal of specifying your entire analysis trajectory in advance is hard to achieve. Again, it’s not impossible, and when you (as Wagenmakers puts it) want to ‘convince a skeptical audience of a controversial claim’, you can do it, thus leveraging the full power of the p-value. But I think it’s also a valid approach to play with your data and search for interesting patterns, especially when you’re dealing with large sets of data. This is disastrous for the meaning of the p-value, but that in itself doesn’t make it bad science. It simply means that you should analyze your results differently. It’s also true that evidence is more ambiguous in exploratory than in confirmatory studies. But again, that doesn’t necessarily make it bad science, especially when you’re not dealing with results that are on the fringe of significance. After all, an ambiguous p = .0000001 is more convincing than an unambiguous p = .04, because practically speaking (and with the exception of things like fMRI analyses) it’s difficult to inflate p-values over many orders of magnitude. (But, as I point out in The Black Swan, maybe it’s better to omit p-values in ambiguous circumstances altogether.)

    But to be fair, your post, as I understand it, describes a form of pre-registration that is very liberal, essentially a form of enforced transparency. It’s difficult to disagree with that, although I’m not looking forward to the added bureaucracy that it implies. (But me being lazy is perhaps not a good argument.) But the question is whether current pre-registration protocols are really that liberal. Do you think so? It strikes me that they are not, and that they largely discourage exploration and require a strict adherence to pre-specified analyses. Again: Rigid pre-registration has merit, but it’s not the one true path.

    • Ruben

      Following the logic of your arguments (except the part where you say pre-reg is a tool not a grail), one should ALWAYS pre-register, even if it is only to say that we didn’t have any plans for the data except how large the sample should be.

      This won’t solve all problems, but many. So get on board! No pre-registration protocol under discussion now will or can forbid you from conducting unregistered analyses, but if you deviate and *informed* reviewers (now they’re kept in the dark) think you deviated too much, then you should not get the added credibility of your study being called pre-registered. Simple as that.

      Surely you’re not in favour of continuing to lie about having had hypotheses about exploratory effects, as e.g. many embodiment researchers currently do.

      Obviously, this will not mean that all exploratory science will grind to a stop. Think about fields other than your own: Do you think many findings in developmental psychology or personality development (large data, lots of ways to look at it) were predicted?
      Yet they all say they were.

      About you being lazy: if one has to make one’s best effort at an accurate prediction, maybe more researchers will actually bother to read the literature before concocting the experiment, and will find that it is to their benefit. I certainly know people who first read highly important previous research when writing up the results. This may not be so for you, but it happens when incentives encourage such behavior.

      • http://www.cogsci.nl/smathot Sebastiaan Mathôt

        > No pre-registration protocol under discussion now will or can forbid you from conducting unregistered analyses, but if you deviate and *informed* reviewers (now they’re kept in the dark) think you deviated too much, then you should not get the added credibility of your study being called pre-registered. Simple as that.

        Yes, that is reasonable. That would mean though, as I also argued, that pre-registration would just be one acceptable format of many. An appealing format, of course, that gives your findings, as you say, extra credibility.

        > I certainly know people who first read highly important previous research when writing up the results. This may not be so for you, but it happens when incentives encourage such behavior.

        That’s a bit odd, and not what I meant at all. I meant that pre-registration presumably requires you to fill in and submit some document, which takes time (and I’m lazy).

    • http://www.facebook.com/jona.sassenhagen Jona Sassenhagen

      None of the actually proposed/implemented forms of pre-registration I’m aware of (e.g. Cortex Registered Report, pre-registration of clinical trials) is like the form you’re arguing against, and all are of the form you seem sympathetic towards. You must do what you pre-registered, and label everything else explorative. Is there any register that actually disallows you from reporting unexpected/explorative measures? In the implementations I know, you’re simply not allowed to call your (welcome) exploratory/unexpected stuff confirmatory/expected.

    • Eric-Jan Wagenmakers

      Hi Sebastiaan,

      I think that enforcing transparency (and preventing researchers from fooling themselves) is exactly what preregistration is all about. In the article that my co-authors and I recently wrote on this topic (for the 2012 open access special issue in Perspectives on Psychological Science, http://pps.sagepub.com/content/7/6.toc), we stress that exploration is of course still possible in confirmatory designs — you just need to be honest about it (and realize your statistical test is no longer reliable, as you also pointed out): “Exploration is an essential component of science and is key to new discoveries and scientific progress; without exploratory studies, the scientific landscape is sterile and uninspiring. However, we do believe that it is important to separate exploratory from confirmatory work, and we do not believe that researchers can be trusted to observe this distinction if they are not forced to.”

      So the only thing that changes with preregistration is that researchers are forced to be honest about what is confirmatory and what is exploratory. And who can argue against more honesty? To be clear, I am not saying that researchers are deliberately dishonest; there are powerful human biases at work that even the most honest researcher will fall prey to (in the absence of preregistration). So I’d argue that preregistration inoculates the researcher against the powerful viruses of wishful thinking and hindsight bias.

      Cheers,
      E.J. Wagenmakers

      • http://www.cogsci.nl/smathot Sebastiaan Mathôt

        Ok, so the majority opinion appears to be that I’m afraid of the bogeyman. This may be true in part, and I have conceded this point before. But I’m still not fully convinced.

        To step away from theory and get to the concrete, let’s consider the pre-registration protocols by Chris Chambers and Hans IJzerman (or at least IJzerman is the one that pointed it out to me; he may not be the only author). These protocols state that authors “won’t be able to base the conclusions of their study on the outcome of unplanned analyses” (Chambers) and that editors “must ensure that authors do not base their conclusions entirely on the outcome of significant Exploratory Analyses” (IJzerman).

        You can argue that these protocols encourage the use of exploration as supportive evidence, and they do. But this is encouragement in only a very limited sense. As I wonder on my blog, how would you fit a corpus-based analysis into this framework? These are inherently exploratory (although with some creativity you can make a case for ‘confirmatory analyses’), because you are analyzing a pre-existing dataset.

        - Would you pre-register your intent to go exploring?
        - Or would you be ‘encouraged’ to use this exploration only as a starting point for a subsequent confirmatory experiment? This would make it considerably less attractive to engage in large data collection projects.
        - Or perhaps, as Wagenmakers and colleagues suggest in the paper linked to above, you should split the dataset into an exploratory and a confirmatory part? I like this idea (I have a dataset right now that I could try this on), although it strikes me that it still leaves you with considerable degrees of freedom. And how would this relate to pre-registration?
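        As a concrete sketch of that split (the file and column names here are entirely hypothetical – it’s one way it could be done, not a prescribed procedure):

        ```python
        # Split a pre-existing dataset: explore freely on one half, then run a
        # single pre-specified test on the untouched half. Dataset and column
        # names are hypothetical.
        import pandas as pd
        from scipy import stats

        df = pd.read_csv("corpus.csv")
        explore = df.sample(frac=0.5, random_state=1)
        confirm = df.drop(explore.index)   # held out, looked at exactly once

        # Exploratory half: anything goes, but p-values here are decoration.
        # Suppose snooping suggests that 'fixation_ms' differs between conditions.

        # Confirmatory half: one test, written down before this half is touched.
        a = confirm.loc[confirm["condition"] == "a", "fixation_ms"]
        b = confirm.loc[confirm["condition"] == "b", "fixation_ms"]
        t, p = stats.ttest_ind(a, b)
        print(f"confirmatory test: t = {t:.2f}, p = {p:.4f}")   # this p means what it says
        ```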

        The general point being: If pre-registration is to be widely adopted at some point, it should be in a form that accommodates and facilitates science in all its varieties. After all, the overarching goal is not to make it difficult for cheats to cheat (an argument from morality), but to maximize the efficiency of the scientific process at large, right?

        • http://blogs.discovermagazine.com/neuroskeptic/ Neuroskeptic

          As I see it, all registration requires is honesty about your intentions, from the start of the process.

          So yes, if your intent is “I’m going to explore X”, you should be able to register that intent and then go explore.

          I’d argue that you should register in many cases because science benefits from people publishing negative explorations as well as positive ones. For one thing, if you spend time exploring a dataset and find nothing, you’ll save other people time if you announce that.

          I do see a problem with “un-registerable” studies, but for me it’s not a problem about the type of studies but rather the ‘size’. Clearly we can’t register everything we do as scientists. And student projects etc. are another interesting grey area.

    • http://blogs.discovermagazine.com/neuroskeptic/ Neuroskeptic

      I wouldn’t say my approach is liberal really. As others have said, registration pretty much just is enforced transparency. And I’m in favour of enforcing transparency across (substantially) all of science – no more than that, but also, no exceptions…

  • Pingback: Preregistration – it will change, but your unexpected findings will not be banished | Åse Fixes Science

  • Chris Chambers

    Hi everyone, great discussion! As Dorothy says, I gave a talk yesterday on the Cortex initiative at Experimental Psychology in Oxford. The slides are linked below – and thanks to everyone who attended and took part in the discussion. The debate afterwards was stimulating and constructive.

    Slides:
    https://dl.dropboxusercontent.com/u/15691907/chambers_oxford_25Apr2013.pdf

    Also, the editorial piece introducing the initiative is available free here: http://www.sciencedirect.com/science/article/pii/S0010945212003735

    And I provide further details here:
    http://neurochambers.blogspot.co.uk/2013/04/scientific-publishing-as-it-was-meant_10.html

  • http://www.facebook.com/jona.sassenhagen Jona Sassenhagen

    As an aside, I would like to note an important aspect of Sebastiaan’s OpenSesame that touches upon pre-registration: because he has decided to open-source it, one can include the experimental script (and a link to cogsci.nl) in the pre-registration, and every reviewer will be able to test the experiment for themselves.

    • http://www.cogsci.nl/smathot Sebastiaan Mathôt

      That’s a good point. On a related note: One of the things that I want to implement for the next major release is an extension framework for the GUI. One such extension might be an upload mechanism that allows you to post your experiment somewhere (publicly or not), thus getting an objective timestamp on it.
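      One simple way such a timestamp could work – a guess at a mechanism, not Mathôt’s actual design – is to hash the experiment file and post the digest publicly, so the script can later be verified against it:

      ```python
      # Sketch: fingerprint an experiment file for a public, verifiable timestamp.
      # The filename is hypothetical.
      import hashlib
      from datetime import datetime, timezone

      def fingerprint(path: str) -> str:
          """Return a content hash plus an upload timestamp for posting publicly."""
          with open(path, "rb") as f:
              digest = hashlib.sha256(f.read()).hexdigest()
          stamp = datetime.now(timezone.utc).isoformat()
          return f"{digest}  uploaded {stamp}"

      print(fingerprint("my_experiment.opensesame"))
      ```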

      • http://blogs.discovermagazine.com/neuroskeptic/ Neuroskeptic

        Awesome! Maybe one day soon you’ll not just be a fan of preregistration but one of its architects ;)

  • Pingback: Neuroscience Methodology and Deconstruction | notessimple

  • http://twitter.com/davenuss79 Dave Nussbaum

    I’m in favor of pre-registration and I’m fully behind efforts like Chris Chambers’ and OSF’s, and Alex Holcombe’s described below. But I do think that there are still pitfalls that we should remain vigilant to:

    1. Being right for the wrong reasons.

    It’s always possible to get a significant result that has nothing to do with your proposed mechanism. What we should be vigilant to is pre-registration giving such results an added weight of authority. If a researcher were to find an unexpected result, then concoct a theoretically plausible explanation, then replicate the same result, it would *appear* as though the result confirmed the researcher’s hypothesis, but in fact all we know is that it’s a replicable effect. We know nothing about the reasons why we get the result until we test that theory directly (e.g., test for predicted mediators, moderators, etc.). Again, this isn’t a knock against pre-registration, it’s just a reminder that just because something is pre-registered doesn’t make it true. The theory can still be effectively post-hoc.

    2. Being wrong for the wrong reasons.

    This is just to say that sometimes we fail to find a significant result, not because the hypothesis was incorrect, or even because of statistical “bad luck”, but because the study was designed and/or executed poorly. Pre-registration could, theoretically, solve the design issue, if designs were peer reviewed — although peer review is occasionally imperfect ;) On the execution side, pre-registration doesn’t really help. I know from personal experience that it’s possible to mess up the execution of a study in various ways. There’s nothing inherently wrong with that, but it puts us in a somewhat tricky position as to what to do with studies that don’t work. On one level, we should publish everything, and allow authors or reviewers to note instances in which they think the study failed due to poor execution. But the fact is that we often aren’t sure. So we could get a situation in which there were many published, pre-registered failed experiments, some proportion of which were false negatives. Again, this is not an argument against pre-registration as much as a warning that pre-registration will lead far more failed studies to be published than ever before, and that we should think carefully about what to think of negative results, since we’ve never really had to pay much attention to them before.

    To reiterate, I don’t think either of these possibilities should cause us not to pre-register research, but we should be aware that pre-registered research brings a new set of issues that we should not ignore.

    • http://blogs.discovermagazine.com/neuroskeptic/ Neuroskeptic

      Both good points. #1 I think is a fundamental problem with explanation, not really specific to registration or even to science.

      #2 is a big concern. I’m working on a post about just this issue but in a nutshell I think the only rigorous way to operationalize “messing up” is to specify, in advance, criteria that the data ought to meet if they were collected correctly. These criteria could be anything, except that they can’t refer to the results being ‘positive’ or ‘negative’. For example if you’re testing the effect of a vitamin pill on IQ scores, you could say (in advance) that baseline IQ scores should be 80-120, and that blood vitamin levels should be higher in the vitamin vs. the placebo group. But clearly you can’t specify, in advance, that vitamins must increase IQ or the study is rubbish.
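      Spelled out as hypothetical preregistered checks (the column names are invented for illustration), that might look like:

      ```python
      # The vitamin/IQ example as preregistered, outcome-neutral data checks.
      # Column names are hypothetical. Note that nothing below refers to
      # whether the result came out 'positive' or 'negative'.
      import pandas as pd

      def passes_outcome_neutral_checks(df: pd.DataFrame) -> bool:
          # Check 1: baseline IQ within the pre-specified 80-120 range.
          baseline_ok = df["baseline_iq"].between(80, 120).all()
          # Check 2 (manipulation check): blood vitamin levels higher in the
          # vitamin group than placebo, or the pill wasn't delivered as intended.
          vit = df.loc[df["group"] == "vitamin", "blood_vitamin"].mean()
          pla = df.loc[df["group"] == "placebo", "blood_vitamin"].mean()
          return bool(baseline_ok and vit > pla)

      # Deliberately absent: any check on whether IQ actually improved.
      ```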

      In retrospect, of course, judging an experiment in the light of its conclusions is all too common…

      • http://www.facebook.com/jona.sassenhagen Jona Sassenhagen

        Possibly the aspect of pre-registration that is most attractive to me is what it could do to results. If we can get rid of p-hacking, the results we will have will be more solid – more, if I may say so, real. So the conclusions may still show the same disconnect from the results as they do today; but at least the results they’re based on will not be at best spurious, at worst lies.

        I also don’t fear too much that pre-registration will lead to overconfidence in bad studies that still made it through the process. I don’t expect the people who think the P600 is a high-level interpretive linguistic component will read about my pre-registered experiment(s) and suddenly change their minds. No; more likely, they’ll still interpret the results differently. But at least these results, which we can then interpret differently, were derived in a confirmatory experiment. And I hope the data they’ll produce in response will follow the same standard, so while not much might change regarding how we interpret data (for that, we’ll still be depending on our critical thinking), the data itself will be of higher quality.

    • Chris Chambers

      Hi Dave,
      Thanks for the comment. I don’t think either of your (very good) points is actually a specific concern raised by preregistration – instead, they strike me as general concerns about good scientific practice. Preregistration was never intended to be a panacea for insufficient experimental control (point 1) or poor experimental execution (point 2).

      Point 1 is an issue of experimental design. In the scenario you propose, the key is to design sufficiently well controlled follow-up experiments that seek to identify the precise basis of the replicated effect. To me this seems entirely independent of whether such follow-up experiments are preregistered or not. What I would say, though, is that under the Cortex model, authors can submit Incremented Registrations that allow them to sequentially add experiments in this manner (see the author guidelines, linked below).

      Point 2 reflects the importance of building sufficient outcome-neutral criteria into any experiment to ensure that the analyses are capable of testing the stated hypotheses. These might be manipulation checks, positive controls, or any other non-dogmatic reality checks that provide reassurance that a method or analysis was applied correctly. To avoid circularity these reality checks must of course be orthogonal to the hypotheses under investigation (as Neuroskeptic points out). We’ve been keenly aware of how important this issue is, which is why we have built in two specific reviewing criteria for the Cortex Registered Reports initiative (slides here: https://dl.dropboxusercontent.com/u/15691907/chambers_oxford_25Apr2013.pdf — and see author guidelines here: https://dl.dropbox.com/u/15691907/Draft_guidelines_RR.pdf)

      First, at Stage 1 (preregistration), reviewers are asked to assess:
      “Whether the authors have considered sufficient outcome-neutral conditions for ensuring that the results obtained are able to test the stated hypotheses (e.g. absence of floor or ceiling effects; positive controls).”

      Then, at Stage 2 (full manuscript, post data collection), reviewers are specifically asked to assess:
      “Whether the data are able to test the authors’ proposed hypotheses by passing the approved outcome-neutral criteria.”

      Failure to meet either criterion would likely result in a manuscript being rejected.

  • Faye18

    If pre-registration is going to be soft enough to allow for exploratory analysis and other flexibility, it seems like all that is really at stake is getting the “pre-registered report” label on your paper. How is this a significantly different outcome than just requiring replication for publication? Seems like less hassle, to me at least, to just do that.

    • http://blogs.discovermagazine.com/neuroskeptic/ Neuroskeptic

      I don’t think we’re on the same wavelength here. Allowing exploratory analysis is not ‘soft’, or hard, or anything else – the point is that registration allows readers to check whether the statistics reported are valid as opposed to being the result of p-value fishing, by checking against the original registration.

      Of course people could still do exploratory analysis – you can’t physically stop them, and we wouldn’t want to. It would be clearly labelled as exploratory (i.e. not matching the original intended analysis) and the statistics would be viewed in that light.

      Whether the exploratory stuff would get published in the same paper as the registered stuff, or whether it would be a new (non-registered) paper, that’s a matter for the authors and the journal.

      • Faye18

        So this is simply like having an experiment proposal section and allowing the reader to compare if they want to?

        I would write a long post but Sebastiaan seems to be articulating my thoughts much better than I am.

  • Pingback: Preregistration …Problem? : Neuroskeptic

  • Pingback: How the Scientific Sausage Gets Made: Preregistration Escrow for Basic Science? | Nucleus Ambiguous

  • Pingback: Preregistration, a Boring Ass Word for a very Important Proposal | Nucleus Ambiguous

  • Pingback: A very quick post on preregistration… | saraheknowles

  • louise nelson

    I was recently diagnosed as having no dopamine and very little serotonin. Does anyone know of research being done?

  • Pingback: Fixing Science's Chinese Wall - Neuroskeptic | DiscoverMagazine.com

  • Pingback: From the Neuroskeptic blog | The Fibromyalgia Perplex

  • Pingback: Psychology’s ‘registration revolution’ | Science News

  • Pingback: Preregistration for All Medical Animal Research - Neuroskeptic | DiscoverMagazine.com

  • Pingback: Preregistration for All Medical Animal Research | Fresh News Today

  • Pingback: Preregistration for All Medical Animal Research | World News

  • Pingback: Failed Replications: A Reality Check for Neuroscience? - Neuroskeptic | DiscoverMagazine.com
