Registration: Not Just For Clinical Trials

By Neuroskeptic | November 3, 2008 12:50 am

In a previous post, I said that I’d write about how to improve the quality of scientific research by ending the scramble for “positive results” at the expense of accuracy. So here we go. This is a long post, so if you’d prefer the short version, the answer is that we ought to get scientists in many fields to pre-register their research – to go on record and declare what they are looking for before they start looking for anything.

This is not my idea. Clinical trial registration is finally becoming a reality. Several organizations now offer registration services – such as Current Controlled Trials. Their site is well worth a click, if only to see the future of medical science unfolding before your eyes in the form of a list of recently registered protocols. Each of these protocols, remember, will eventually become a published scientific paper. If it doesn’t, everyone will know that either the trial was never finished, or worse, it was finished and the results were never published. Without registration, a trial could be run and never published without anyone knowing what had happened – making it very easy for “inconvenient” data to never see the light of day. This is publication bias. We know it happens. Trial registration makes it all but impossible. It’s important.

In fact, if someone were designing the system of clinical trials from scratch, they would, almost certainly, make registration an integral step right from the start. Unfortunately, no-one intelligently designed clinical trials. They evolved, and they’re still evolving. We’re not there yet. Trial registration is still a “good idea” rather than a routine part of clinical research, and while many first-class medical journals now require pre-registration and refuse to publish unregistered trials, plenty of other respectable publications have yet to catch up.

What I want to point out is that it’s not just clinical trials which would benefit from registration. Registration is a way to defeat publication bias, wherever it occurs, and any field in which there are “negative results” is vulnerable to the risk that they won’t be reported. In some parts of science there are no negative results – in much of physics, chemistry, and molecular biology, you either get a result, or you’ve failed. If you try to work out the structure of a protein, say, then you’ll either come up with a structure, or give up. Of course, you might come out with the wrong structure if you mess up, but you could never “find nothing”. All proteins have a structure, so there must be one to find.

But in many other areas of research there is often genuinely nothing to find. A gene might not be linked to any diseases. A treatment might have no effect. A pollutant might not cause any harm. Basically, if you’re looking for a correlation between two things, or an effect of one thing upon another, you might get a negative result. Just off the top of my head, this covers almost all genetic association and linkage studies, almost all neuroimaging, most experimental psychology, much of climate science, epidemiology, sociology, criminology, and probably others I don’t know about. Oh, and clinical trials, but we already knew that. People don’t tend to publish negative results, for various reasons. Wherever this is a problem, trial registration would be useful.

Publication bias is known to be a problem in behavioural genetics (finding genes associated with psychological traits). For example, Munafò et al. (2007) found pretty strong evidence of publication bias in research on whether a certain allele (DRD2 Taq1A) predisposes to alcoholism. They concluded by saying that

Publication of nonsignificant results in the psychiatric genetics literature is important to protect against the existence of a biased corpus of data in the public domain.

Which is true, but saying it won’t change anything, because everyone already knew this. No-one likes publication bias, but it happens anyway – so we need a system to prevent it. Curiously, however, registration is rarely mentioned as an option. Salanti et al. (2005) wrote at length about the pitfalls of genetic association studies, but did not mention it. Colhoun et al. (2003), in a widely cited paper in the Lancet, explained how publication bias was a major problem but then flat-out dismissed registration, saying that

an effective mechanism for establishment of prospective registers of proposed analyses is not feasible.

They didn’t say why, and if registration works for clinical trials, I can see very little reason why it shouldn’t work for other research. Indeed, another paper in the same journal raised the idea of “prestudy registration of intent”. Clearly the idea deserves serious thought.

Registration would also help combat “outcome reporting bias”, or as it’s known in the trade, data dredging. Any set of results can be looked at in a number of ways, and some of these ways will lead to different conclusions from others. Let’s say that you want to find out whether a certain gene is associated with obesity. You might start by taking a thousand men and seeing whether the gene correlates with body weight. Let’s say it doesn’t, which is really annoying, because you were hoping that you could spend the next five years getting paid to find out more about this gene. Well, you still could! You could check whether the gene is associated with Body Mass Index (weight relative to height). If that doesn’t work, try percentage of body fat. Still nothing? Try eating habits. Eureka! Just by chance, you’ve found a correlation. Now you report that, and don’t mention all the other things you tried first. You get a paper, “Gene XYZ123 influences eating behaviour in males”, and a new grant to follow up on it. Sorted. Lynne McTaggart would be proud.

This kind of thing happens all the time, although that’s an extreme example. The motives are not always selfish – most scientists genuinely want to find positive results about their “pet” genes, or drugs, or whatever. It is all too easy to dredge data without being aware of it. Registration would put an end to most of this nonsense, because when you register your research – before the results are in – you would have to publicly outline what statistical tests you are planning to do. Essentially, you would need to write the Methods section of your paper before you collected any results.
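As a sketch of what “writing the Methods section first” might look like in machine-readable form, here is a hypothetical registration record. Every field name and value is invented for illustration (the gene name XYZ123 comes from the dredging example above); real registries each have their own formats.

```python
from datetime import date

# A hypothetical pre-registration record: everything here is fixed
# before any data are collected, so post-hoc switches are visible.
registration = {
    "registered_on": date(2008, 11, 3).isoformat(),
    "hypothesis": "Gene XYZ123 is associated with body weight in males",
    "sample": {"n": 1000, "population": "adult men"},
    "primary_outcome": "body weight",
    "planned_tests": [
        {
            "test": "Pearson correlation",
            "variables": ["genotype", "body weight"],
            "alpha": 0.05,
            "two_sided": True,
        },
    ],
    # Any analysis not listed above must be reported as exploratory.
    "exploratory_allowed": True,
}

print("Planned tests:", len(registration["planned_tests"]))
```

The point is simply that the planned analysis is on the record: a paper reporting a correlation with eating habits would visibly not match the registered primary outcome.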

If you were feeling particularly puritan, you could make people register the Introduction in advance too. Nominally, this is a statement of why you did the research, how it fits into the existing literature, what hypothesis you were testing and what you expected to find. In fact, it’s generally a retrospective justification for getting the results you did, along with a confident “prediction” that you were going to find … exactly what you found. This is not as serious a problem as publication bias, because everyone knows that it happens, and so no-one (except undergraduates) takes Introductions seriously. But writing Introductions that no-one can read with a straight face (“Oh sure, they really predicted that ahead of time”; “Ha, sure they didn’t just decide to do that post-hoc and then PubMed a reference to justify it”) is silly. Registration would be a way of getting everyone to put their toys away and get serious.

  • HolfordWatch

    Following your comment, we included this link as an update to Biased Reporting of Trials: This Is Why People Are Losing Trust in the Value of the Scientific Process.

    You write: “If you were feeling particularly puritan, you could make people register the Introduction in advance too. Nominally, this is a statement of why you did the research, how it fits into the existing literature, what hypothesis you were testing and what you expected to find”. I could just about cope with defining the hypothesis etc. up-front, but I certainly couldn’t write the literature review, as I would want to keep tinkering with it in the light of later publications. I might even change my mind about what I expect to see, in the light of intervening reports from other groups. But, mostly, I tinker. The idea of being able to leave something alone for several years – no.

    However, I strongly take your point about the utility of the pre-registration of intent.

    I’m also finding it increasingly difficult to take some large-scale trials seriously when there are 12 or more papers about the same cohort, all reporting something different – but with no cumulative Bonferroni correction applied to the data to determine significance or calculate the true CIs. There needs to be a pre-declaration of the number of parameters that will be evaluated in a cohort, and a commitment to use cumulative multiple-analysis correction in all of the papers.
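The cumulative correction described in the comment above can be sketched numerically. A plain Bonferroni rule divides the family-wise alpha by the total number of pre-declared analyses across the whole cohort; the 12-analysis count and the p-value below are hypothetical numbers chosen to match the comment’s scenario.

```python
def bonferroni_alpha(family_alpha, total_tests):
    """Per-test significance threshold under a Bonferroni correction."""
    return family_alpha / total_tests

# A cohort pre-registered for, say, 12 planned analyses spread across
# many papers: each analysis must clear 0.05 / 12, not 0.05.
threshold = bonferroni_alpha(0.05, 12)
print(f"Per-test threshold: {threshold:.5f}")  # 0.00417

# A result with p = 0.03 in paper #7 would pass the naive per-paper
# cutoff but fail the cumulative, pre-declared one:
p_observed = 0.03
print(p_observed < 0.05)       # True  (naive, per-paper)
print(p_observed < threshold)  # False (cumulative, pre-declared)
```

This only works if the total number of analyses is declared up front, which is exactly what registration would provide.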

  • Neuroskeptic

    Well, there’s nothing wrong with adding to the intro – and I’m one for last-minute tinkering as well. The big problem is when people say things in the Introduction which they probably came up with post-hoc but which, in theory, should have been written first.

    What we need, perhaps, is a pre-registered “Preface” setting out the hypothesis and why they chose to study it – then you could wax lyrical about the literature afterwards in the “Introduction” (mainly for the benefit of students and non-specialists).

  • pj

    I’m sympathetic to the idea of pre-registration of scientific studies, but I don’t think it is feasible in many fields as they are currently practised. Just think about the lengths people go to to avoid being ‘scooped’.

    But this approach wouldn’t help with the main problem anyway, which is that results are just not published or made available if they are not considered interesting enough, or contradict the particular meta-narrative a scientist is promoting. Things like Nature Precedings or the Journal of Negative Results don’t seem to have made much difference to that – and without the results being made available, all pre-registration does is tell you that someone else may or may not have completed a study on X, which may or may not have found something interesting, and which may or may not be published at some point in the future.

  • Neuroskeptic

    It’s no panacea – but if you see that Gene X has been studied 50 times, and only five of those studies were published, all finding a positive result, you’d know that something was going on. And when it came to meta-analysis, it would allow you to contact the authors and request their data. At the moment, no-one knows how much data there is out there not getting published.

    The big problem is, as you say, that scientists wouldn’t like it because of the risk of being “scooped”. Not sure what to do about that.

  • dvizard

    IMO, it would be no problem to avoid the fear of being scooped. One could, for example, create an internet database where you register your research intent ahead of time, and the only thing publicly visible would be, say, the date of registration, or even just the number of projects currently running in a given research group. All more detailed information would be kept private (even encrypted, to protect against database hacks or an admin sniffing around) until you were ready to publish the study; at that point you could make the information public, or even make it accessible only to reviewers, with an access code sent in together with the manuscript. Or whatever. There is a plethora of possibilities, and I agree that it would be a very important tool for reducing post hoc bias. If a very interesting point arises post hoc and it is really worth publishing, one could still publish it, but it would be no secret that the findings were not expected from the beginning.

  • Neuroskeptic

    My thoughts exactly! There’s nothing wrong with post-hoc analysis at all – the problem arises when you then publish the results of the one post-hoc analysis that gave you a good result and just pretend the other analyses never happened.





