Beware Reverse Publication Bias

By Neuroskeptic | February 23, 2012 7:32 am

In all the fuss over the pressure for scientists to publish positive results, we may have been missing an equally dangerous kind of publication bias operating in the opposite direction.

So say Luijendijk and Koolman in the Journal of Clinical Epidemiology: The incentive to publish negative studies: how beta-blockers and depression got stuck in the publication cycle.

The background here is the possible link between beta blockers and depression. Beta blockers are drugs widely used to treat high blood pressure. Some studies have reported that they raise the risk of depression, though many others found no link. Propranolol is said by some to be the worst offender because it’s best at entering the brain.

Luijendijk and Koolman say that beta blocker-depression studies have appeared in the form of “publication cycles” – first a positive study appears, and then negative ones follow. Then another study finds a positive link using a different method – and rebuttals, using those methods, soon appear. They sketch out several such positive-negative cycles based on different methods and particular hypotheses.

Now, there are two ways to look at this. You could explain it in terms of standard positive publication bias. Maybe lots of people looked into a possible link, and the ones who found nothing didn’t publish. Then someone, by chance, did find an association with depression, and they published it. Once that happened, the question became a hot topic, so the unpublished negative studies were dusted off and submitted.

But there’s a more worrying possibility. What if the original positive studies were correct, and the subsequent negative studies were the product of an inverse publication bias in favor of contrarian negative results?

The publication cycles in the literature about beta-blockers and depression seem to suggest that the very publication of positive studies, whether true or false, increases the incentive to publish negative results, whether true or false… [in the case in question] the first as well as a significant number of subsequent negative studies were published in high-impact journals (8 of 19 journals with 2009 impact factor greater than 4.0). Third, power analysis showed that in two cycles, the first negative studies were underpowered…

If a true-positive study stimulated the publication of one or more false-negative studies, again an invalid picture of the true association would emerge. Publication of false-negative studies may thus give rise to publication bias, just like publication of false-positive studies. Research groups usually compete to get the first positive study published in a high-impact journal. It has been suggested that it could also be worthwhile to aim at getting the first study that challenges the former published.
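The quoted power analysis can be illustrated with a quick back-of-envelope sketch using the usual normal approximation for a two-sample comparison. The effect size and sample size below are hypothetical choices for illustration, not figures from the paper:

```python
from math import sqrt
from statistics import NormalDist

def two_sample_power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test:
    the probability of detecting a true standardized effect d."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    ncp = d * sqrt(n_per_group / 2)  # noncentrality under the alternative
    null = NormalDist()
    # P(reject H0) when the test statistic is centred on ncp
    return (1 - null.cdf(z_crit - ncp)) + null.cdf(-z_crit - ncp)

# A modest effect (d = 0.3) with 40 patients per arm:
power = two_sample_power(0.3, 40)
print(round(power, 2))  # well below the conventional 0.8 target
```

A "negative" result from a study like this says very little: with roughly one-in-four odds of detecting even a real effect, failure to find one is close to the default outcome.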

This is not an entirely new idea. It was described in the classic Why Most Published Research Findings Are False, but only in passing.

To be honest it’s impossible to know, in any particular case, whether inverse publication bias is at work. Depending upon whether you think beta blockers cause depression (and that’s still controversial), your interpretation of the biases in the literature will probably differ.

However, I think the basic idea is important. Publication bias isn’t a bias in favor of positive results per se. It’s a bias towards “interesting” results – which in most cases means positive ones, but which could equally well include negative ones in certain contexts. In some ways this could be a good thing, if the negative and positive biases eventually cancelled out, leaving a level playing field; but there’s no guarantee that would ever happen.

As for how to fix publication bias – my opinions on that question are well known…

Luijendijk, H., and Koolman, X. (2012). The incentive to publish negative studies: how beta-blockers and depression got stuck in the publication cycle. Journal of Clinical Epidemiology. DOI: 10.1016/j.jclinepi.2011.06.022

  • Bjoern Brembs

    From looking at a number of these meta-analyses over the last weeks, to me it boils down to a numbers game: such studies are good candidates for estimating actual effect size. These calculations get more accurate as the number of studies increases and the funnel graphs will reveal any bias:
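How selective publication distorts a pooled estimate – and why a funnel plot can expose it – can be sketched in a small simulation. All numbers here are illustrative assumptions, not data from any real study:

```python
import random
from statistics import mean

random.seed(1)

SE = 0.2           # assumed standard error, identical across studies
TRUE_EFFECT = 0.0  # assume the drug actually does nothing

def run_study():
    """One study's observed effect: truth plus sampling noise."""
    return random.gauss(TRUE_EFFECT, SE)

all_studies = [run_study() for _ in range(1000)]
# Positive publication bias: only significant positive results appear.
published = [e for e in all_studies if e > 1.96 * SE]

print(f"mean of all studies:      {mean(all_studies):+.3f}")  # ~ 0
print(f"mean of published subset: {mean(published):+.3f}")    # inflated, ~ +0.47
```

In a funnel plot, such a censored literature shows up as a missing corner: the small or null effects that were run but never published. The same logic works in reverse for a bias toward contrarian negative results.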

  • Burak Uzel

    These long-running debates over the preponderance of positive-result publications will not subside any time soon, I guess. However, for me, 15 years into practice, every publication that I read reminds me: “Ars longa, vita brevis, occasio praeceps, experimentum periculosum, iudicium difficile.”

  • Joe McCarthy

    I would not be surprised by either of the two potential explanations proposed here. However, I also believe that this back and forth of claims and counter-claims – or results and refutations – is part of the natural progression of science. As scientists, we are trained to be skeptical, and it is far easier to be skeptical about others' work than our own. It is important to pay particular attention to the limitations – explicit or implicit – of any particular study, but I think it's generally a Good Thing that we are exposed to alternative evidence-based perspectives.

  • Negative Results Journal

    Interesting analysis and post. There are also journals covering negative results specifically, like The All Results Journals:
    I think publication of negative results will decrease publication bias, and journals like these will be of high value for the development of science.

  • Eric Charles

    Good point, Joe! The back and forth doesn't necessarily indicate that something is wrong. If the cycle is too regular, though, we should be worried. In most lab sciences, it would be odd for one lab to prove something, a single other lab to replicate and refute, a single other lab to replicate and support, etc. Especially given how long the publication window is. If a result is sufficiently controversial and either the effect size is small or the mediating conditions delicate, we should expect a mess of replications and failures to hit nigh simultaneously. Thus, too much order does suggest (but not prove) bias. It suggests that several manuscripts with various results were available, and preference was given to whichever ones contradicted the most recently published papers.

    P.S. Bjoern… but… but… a larger sample (of study results) will only produce a better estimate of the effect size if there is no publication bias. If there is bias in favor of large (or small) effect sizes, then more published studies won't help.

  • Neuroskeptic

    Eric: Right. That's the problem with publication bias though – you can never be sure about it.

    In some ways it would be better if it always happened, so we could at least account for it! But we can't even do that.

    Science shouldn't be about judging whether a pattern of publication “looks dodgy”, although it increasingly is (and no amount of funnel plots or p-curves can substitute for judgements of dodginess); which is why I'm increasingly feeling that radical solutions, such as those I linked to at the end of the post, are justified.

  • Eric Charles

    I think the more radical solutions would be good for the field… I just can't predict how. I don't think it would stop the occasional unscrupulous person from fudging data. It also wouldn't stop people from ending studies early for a vast variety of reasons, and not updating the registry to explain what happened.

    That said, it would certainly change how we do meta-analyses, it would stop some types of fraud, and it might help with how credit is given to researchers (e.g. if someone registered a planned study first, but finished it second or third for some logistic reason, they could still get some type of “first to think of it” credit).

  • practiCal fMRI

    And there's always the question of methodology: suitable, or not? The unspoken assumption seems to be that all methods were appropriate yet the results conflicted.

    In my experience – and being in fMRI it's considerable in this regard – the most common reason for a negative result is a shitty experiment. I already reduced the number of papers I will review per year to the number I publish. I doubt many others would be any more enthusiastic to review a lot of flaky studies. (Maybe a reason to have online publication without review, and let the masses go at it? Use fear of embarrassment to tighten up methods?)

  • Zigs

    This comment has been removed by the author.

  • Zigs

    I think that it is important to differentiate between positive results from experiments and positive results from epidemiological research.

    A positive result from observational studies may be due to a bias that was not adequately accounted for. A positive result on the primary outcome of an experiment means that the observed difference would have had less than a 5% chance of arising if there were no real difference between groups (as you know, things get complicated when you start doing secondary analyses).

    The problem with negative experimental studies is that they are not always meaningful. If you get a negative result, it can be for many reasons that may have absolutely nothing to do with whether or not there is a real difference between 2 groups.

    For example, a randomized trial can be negative if the medication is overdosed, underdosed, dosed too quickly, if patients are poorly selected, if there are too many drop outs, etc., etc., etc.

    An excellent example of negative studies that don't mean much are the aripiprazole bipolar depression trials (Thase, J Clin Psychopharm, 2008). The medication was overdosed and increased too quickly and the studies failed (even though there is a very nice separation from placebo from weeks 1 through 6, by week 8 there was no difference). Including these 2 studies in a meta-analysis would lead one to believe that there is no benefit at all to the medication when the study indicates that there may be some benefit, but it was dosed incorrectly.
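The dilution worry here can be made concrete with a toy inverse-variance (fixed-effect) pooling, the standard way trials are combined in a meta-analysis. The effect estimates and standard errors below are hypothetical, not the actual aripiprazole data:

```python
# Each study: (effect estimate, standard error). Hypothetical numbers.
studies_good_dosing = [(0.40, 0.15)]
studies_bad_dosing = [(0.02, 0.15), (-0.01, 0.15)]  # "failed" trials

def pool(studies):
    """Fixed-effect inverse-variance pooled estimate:
    each study weighted by 1 / SE^2."""
    weights = [1 / se ** 2 for _, se in studies]
    return sum(w * e for (e, _), w in zip(studies, weights)) / sum(weights)

print(round(pool(studies_good_dosing), 3))
print(round(pool(studies_good_dosing + studies_bad_dosing), 3))  # pulled toward zero
```

If the "failed" trials failed for reasons of design (dosing, titration speed, patient selection) rather than pharmacology, the pooled estimate is answering a different question than the one the meta-analyst thinks they are asking.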

  • Ivana Fulli MD

    The subject of this post is very important and your proposals are worthy of interest.

    Another problem will remain though. I will call it the “Prejudice Publication Bias”:

    Unconventional ideas, or ideas coming from a non-specialist, are too difficult to get tested.

    Published positive results found for unconventional treatments are dismissed as irrational or ignored.

    Beware that it is not the privilege of homeopathic remedies, where you have to admit that the science supporting efficacy is weak.

    But hormones for mood disorders, or even the Baclofen treatment of alcohol addiction, suffer from prejudice because outsiders are promoting them.

    There is a clear possible GABA mechanism of action for Baclofen, but as a treatment of alcohol addiction it has been pushed by clients against the “academia”, after being invented by an alcohol-dependent cardiologist.

    Despite a 2005 case report and a 2007 positive clinical trial in alcohol-use disorders, both published in The Lancet, and despite phenomenal client interest in Baclofen, with dedicated internet forums and big internet sales, the inventor of the treatment, Dr Olivier Ameisen (a cardiologist and an alcohol addict himself, but not an addiction doctor), had to protest when it was left out of a 2009 review of treatments for alcohol addiction in cirrhotic clients, even though Baclofen alone is not metabolized by the liver and those clients clearly live longer if sober:

    “In the treatment of alcohol dependence, Marc Schuckit (Feb 7, p 492)1 omits baclofen. This is most regrettable (…).”

    To say nothing of the fact that physicians have to let clients prescribe for themselves through the internet, or take the personal risk of a lawsuit for an off-label prescription in case of an accident.

  • John

    Thanks Neuroskeptic,

    This issue is of interest because of beta blockers. I have often wondered why so many anti-depressants, often regarded as being all about serotonin, exhibit important norepinephrine agonist effects. So it would have been nice to have long known about this negative result…




About Neuroskeptic

Neuroskeptic is a British neuroscientist who takes a skeptical look at his own field, and beyond. His blog offers a look at the latest developments in neuroscience, psychiatry and psychology through a critical lens.




