The Myth of the Optimism Bias?

By Neuroskeptic | June 3, 2016 12:42 pm

Are humans natural, irrational optimists? According to many psychologists, humans show a fundamental optimism bias, a tendency to underestimate our chances of suffering negative events. It’s said that when thinking about harmful events, such as contracting cancer, most people believe that their risk is lower than that of ‘the average person’. So, on average, people rate themselves as safer than the average. Moreover, people are also said to show biased belief updating. Faced with evidence that the risk of a negative outcome is higher than they believed, people don’t increase their personal risk estimates properly.

But now a group of researchers, led by first author Punit Shah of London, has criticized the theory of biased belief updating and, by extension, the whole optimism bias model. Shah et al. say that optimism bias may be a mere statistical artifact, a product of the psychological test paradigms used to assess it. They argue that even perfectly rational, unbiased individuals would seem ‘optimistic’ in these tests. Specifically, the authors say that the apparent optimism is driven by the fact that negative events tend to be uncommon.

The new work builds on a 2011 paper by Adam J. L. Harris and Ulrike Hahn, also authors of the present paper. The 2011 article criticized the claim that people show an optimism bias by rating themselves as safer than the average. The new paper takes aim at biased belief updating. Here’s how Shah et al. describe their argument:

New studies have now claimed that unrealistic optimism emerges as a result of biased belief updating with distinctive neural correlates in the brain. On a behavioral level, these studies suggest that, for negative events, desirable information is incorporated into personal risk estimates to a greater degree than undesirable information (resulting in a more optimistic outlook).

However, using task analyses, simulations and experiments we demonstrate that this pattern of results is a statistical artifact. In contrast with previous work, we examined participants’ use of new information with reference to the normative, Bayesian standard.

Simulations reveal the fundamental difficulties that would need to be overcome by any robust test of optimistic updating. No such test presently exists, so that the best one can presently do is perform analyses with a number of techniques, all of which have important weaknesses. Applying these analyses to five experiments shows no evidence of optimistic updating. These results clarify the difficulties involved in studying human ‘bias’ and cast additional doubt over the status of optimism as a fundamental characteristic of healthy cognition.

I asked Shah and his colleagues to explain, in a nutshell, the case against the optimism bias in belief updating. They said:

All risk estimates have to fit into a scale between 0% and 100%; you can’t have a chance of getting a heart attack at some point in your life of less than 0% or greater than 100%. The problems for the update method arise from the fact that the same ‘movement’ in percentage terms means different things in different parts of the scale.

Someone whose risk decreases from 45% to 30% has seen their risk cut by a third, whereas someone whose risk increases from 15% to 30% has seen their risk double, a much bigger change. So the same 15% difference means something quite different depending on whether you have to revise your beliefs about your individual risk downwards (good news!) or upwards (bad news!) toward the same percentage value. The moment people’s risk estimates are influenced by individual risk factors (a family history of heart attack increases your personal risk by a factor of about 1.6), people should change their beliefs by different amounts, depending on the direction of the change. The update method falsely equates the 15% in both cases.

If the difference in belief change simply reflects these mathematical properties of risk estimates then one should see systematic differences between those increasing and those decreasing their risk estimates regardless of whether they happen to be estimating a negative or a positive event. But in the first case, this will look like ‘optimism’, in the second case it will look like ‘pessimism’. This is the pattern our experiments find…

The evidence base thus seems far less stable than previously thought. There is, across various paradigms, plenty of evidence for optimism in particular real-world settings, such as sports fans’ predictions and political predictions, but these only show that certain people may be optimistic in certain situations, not the general optimistic tendency across situations that would be required to say people are optimistically biased. It is also important to note that because this belief-updating paradigm has been used in so many neuroscience studies, those neuroscience data are also uninterpretable.
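The scale asymmetry the authors describe can be illustrated with a short sketch. The numbers are hypothetical, and applying a risk factor by multiplying odds is an illustrative assumption here, not the paper's exact model:

```python
def apply_risk_factor(base_rate, factor):
    """Apply a multiplicative risk factor on the odds scale,
    so the result always stays between 0 and 1."""
    odds = base_rate / (1 - base_rate)
    new_odds = odds * factor
    return new_odds / (1 + new_odds)

# The same 15-point move means very different relative changes:
print(30 / 45)  # 45% -> 30%: the ratio shows risk cut by a third
print(30 / 15)  # 15% -> 30%: the ratio shows risk doubled

# An agent applying the same risk factor (~1.6, the authors'
# family-history example) shifts by different point amounts
# depending on where on the scale the starting estimate sits:
for base in (0.10, 0.30, 0.50):
    updated = apply_risk_factor(base, 1.6)
    print(f"base {base:.0%} -> personal {updated:.1%} (shift {updated - base:+.1%})")
```

Averaging raw percentage-point updates across ‘good news’ and ‘bad news’ trials ignores this position-dependence, which is exactly the equating the authors object to.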

In my view, Shah et al. make a strong case that the evidence for optimism bias needs to be reexamined. Their argument makes a crucial prediction: that people should show a ‘pessimistic’ bias (the counterpart of the optimism bias) when asked to rate their chance of experiencing rare, positive events. In the new paper, the authors report finding such a pessimistic bias in a series of experiments. But perhaps they should team up with proponents of the optimism bias and run an adversarial collaboration to convince the believers.

Punit Shah, Adam J. L. Harris, Geoffrey Bird, Caroline Catmur, & Ulrike Hahn (2016). A Pessimistic View of Optimistic Belief Updating. Cognitive Psychology.

  • OWilson

    Optimists look for ways out of the swamp.

    Pessimists just wallow in it.

    If you are a world traveller, you know what I am talking about :)

  • Uncle Al

    “Are humans natural, irrational optimists?” In Upper West Side Manhattan, yes. In Brownsville, Brooklyn, no. Along the Nairobi Highway (Harbor Freeway) it depends on how insurance fraud prosecutions are being pursued.

    • Joe Miller

      You’ve got a great sense of humor. I always enjoy your posts because of that.

  • PsyoSkeptic

    Hmmm prove the point? Is it a rare event in this field that people don’t attempt to argue against a perfectly rational position to defend an entrenched incorrect one? Your recommended solution already predicts that the reaction from proponents of optimism bias will be to deny. Your recommendation would be expected if there was just bias against rare events and an agreement from optimism bias researchers wasn’t readily forthcoming.

  • zlop

    Life is a positive presumption.
    (worked — I am not banned from here)

  • D Samuel Schwarzkopf

    Interesting post! As you know I’m in general a big fan of adversarial collaborations – but if, as seems to be the case here, the evidence is a statistical artifact, is this really necessary? Presumably all you’d need to do is reanalyse the existing data with the appropriate statistics?

    • Neuroskeptic

      Re-analysis of the existing data might be an alternative, but I’m not sure what the correct analysis would be. AFAIK from reading the paper, testing whether rare positive events produce a pessimistic bias is the most direct way of testing Shah et al.’s approach.

      • Punit Shah

        Neuroskeptic – thanks so much for covering our work and I am glad it has generated some discussion. Re-analysis of existing data is a good idea, as is the suggestion of an adversarial collaboration. There is unfortunately no completely correct analysis, because the optimistic updating task has inherent problems which generate statistical artifacts that look like optimism bias. However, some analyses are more appropriate than others (where one can try to control for some of the limitations of the task), and it was important to use positive as well as negative life events (something that surprisingly hadn’t been done before). And when we included positive events and performed some of this more appropriate analysis, we couldn’t find evidence for optimism bias. So, an adversarial collaboration using the same paradigm would probably not be that useful even if the data were analysed as well as they can be. Instead, a new test of optimism bias is required, for which an adversarial collaboration would be helpful.

        • Chris Chambers

          If you can organise an adversarial collaboration, please do consider submitting it as a Registered Report.

          It would be a perfect fit for RRs at Cortex

          or at Royal Society Open Science

          • affectivebrain

            We would love to do this! Our response paper is under review (the one where we show no flip for positive stimuli and explain why their critique is wrong), but we would be happy to work with Shah et al.

          • Chris Chambers

            Excellent – it sounds like this could be a perfect way to move the area forward.

            As Registered Reports editor at Cortex and RSOS, please don’t hesitate to contact me if I can help to facilitate.

          • D Samuel Schwarzkopf

            My original comment was based on my reading of Neuroskeptic’s post. But this discussion suggests the situation is a lot more nuanced than it came across and I’ll read all the papers everybody linked to! Either way, it does indeed seem like an adversarial collaboration could be of great interest here.

      • affectivebrain

        Neuroskeptic – we would love to explain the myth of the myth of the optimism bias. If you would like to interview the other side please do reach out!

    • affectivebrain

      Sam – we hope that you read our comment and the papers we posted above. We would love to hear your thoughts after! Thanks!

    • Ulrike Hahn

      I think you’re right that an adversarial collaboration makes no sense unless one uses a different method!

      It’s not possible to reanalyse the bulk of the existing data either though: the vast majority of past studies using this method simply didn’t collect essential information:

      They simply asked people about their own individual risk and then told them the average risk, and then asked again about individual risk. They never asked participants for their beliefs about the average person’s risk.

      But you need *both* information about the average and individual risk factors to make optimal predictions about probabilities of experiencing future life events.

      We published a paper in 2013 in the Proceedings of the 35th Annual Meeting of the Cognitive Science Society that made this point and highlighted some of the consequences of this error.

      Subsequently, a few studies took this up and also asked for estimates of the average risk.

      But while this is a precondition for being ‘in the right game’, it doesn’t in and of itself solve all the statistical problems. And none of the past studies (even the two or three that subsequently asked for estimates of average risk) have solved these.

  • M Peirce

    From the explanation in the second paragraph of the second box, it seems the authors are pointing out a common error in statistical reasoning. Consider that a genetic risk factor could make your risk 1000-fold higher than it is for people without that factor. Without further context, that sounds like reason for concern. But if the initial average risk is, say, .00001%, a thousand-fold increase leaves you with a still negligible risk of .01% (where the negligibility still depends on the value of what’s at risk). On the other hand, increasing your risk by a “mere” half of your current risk could be quite concerning, especially if your initial risk was high (e.g. if you started with 50-50 odds).

    If I am understanding the nature of the criticism correctly, that this is the source of errors (conflating percent of increase/decrease with the increase/decrease in percent), then the conclusion that is warranted is to reject that optimism is the cause of the bias, not to reject that we tend to be irrationally optimistic. Regarding the causal explanation, we are instead prone to mis-update risk increases or decreases when the starting points are at far ends of the scale; at one end of the scale, with one framing, optimism is the result, at the other end of the scale with another framing, pessimism is.

    But also on this point, it seems odd that irrational risk averseness, such as fear of flying – pessimism? – is not also pointed out and discussed, since that seems to be another side of the same coin (e.g., a 200% increase in commercial flight crashes tends to lead to a significant reduction of faith in the safety of flying, even though the risk of injury is still too negligible to worry about).
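The relative-versus-absolute distinction in the comment above is easy to check numerically (the figures are the commenter's own hypothetical ones):

```python
# A 1000-fold relative increase on a tiny base risk:
base = 0.00001 / 100            # 0.00001% expressed as a probability
print(f"{base * 1000:.4%}")     # still only 0.0100% in absolute terms

# By contrast, a 'mere' 50% relative increase on even odds:
print(f"{0.50 * 1.5:.0%}")      # 50% -> 75%, a large absolute jump
```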

  • affectivebrain

    The Shah critique is completely wrong for a very simple reason, addressed here: there is an optimism bias for positive stimuli.

    We too were persuaded at first read, back in 2012 when they first attempted to publish in another journal. We took their claim seriously and carefully read all 150 pages including the appendix. Turns out there was a simple artefact in their stimuli that caused the flip – they used a skewed set of base rates. We simulate this flip ourselves and explain it here:

    You do not even need a Bayesian explanation; it is much simpler (see pp. 6-8): a monkey would get the same results.

    Once you use a set of positive stimuli that is not skewed you do not get a bias in simulated data but you do see an optimism bias in real human data! In other words, we show no flip in optimism bias for positive events.

    The second claim that they make about misclassification does not hold empirically. We show this here:

    We hope people will carefully read the papers above (and examine Shah’s paper carefully including appendix) before reaching a conclusion.

    P.S. – At the first two journals in which the authors attempted to publish the paper, the editors first approached us to respond and then sent our response together with the paper to reviewers. In both cases the reviews recommended rejection (5 out of 6 reviewers recommended rejection after reading our response). In this last journal the editor did not ask us to write a response before sending it to review, and thus Shah’s mistakes were missed by those 2 reviewers. Our suggestion is that editors reviewing critique papers should ask for a response first and send it together with the critique.

    • Daniel Bennett

      The working paper linked above claims that it is possible for optimistic belief updating to be an artefact, but that this can occur only for (a) negative events, under (b) a ‘bottom-heavy’ distribution of events (Figures 1A and 2A of the working paper).

      But isn’t this exactly the distribution of events found in the Sharot et al. (2011) paper (as summarised in Figure 11 of the Shah paper)? As such, doesn’t your own working paper provide evidence that the optimistic effect of Sharot et al. (2011) could be an artefact?

      • affective brain lab

        Hi Daniel. No, because what that paper shows is that if, in the simulation, you control for this by controlling for estimation errors (as the original paper does), the simulation shows no bias, but the human data does. In fact, the 2011 paper takes very careful measures to control for this, in addition using a restricted rating scale (3-77) for stimuli ranging from 10-70. This places the stimuli in the middle and allows the extent of overestimation to be equal to the extent of underestimation. The 2011 paper in fact shows that the neural data correlate with estimation errors (not update) and that a learning parameter (the relationship between the estimation error and the update) differs for good and bad news (rather than simple update). This debate, as you see, is much more nuanced than a headline of “no optimism” would have it, and it would be to the readers’ advantage if the blog considered the other side in the main text, not just the comments.

        • Daniel Bennett

          Thanks for getting back to me. I certainly agree that it’s to readers’ advantage to have this material aired in public. I would contend that facilitating post-publication discussion is one of the advantages of having the authors’ commentary published, rather than letting it die in the review process (as this one apparently almost did!). I certainly would agree with you that the question is nuanced enough that neither side’s argument should be dismissed out of hand.

          Would you mind expanding a little on how the original paper controlled for estimation errors in its behavioural analysis? This seems to me to be a particularly important point, and on a quick look through the paper I couldn’t see anything.

          • Adam Harris

            Hi Daniel and affectivebrain. In addition to my earlier reply to affectivebrain, it is worth also adding that p. 8 of the shared in-prep paper (Garrett & Sharot) suggests that including the difference in estimation error between bad news and good news trials is critical. Although our paper argues that this is not an appropriate control given a normative classification of the task, in response to a reviewer’s comment this control was nevertheless included in all of the Shah et al. experiments, as stated in that manuscript.

      • Punit Shah

        That is an astute observation, Daniel. And thanks for having a look at our paper, and making it down to Figure 11 :) We have touched on this point and other issues raised above in a section appearing just before the General Discussion in the paper (Section 7.3).

        For more info, also see a nice explanation by Hahn and Harris (2014) about what makes something a bias, as this is relevant to deciding which life events should be picked in this kind of research.

        • Daniel Bennett

          Thanks for the links, I’ll check them out!

    • Matt Evans

      Whilst this is an interesting discussion, especially as someone from outside the field, I have to say that I find the tone of affectivebrain rather unprofessional. The postscript revealing information about the peer review process, in a rather unflattering way I must add, is not appropriate for an academic discussion.

      • D Samuel Schwarzkopf

        I disagree. While you can perhaps take issue with the tone, I think information about the peer review process actually puts cases like this into context. I am a strong advocate of open peer review and this is yet another case where this would greatly benefit readers from outside the field like myself. It would also benefit all of the authors because rather than being published in a different, lower profile journal the critique and the response should ideally be directly associated with the original publication. Being able to see not only both comments but the whole peer review process would better allow the interested non-expert readers to make up their own mind.

        • Matt Evans

          I am always the first person to talk about the benefits of open review. If done well and fairly, it can offer great insight into the work being presented.

          But that is NOT the situation here. This is a very bitter person maliciously giving facts out of context, to discredit a work they have vested interest in. That is in no sense open review, and is deeply unprofessional.

          I say again – the person who made these comments should consider how to act more professionally in public.

          • D Samuel Schwarzkopf

            Be that as it may, my point still stands: with open review (and dropping our obsession with journals while we’re at it) you could see it all and judge for yourself.

    • Adam Harris

      Hello everyone! To introduce myself, I’m the corresponding author on the Shah et al. paper. I just wanted to direct interested readers to a couple of important points in the Shah et al. manuscript that are relevant to the present discussion:

      1) Our argument does not relate to the fact that overestimates can be greater than underestimates (although that would clearly contribute to an artifactual effect as demonstrated in the SSRN manuscript). Please see the original blog post for our argument. No capping of the scale is required to observe this effect.

      2) The simulations in our paper do not involve a random selection of numbers, in contrast to the simulations in the shared in prep manuscript on SSRN: “we will randomly generate a first estimate for each trial for each ‘participant’. This will be a random integer…between 5 and 95…and generate a second estimate – a random integer between the first estimate and the information [the actual base rate]” (Garrett & Sharot, pp. 6-7). Rather, our simulations involve rational agents who are optimally performing the risk judgment task that they are being asked to do. For these agents, artifactual optimism (or pessimism) can be observed even if event frequency is centered at 50%.

      3) Our paper demonstrates that the ‘learning parameter’ is also susceptible to this artifact (e.g., from p. 31, and again in Section 7.3 that Punit has already referenced).
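The random-responder procedure quoted in point 2 (from the Garrett & Sharot manuscript) can be sketched in a few lines. This is an illustrative simulation under assumed parameters (base rates drawn from 5-35 to mimic a bottom-heavy distribution), not the manuscript's exact setup; it shows how, with rare events, even an agent answering at random produces larger average updates after desirable information:

```python
import random

random.seed(0)

def random_agent_updates(n_trials, base_lo, base_hi):
    """Simulate an agent whose first estimate is a random integer in
    5-95 and whose second estimate is a random integer between the
    first estimate and the presented base rate (per the quoted
    Garrett & Sharot procedure)."""
    good, bad = [], []
    for _ in range(n_trials):
        base = random.randint(base_lo, base_hi)
        first = random.randint(5, 95)
        lo, hi = sorted((first, base))
        second = random.randint(lo, hi)
        update = abs(second - first)
        if base < first:       # desirable news: risk lower than estimated
            good.append(update)
        elif base > first:     # undesirable news: risk higher than estimated
            bad.append(update)
    return sum(good) / len(good), sum(bad) / len(bad)

# Bottom-heavy base rates, as for rare negative events:
g, b = random_agent_updates(20000, 5, 35)
print(f"mean 'good news' update {g:.1f} vs 'bad news' update {b:.1f}")
```

Because low base rates cap how far a first estimate can sit *below* the truth but not how far it can sit *above* it, the gap (and hence the random update) is larger on good-news trials. Whether the published stimulus sets actually have this skew is precisely what the parties above dispute.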

  • Tali Sharot

    Shah’s paper and critique of the optimism bias is completely wrong. An optimism update bias has indeed been shown for positive stimuli; see also Korn et al., 2012 (J of N) and Mobius et al., 2012.

    Shah is the only one to see a flip because there is a confound in his study – see pp. 5-9 of the link above, where the confound is revealed.

    Also, he uses nonsense stimuli like: “what is the likelihood that you will eat at your favourite restaurant in your lifetime?” (I would think 100%!) or “what is the likelihood that you will receive a present in your lifetime?”

    It has been shown in 3 separate papers from independent groups that asking subjects for base rates as well as own risk and using those statistics in the analysis as Shah suggests still results in optimistic update bias:
    Kuzmanovic et al., 2016; Kuzmanovic et al., 2015, Garrett et al., 2014

    This critique should be evaluated with caution.

  • Ulrike Hahn

    I’m one of the co-authors of the Shah et al. paper: it’s really important to stress that our paper shows that the method used in past optimistic updating studies is simply not fit for purpose.

    The method makes unbiased, non-optimistic, perfectly rational agents look biased and irrational. So when we observe those patterns in human participants it tells us *nothing* about what human participants are doing.

    This means that the past studies using these methods are uninterpretable. The problems also can’t be fixed simply by ‘choosing the right kinds of events’. Different, better methods need to be found.

    • Daniel Bennett

      I think that’s an interesting point. What do you think those better methods would look like? Ward Edwards-style ‘book-bag’ tasks with poker chips in bags would give you more interpretable probability estimates, but how do you give people good news and bad news about such anodyne events?

      • affectivebrain

        Daniel – an optimism update bias has indeed been shown for low-level RL tasks, which is why this critique is moot.
        There are other tasks as well which use chip-like tasks that show the same thing; for example, the PhD thesis of Cahill from Harvard (to be published soon). He will probably send you a copy if you ask.

      • Ulrike Hahn

        There’s a fairly long research tradition of looking for evidence of ‘wishful thinking’ in the lab. The results have led researchers to speak of the ‘elusive wishful thinking effect’.

        Of course, as you say, the difficulty is doing studies where the events really matter to people in the same way that thinking about one’s risk of divorce or risk of getting cancer does.

        So, one can argue about how much results with ‘anodyne’ events really tell us.

        That said, there are results from studies using such events that show the opposite of ‘wishful thinking’, namely a tendency to over-estimate the probability of negative events (a ‘severity bias’).

        It’s worth noting also that even in classic comparative optimism studies people over-estimate the probabilities of the negative future life events typically studied (cancer, divorce etc. all of which are -luckily!- comparatively rare). In that sense, too, people are ‘pessimistic’.


  • Punit Shah

    Has anyone else noticed a peculiar pattern of ‘up-voting’ in the comments section (image attached)?

    • D Samuel Schwarzkopf

      I typically upvote comments to move the comments I’d like to see discussed to the top but in all honesty I think it’s a terrible feature. I wonder if one can’t just set the default option to ‘sort by newest’?

    • Tali Sharot

      Punit – that Tali is actually not me – I assure you.




About Neuroskeptic

Neuroskeptic is a British neuroscientist who takes a skeptical look at his own field, and beyond. His blog offers a look at the latest developments in neuroscience, psychiatry and psychology through a critical lens.

