New Point of Inquiry: Hugo Mercier — Did Reason Evolve for Arguing?

By Chris Mooney | August 16, 2011 8:25 am

The latest episode of Point of Inquiry is now up, and Hugo Mercier himself is responding in the comments section.

Here is the show write-up:

Why are human beings simultaneously capable of reasoning, and yet so bad at it? Why do we have such faulty mechanisms as the “confirmation bias” embedded in our brains, and yet at the same time, find ourselves capable of brilliant rhetoric and complex mathematical calculations?

According to Hugo Mercier, we’ve been reasoning about reason all wrong. Reasoning is very good at what it probably evolved to let us do—argue in favor of what we believe and try to convince others that we’re right.

In a recent and much discussed paper in the journal Behavioral and Brain Sciences, Mercier and his colleague Dan Sperber proposed what they call an “argumentative theory of reason.” “A wide range of evidence in the psychology of reasoning and decision making can be reinterpreted and better explained in the light of this hypothesis,” they write.

Given the discussion this proposal has prompted, Point of Inquiry wanted to hear from Mercier to get more elaboration on his ideas.

Hugo Mercier is a postdoc in the Philosophy, Politics, and Economics program at the University of Pennsylvania. He blogs for Psychology Today.

Listen to the full show here.

Comments (5)

  1. Matt

    Would this mean reasoning is the result of sexual selection?

  2. No, it’s totally independent of sexual selection. Geoffrey Miller suggested something along these lines (in The Mating Mind), but that’s not at all plausible. Intuitively (and from experience…), I’d say that argumentation is not necessarily the best mating strategy.

  3. Nullius in Verba

    Would there not also be a selective pressure to accurately judge the arguments of others?

    If one persuades many, to their ruin, one profits, but many lose. Does the gain of one outweigh the losses of the many? And if an indiscriminate suspicion of glib arguers were to be selected for, what of the advantages of cooperation? Surely, each argument is meant not only to be told, but to be listened to. Are the listeners not best served by the most accurate judgements?

    It has long seemed to me the easiest solution to why our reasoning is flawed is that computation is costly. It costs energy, attention, a huge and delicate brain straining at the limits of anatomical possibility, and time to think. It is designed by a blind and fallible process, out of materials not best suited to the task.
    Why is the mammalian retina wired up backwards? Why does the recurrent laryngeal nerve follow the route it does? Why can’t humans digest cellulose? We don’t expect bodies to be perfect – quite the reverse; it is truly astonishing that the brain is as accurate and effective as it is.

    You can get 80% of the effect for 20% of the effort, and vice versa. We use heuristics that work most of the time – especially for problems of the sort faced on the plains of Africa. One rarely has the precise information needed for finer judgements; a coarse approximation is justified by the quality of the inputs. (Precise odds processed with Bayesian exactitude would be a good case of the ludic fallacy.)

    And while the need to make a good argument does explain confirmation bias quite well – or the need for resistance to too hasty persuasion for that matter – what of all the other fallacies: correlation implying causation, affirming the consequent, the conjunction fallacy, illicit major, and so on? Would we not predict from the hypothesis that fallacies would all be of a type to make confirmation of belief easier, when many of these can fall either way with equal facility?

    An interesting hypothesis – I will be interested to see how well you argue in its defence.
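
    [Editor's note: the conjunction fallacy listed above comes down to a basic probability inequality: a conjunction can never be more probable than either of its conjuncts, P(A and B) ≤ P(A). A minimal sketch over a hypothetical four-outcome space; the property names are illustrative, not from the discussion.]

```python
from fractions import Fraction

# Toy sample space of four equally likely outcomes, Linda-problem style.
# "bank_teller" and "feminist" are hypothetical properties for illustration.
outcomes = [
    {"bank_teller": True,  "feminist": True},
    {"bank_teller": True,  "feminist": False},
    {"bank_teller": False, "feminist": True},
    {"bank_teller": False, "feminist": False},
]

def prob(pred):
    """Probability of an event under the uniform distribution on outcomes."""
    return Fraction(sum(1 for o in outcomes if pred(o)), len(outcomes))

p_teller = prob(lambda o: o["bank_teller"])
p_both   = prob(lambda o: o["bank_teller"] and o["feminist"])

# The conjunction can never exceed either conjunct:
# every outcome satisfying both properties also satisfies each one alone.
print(p_teller, p_both)  # 1/2 1/4
```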

  4. Thank you for your comment!

    >Would there not also be a selective pressure to accurately judge the arguments of others?

    Yes, that is half our theory.

    >If one persuades many, to their ruin, one profits, but many lose. Does the gain of one outweigh the losses of the many? And if an indiscriminate suspicion of glib arguers were to be selected for, what of the advantages of cooperation? Surely, each argument is meant not only to be told, but to be listened to. Are the listeners not best served by the most accurate judgements?

    Yes. For communication to be stable in any species, it has to benefit both senders and receivers; it can’t be mere manipulation.

    >It has long seemed to me the easiest solution to why our reasoning is flawed is that computation is costly. It costs energy, attention, a huge and delicate brain straining at the limits of anatomical possibility, and time to think. It is designed by a blind and fallible process, out of materials not best suited to the task.

    Two problems with that view: 1) it makes few, if any, testable predictions; 2) the biases of reasoning are systematic, not random errors.

    >Why is the mammalian retina wired up backwards? Why does the recurrent laryngeal nerve follow the route it does? Why can’t humans digest cellulose? We don’t expect bodies to be perfect – quite the reverse; it is truly astonishing that the brain is as accurate and effective as it is.

    >You can get 80% of the effect for 20% of the effort, and vice versa. We use heuristics that work most of the time – especially for problems of the sort faced on the plains of Africa. One rarely has the precise information needed for finer judgements; a coarse approximation is justified by the quality of the inputs. (Precise odds processed with Bayesian exactitude would be a good case of the ludic fallacy.)

    Yes, but biases are best understood as adaptive deviations stemming from an underlying sound heuristic, and that is how we treat reasoning.

    >And while the need to make a good argument does explain confirmation bias quite well – or the need for resistance to too hasty persuasion for that matter – what of all the other fallacies: correlation implying causation, affirming the consequent, the conjunction fallacy, illicit major, and so on? Would we not predict from the hypothesis that fallacies would all be of a type to make confirmation of belief easier, when many of these can fall either way with equal facility?

    Some of these are not always fallacious, and others have nothing to do with reasoning (e.g. conjunction fallacy).

    >An interesting hypothesis – I will be interested to see how well you argue in its defence.

    Thanks!
    If you’re really interested, I urge you to read our paper (which is long and technical, I’m afraid). It’s freely available here:

    http://journals.cambridge.org/action/displayJournal?jid=BBS&tab=mostdownloaded#tab

  5. Nullius in Verba

    #4,

    Thanks. Some good material there.

    “Yes, that is half our theory.”

    So I see. What I wasn’t sure of was why particular fallacies would be selected for if the aim was to judge accurately rather than according to prior beliefs.

    “Two problems with that view: 1) it makes no testable prediction, or not much; 2) the biases of reasoning are systematic, not random error.”

    It predicts that we would use quick-to-apply heuristics rather than exact rules. The biases of reasoning are systematic because we use imperfect heuristics. But it doesn’t suggest any system to the imperfections – so you would get many systematic imperfections that count against your own arguments as often as in their favour.

    So if you took a list of all possible syllogisms, valid and invalid, determined which ones people got right and wrong, and then showed that the particular ones people got wrong most often followed a pattern related to their use in arguments, that would demonstrate the hypothesis. Finding a random pattern, or patterns based on frequency of use or applying a smaller subset of rules would tell a different story. Picking out a few examples and explaining them in argumentative terms is, as you noted, at risk of being a “just so” story.
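
    [Editor's note: the exhaustive test proposed above is mechanically feasible. Monadic logic with three terms has the small-model property, so enumerating every way of inhabiting the eight A/B/C membership combinations decides validity. A minimal sketch under the modern reading (no existential import, so some classically valid forms come out invalid); the two example forms are illustrative.]

```python
from itertools import product

# A "type" is a triple (in_A, in_B, in_C); a model is the set of types
# that are inhabited. Brute force over the 2^8 models is complete for
# monadic logic with three predicates.
TYPES = list(product([False, True], repeat=3))

def holds(stmt, model):
    """Evaluate a categorical statement, e.g. ("all", 0, 1) = 'All A are B'."""
    quant, s, p = stmt
    if quant == "all":       # every inhabited type in s is also in p
        return all(t[p] for t in model if t[s])
    if quant == "no":        # no inhabited type is in both s and p
        return not any(t[s] and t[p] for t in model)
    if quant == "some":      # some inhabited type is in both s and p
        return any(t[s] and t[p] for t in model)
    if quant == "some_not":  # some inhabited type is in s but not in p
        return any(t[s] and not t[p] for t in model)

def valid(premises, conclusion):
    """Valid iff the conclusion holds in every model satisfying the premises."""
    for bits in product([False, True], repeat=len(TYPES)):
        model = [t for t, b in zip(TYPES, bits) if b]
        if all(holds(p, model) for p in premises) and not holds(conclusion, model):
            return False
    return True

A, B, C = 0, 1, 2
# Barbara: All B are C, All A are B, therefore All A are C -- valid.
print(valid([("all", B, C), ("all", A, B)], ("all", A, C)))  # True
# Undistributed middle: All A are B, All C are B, therefore All A are C -- invalid.
print(valid([("all", A, B), ("all", C, B)], ("all", A, C)))  # False
```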

    But I was mainly trying to test your hypothesis by comparing predictions with alternative hypotheses.

    Taking a hypothesis, making predictions from it, and confirming that those predictions are true doesn’t confirm the hypothesis. That is affirming the consequent: A implies B, B, therefore A.

    To test a hypothesis, you have to first show that it makes different predictions from all other hypotheses. Then checking the predictions disconfirms the alternatives, leaving the one you want. That’s modus tollens: A implies B, not B, therefore not A.
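
    [Editor's note: the two argument forms contrasted here can themselves be checked by brute force over truth assignments; the helper names below are illustrative.]

```python
from itertools import product

def implies(a, b):
    """Material conditional: a -> b."""
    return (not a) or b

def form_valid(premises, conclusion):
    """Valid iff every assignment satisfying the premises satisfies the conclusion."""
    return all(conclusion(a, b)
               for a, b in product([False, True], repeat=2)
               if all(p(a, b) for p in premises))

# Affirming the consequent: A implies B, B, therefore A -- invalid
# (counterexample: A false, B true).
print(form_valid([lambda a, b: implies(a, b), lambda a, b: b],
                 lambda a, b: a))       # False

# Modus tollens: A implies B, not B, therefore not A -- valid.
print(form_valid([lambda a, b: implies(a, b), lambda a, b: not b],
                 lambda a, b: not a))   # True
```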

    So if I can offer an alternative hypothesis that can explain the same observations (even if it doesn’t do so uniquely), it bears on whether the first hypothesis has been demonstrated or not. I’m not trying to prove my alternative.

    I’m sure you know all that, but I thought I’d clarify that that’s why I took the approach I did.

About Chris Mooney

Chris is a science and political journalist and commentator and the author of three books, including the New York Times bestselling The Republican War on Science--dubbed "a landmark in contemporary political reporting" by Salon.com and a "well-researched, closely argued and amply referenced indictment of the right wing's assault on science and scientists" by Scientific American--Storm World, and Unscientific America: How Scientific Illiteracy Threatens Our Future, co-authored by Sheril Kirshenbaum. They also write "The Intersection" blog together for Discover blogs. For a longer bio and contact information, see here.
