Is Reasoning Built for Winning Arguments, Rather Than Finding Truth?

By Chris Mooney | April 25, 2011 8:38 am

How is this for timing? Just as my Mother Jones piece on motivated reasoning came out, the journal Behavioral and Brain Sciences devoted an entire issue to the case for an “argumentative theory” of reason, advanced by Hugo Mercier of the University of Pennsylvania and Dan Sperber of the Jean Nicod Institute in Paris. You can’t get the article over there without a subscription, but it’s also available at SSRN, and here is the abstract:

Reasoning is generally seen as a means to improve knowledge and make better decisions. However, much evidence shows that reasoning often leads to epistemic distortions and poor decisions. This suggests that the function of reasoning should be rethought. Our hypothesis is that the function of reasoning is argumentative. It is to devise and evaluate arguments intended to persuade. Reasoning so conceived is adaptive given the exceptional dependence of humans on communication and their vulnerability to misinformation. A wide range of evidence in the psychology of reasoning and decision making can be reinterpreted and better explained in the light of this hypothesis. Poor performance in standard reasoning tasks is explained by the lack of argumentative context. When the same problems are placed in a proper argumentative setting, people turn out to be skilled arguers. Skilled arguers, however, are not after the truth but after arguments supporting their views. This explains the notorious confirmation bias. This bias is apparent not only when people are actually arguing, but also when they are reasoning proactively from the perspective of having to defend their opinions. Reasoning so motivated can distort evaluations and attitudes and allow erroneous beliefs to persist. Proactively used reasoning also favors decisions that are easy to justify but not necessarily better. In all these instances traditionally described as failures or flaws, reasoning does exactly what can be expected of an argumentative device: Look for arguments that support a given conclusion, and, ceteris paribus, favor conclusions for which arguments can be found.

Behavioral and Brain Sciences contains not only the paper by Mercier and Sperber, but also a flurry of expert responses and then a response from the authors. SSRN does too, and there is a site devoted to this idea as well.

Mercier sent me a more user-friendly summary and is allowing me to repost parts of it:

Current philosophy and psychology are dominated by what can be called a classical, or ‘Cartesian’, view of reasoning. Even though this view goes back at least to some classical Greek philosophers, its most famous exposition is probably in Descartes. Put plainly, it’s the idea that the role of reasoning is to critically examine our beliefs so as to discard wrong-headed ones and thus create more reliable beliefs—knowledge. This knowledge is in turn supposed to help us make better decisions. This view is—we surmise—hard to reconcile with a wealth of evidence amassed by modern psychology. Tversky and Kahneman (and many others) have shown how fallible reasoning can be. Epstein (again, and many others) has shown that sometimes reasoning is unable to correct even the most blatantly incorrect intuitions. Others have shown that sometimes reasoning too much can make us worse off: it can unduly increase self-confidence, allow us to maintain erroneous beliefs, create distorted, polarized beliefs, and enable us to violate our own moral intuitions by finding excuses for ourselves.

We claim that the full import of these results has not been properly gauged, since most people still seem to accept, or at least fail to question, the classical, Cartesian assumptions.

Our theory—the argumentative theory of reasoning—suggests that instead of having a purely individual function, reasoning has a social and, more specifically, argumentative function. The function of reasoning would be to find and evaluate reasons in dialogic contexts—more plainly, to argue with others. Here’s a very quick summary of the evolutionary rationale behind this theory.

Communication is hugely important for humans, and there is good reason to believe that this has been the case throughout our evolution, as different types of collaborative—and therefore communicative—activities already played a big role in our ancestors’ lives (hunting, collecting, raising children, etc.). However, for communication to be possible, listeners have to have ways to discriminate reliable, trustworthy information from potentially dangerous information—otherwise speakers would be free to abuse them through lies and deception. One way listeners and speakers can improve the reliability of communication is through arguments. The speaker gives a reason to accept a given conclusion. The listener can then evaluate this reason to decide whether she should accept the conclusion. In both cases, they will have used reasoning—to find and to evaluate a reason, respectively. If reasoning does its job properly, communication has been improved: a true conclusion is more likely to be supported by good arguments, and therefore accepted, thereby making both the speaker—who managed to convince the listener—and the listener—who acquired a potentially valuable piece of information—better off.

That’s the positive side of things. But there’s a huge negative side:

If reasoning evolved so we can argue with others, then we should be biased in our search for arguments. In a discussion, I have little use for arguments that support your point of view or that rebut mine. Accordingly, reasoning should display a confirmation bias: it should be more likely to find arguments that support our point of view or rebut those that we oppose. Short (but emphatic) answer: it does, and very much so. The confirmation bias is one of the most robust and prevalent biases in reasoning. This is a very puzzling trait of reasoning if reasoning had a classical, Cartesian function of bettering our beliefs—especially as the confirmation bias is responsible for all sorts of mischief…. Interestingly, the confirmation bias need not be a drag on a group’s ability to argue. To the extent that it is mostly the production, and not the evaluation of arguments that is biased—and that seems to be the case—then a group of people arguing should still be able to settle on the best answer, despite the confirmation bias… As a matter of fact, the confirmation bias can then even be considered a form of division of cognitive labor: instead of all group members having to laboriously go through the pros and cons of each option, if each member is biased towards one option, she will find the pros of that option, and the cons of the others—which is much easier—and the others will do their own bit.

And worse still:

When people reason alone, there will often be nothing to hold their confirmation bias in check. This might lead to distortions of their beliefs. As mentioned above, this is very much the case. When people reason alone, they are prone to all sorts of biases. For instance, because they only find arguments supporting what they already believe in, they will tend to become even more persuaded that they are right or will develop stronger, more polarized attitudes.

I think this evolutionary perspective may explain one hell of a lot. Picture us around the campfire, arguing in a group about whether we need to move the camp before winter comes on, or stay in this location a little longer. Mercier and Sperber say we’re very good at that, and that the group will do better than a lone individual at making such a decision, thanks to the process of group reasoning, where everybody’s view gets interrogated by those with differing perspectives.

But individuals, or groups that are very like-minded, may go off the rails when using reasoning. The confirmation bias, which makes us so good at seeing evidence to support our views, also leads us to ignore contrary evidence. Motivated reasoning, which lets us quickly pull together the arguments and views that support what we already believe, makes us impervious to changing our minds. And groups where everyone agrees are known to become more extreme in their views after “deliberating”; this is the problem with much of the blogosphere.

When it comes to reasoning, then, what’s good for the group could be very bad for the individual, or for the echo chamber.

CATEGORIZED UNDER: Motivated Reasoning

Comments (30)

  1. The committee process, although universally reviled, is in fact a very good way of deriving a good conclusion from the flawed inputs of diversely interested parties, each one with parochial interests.

  2. Gaythia

    It seems to me that humans do have an intrinsic desire to classify things, and to develop cultures that provide social constraints and attempt to explain the world at large.

I believe that the study by Mercier and Sperber doesn’t rise above its own cultural constraints. This material would be much better at explaining the behavior and motivations of, say, members of the American Bar Association than it would a convent of Tibetan monks.

Humans are capable of horrifically violent reactions based on emotional, gut-level “Us vs. Them” thinking. We are also capable of constructing complex forms of organized reasoning, science for example. We can also build societal structures that create a sense of community that is supportive and does not crush individual differences. Or not.

    Jared Diamond, among others, has shown how precarious and prone to collapse our human civilizations can be. So certainly, thinking about ways that our reasoning skills can be best harnessed productively, and not derailed, is useful.

Since we are members of a rich and advanced culture, we have many resources to draw on in choosing methods of reasoning that are deliberative and evidence-based.

I think that this blog is at its best when it is operating in that rich and thoughtful mode, in posts like this one or ones regarding the framing of scientific arguments. But frequently it sinks to a lower level, as for example in the repeated posts attacking vaccine denial, which, in my opinion, would be better framed as “how can our society best serve public health and effectively control the spread of infectious diseases?”

    But that gets back to a point in favor of the work by Mercier and Sperber: that which is emotionally charged is seen as exciting and engaging, whereas a more thoughtful discussion might be less interesting and gripping, but still more beneficial.

  3. Matt B.

So if we want people to be more objective, our goal should be to find a way to work within the argumentative-reasoning framework to convince people that they should reason objectively, so that it becomes a belief they can’t shake (especially because no one will argue against it). This would be like the fact that the U.S. Constitution derives its authority from the Articles of Confederation, even though doing so caused the Articles to no longer be in effect.

    The last quoted paragraph shows that we should discuss politics and religion more. People like the Unabomber, living alone in the woods, can easily go crazy because no one is there to say, “Whoa, where’d you get that idea?”

    I love that the BBS uses an optical illusion in its logo. It seems very appropriate.

  4. Nullius in Verba

    It depends on whether you’re talking about reasoning or fallacious reasoning. (Or heuristics.)

The evolutionary reason for human reasoning being so often fallacious is not so much that we use it to persuade others (although that does come into it) but that correct reasoning is very slow, very expensive, difficult to implement (especially in a computer made of meat designed by trial and error), and has less coverage than heuristic reasoning (there are many questions we cannot even attempt to answer with strict logic). A method that gives an answer that is probably right, a larger-than-average percentage of the time, still confers a strong evolutionary advantage. When it comes to finding the banana before your rivals, relatively crude reasoning is good enough. More sophisticated reasoning that spends its time cataloguing possible sources of deductive error and eliminating them one by one will only get there after the lucky guesser has already been and gone. Human reasoning is designed to be fast first, and correct second.

Methodological scepticism recognises the fallibility of sensory perception, and of ordinary reasoning. Descartes certainly recognised that his reasoning could lead him into error. (“But as I reach this conclusion I am amazed at how feeble and error-prone my mind is. For although I am thinking all this just to myself, silently and without speaking, nonetheless the actual words bring me up short, and I am almost tricked by ordinary ways of talking.”) The answer he came up with was to first reject everything subject to the least doubt, then build a more reliable base of knowledge on this foundation using axiomatic methods (i.e. similar to Euclid’s geometry) and valid logical inference. Descartes’ implementation of his programme leaves a lot to be desired by modern standards – he first proved the existence of a perfect God, and then argued that God could not be a deceiver and must therefore have given him a rational faculty that could find only truth if used properly. But there’s no doubt that he recognised the possibility of error arising from what seemed to the reasoner to be correct reasoning. It was his motive in developing his method.

The modern viewpoint is that valid reasoning can be used to construct reliable beliefs – but that without extensive training humans naturally use invalid methods that can often give wrong answers, without even realising it. The rationalist school therefore first requires that you learn how to reason correctly before using it, and then, recognising that even then one is still fallible, requires that you engage in sceptical debate, challenge, and critical argument to test your reasoning’s correctness, i.e. to critically examine our beliefs so as to discard wrong-headed ones. Thus the critical examination is not the reasoning itself, but the test to see if the reasoning is sound, given our inbuilt tendency to reason fallaciously.

    There is thus no conflict between Enlightenment rationalism and modern psychological research – indeed, systematic scepticism is specifically designed to address the human failing that psychologists are supposedly only now discovering. (I think actually they already knew about it a long time ago.)

    The real conflict is between people who think they are being rational, who are finding the rational method of sceptical challenge does not produce the clear victory for their side that they expected, and figure therefore that there must be something wrong with the method. The sceptics use reasoning to win the arguments, and so reasoning is clearly not good at finding the truth.

    The interesting question here is what alternative is being proposed? To abandon reason? To abandon systematic scepticism, critical challenge? Or stop trying to win arguments? I’m not sure.

    I fear the answer is to tell believers not to trust reasoning by sceptics that wins arguments – to warn them that the enemy will use convincing (but of course misleading) logic, so they should beware of any who use debate-winning logical arguments, and not listen to their siren song. But maybe that is being unduly cynical. I shall await alternative suggestions with interest.

  5. Nullius, a great defense of the “traditional” view of reasoning or whatever …

    BUT, I’m going to argue with you.

    First, the “reasoning as argumentation” model I think explicitly says this is NOT, NOT, NOT, a “human failing.” Rather, it is, if I may, “human ISness.”

    I won’t propose abandoning “rationalism,” but I will say that it is even more unnatural than you may want to admit.

    And, that IS a conflict with Cartesianism, which postulates rationality is a cornerstone of homo sapiens.

    Sorry, but, either you don’t get the degree of implications this involves, or …

    You DO, unconsciously, understand precisely what is up and by your conscious argumentation, actually support the fact at hand.

  6. Of course, maybe I have reasons for my argumentation. And, I do.

    One is to get people to accept that a Cartesian, or Platonic, idea of humans as homo rationalis simply doesn’t exist. Not even in the most notable of today’s skeptics. Witness Lawrence Krauss defending his billionaire hedge fund buddy. Or Shermer on climate change denialism. If we riffed through Mooney’s writings, we’d find examples on him, too.

    That said, should we stop trying to be more rational? No. But, we should recognize that even apparent growth in rationality may have ulterior motives.

  7. Nullius in Verba

    #8,

    Please, go ahead. People arguing with me is what I come here for. :-)

    I think I agree with your point about heuristic methods not being a human “failing”, assuming that you mean what I think you mean. It depends on what you’re trying to do. If, as I suggested in the evolutionary paragraph, you want something reasonably correct but very fast, then human heuristic methods are almost certainly far superior to strict logic. So when measured against the evolutionary requirement, it’s certainly not a failing. I was speaking of it in the sense Chris was using it in the title to his post – how good is it at finding the truth?

Rationalism is of course unnatural – to the extent that there is a meaningful distinction. (Humans, and everything they do, are as “natural” as the activities of any other animal.) More precisely, it is a carefully constructed artefact, a method that requires extensive training and experience to master, and that was only arrived at after centuries of thought, controversy, and examination.

Did Descartes postulate rationality as the cornerstone of homo sapiens? He said: “What then did I believe I was before? A man. But what is a man? Shall I say ‘a rational animal’? No, for then I should have to investigate what an animal is, what rationality is, and so on; from one question I would slide down the slope to harder ones; and I do not have time to waste now on subtleties of this sort.” That isn’t an answer, but at the same time, it seems that if he asserts it at all it is as a conclusion rather than a foundation. He does appear to count the capability for rationality as an inbuilt and distinguishing feature of man, but he also makes it clear that man is often irrational and in error.

I certainly don’t agree with Descartes on everything – and he is far from representative of Enlightenment philosophy. (A point I may make again regarding that Lakoff interview Chris has just put up – George gets Descartes completely mixed up.) He first popularised or developed a number of crucial points, which led on to other important developments, but his philosophy in detail is neither complete nor correct. There’s a lot more to the Enlightenment than just Cartesian scepticism.

    There are many implications – but I have no idea which particular ones you’re referring to. If I haven’t covered it already in what I said, please do expand on that.

  8. Nullius, thanks for the response.

    First, thanks for pointing out you’re on the same page on the “failing” issue, at least in part.

    Since, contra certain Pop Ev Psychers, we evolved various mental skills over different environments and different times in the past, I would say it can’t be called a “failing” today either. We didn’t evolve *for* any particular time in the future, just to better adapt to the time at hand.

So, in light of human “reasoning” and today’s issues, I don’t consider the relative lack of rationality a “failure”, primarily because I think the concept is inapplicable. So, to that, per Douglas Hofstadter, I apply the Zen Buddhist “mu.”

    Next …

As you note, rationalism is a skill. And, per your note on Descartes, the question is: how easy or difficult is this skill to learn? Is it like learning to throw a bowling ball at a set of pins, or to hit a major-league curveball? More like high-school algebra or advanced differential equations? I opt for the latter. (Of course, it may be some point in between.) Where you and others fall on this question relates in part to the “failure” issue above, and to broader issues about how much we should expect from rationality.

    Agreed on Descartes otherwise. What you cite is a clear example of hand-waving. To some degree, so is his cogito, ergo sum.

One implication, per the link on the argumentative view of reasoning, is what happens when two groups each come to a theoretically well-reasoned decision within their own group: how likely is it that we can then apply “metarationality” between them?

    Examples would be global warming and vaccination. In both cases (largely for mercenary reasons in the first, but largely for sincere reasons in the second) “doubters” have focused on the uncertainty issue, while to some semi-deliberate degree, at least, the scientific side has downplayed the issue.

Some examples are already at hand. Kahneman et al. in behavioral economics have *hugely* debunked the idea of man as a rational actor there. There are many implications, but basically none of them have made it to the level of changing political policy.

Reasoning within a group, vs. alone, vs. “to” another group has implications for sociology and out-groups, and for how much or how little we can expect people’s behavior, and even more their attitudes, to change.

Thanks a lot for all the comments. Here are a few quick thoughts.

    @Gaythia
>I believe that the study by Mercier and Sperber doesn’t rise above its own cultural constraints. This material would be much better at explaining the behavior and motivations of, say, members of the American Bar Association than it would a convent of Tibetan monks.

Thank you for the opportunity to add that I have a paper exactly on that issue. In fact people argue everywhere, and it seems that they reason better when they do. As for Tibetan monks, a lot of their training comes in the form of debates (see for instance Liberman’s Dialectical Practice in Tibetan Philosophical Culture). My paper (On the Universality of Argumentative Reasoning) can be found here:

    http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1784902

    @Matt B
    >our goal should be to find a way to work within the argumentative-reasoning framework to convince people that they should reason objectively

Unfortunately, people can’t do that: they don’t control their biases; they are (usually) not biased on purpose.

    >People like the Unabomber, living alone in the woods, can easily go crazy because no one is there to say, “Whoa, where’d you get that idea?”

    That’s a very good example indeed (in fact, we have it in the first draft of a book on the topic we’re working on).

    @ Nullius in Verba
    >The evolutionary reason for human reasoning being so often fallacious is not so much that we use it to persuade others (although that does come into it) but that correct reasoning is very slow, very expensive, difficult to implement (especially in a computer made of meat designed by trial and error), and has less coverage that heuristic reasoning (there are many questions we cannot even attempt to answer with strict logic).

Your argument explains why we don’t need reasoning for individual purposes. But it doesn’t explain the existence of reasoning at all, given that it’s so bad at doing the things it’s supposed to do (in the Cartesian view). And we would argue that our hypothesis does.

    >The modern viewpoint is that valid reasoning can be used to construct reliable beliefs – but that without extensive training humans naturally use invalid methods that can often give wrong answers, and not even realise it.

    I’m not sure that’s an accurate description, and I’m quite sure it’s not a good remedy: training in reasoning does little to attenuate biases (they are all over the place in the greatest philosophers for instance).

    >I fear the answer is to tell believers not to trust reasoning by sceptics that wins arguments – to warn them that the enemy will use convincing (but of course misleading) logic, so they should beware of any who use debate-winning logical arguments, and not listen to their siren song. But maybe that is being unduly cynical. I shall await alternative suggestions with interest.

What we’re suggesting is that in a normal debate, you try to convince the other guy when you produce arguments, but you’re mostly objective when you evaluate arguments (after all, if you’re better off changing your mind, you’d rather know about it). So a normal debate is the solution, and it has worked very well for ages. There is no need for anything new, really; just fix institutions that don’t rely enough on genuine debate.

    @SocraticGadfly (#8)
    That’s pretty much what I would have said!

    Thanks in particular to the last two commenters for a very interesting debate! I hope the truth (or something vaguely like it) emerges…

  10. Blamer ..

    This argumentative theory of reasoning (of the group) seems to fit well with the concept of motivated reasoning (of the individual).

Since arguers tend not to be persuaded, and generally not all group members are involved in an argument, what does the research suggest about the all-important bystanders?

    For example, I’m sympathetic to Nullius in Verba after his exchange with SocraticGadfly. To what extent am I influenced by style over substance? And at the outset was I more biased towards the NiV view than I was aware of?

  11. @Hugo Mercier … didn’t think to add your last point myself, namely, that greater skill in rationality doesn’t necessarily eliminate biases. The Platonic image painted of Socrates vis-a-vis the Sophists, versus the reality of such, comes to mind.

  12. Åse

    Ok – not so erudite perhaps, but it immediately brought this to mind

    http://youtu.be/teMlv3ripSM

    (Monty Pythons argument skit)

  13. This is a very interesting post. The “persuasion” hypothesis of reasoning raises the possibility that argumentation/reasoning may have evolved as mechanisms facilitating “repression of competition” (see, for example, Steve Frank’s work). As discussed by Bernie Crespi and others, persuasion, coercion, and force function to repress competition in groups (e.g., by imposing cooperation among group members). Thus, there is potential to incorporate Mercier and Sperber’s new hypothesis into the broader field of evolutionary biology and behavioral ecology (e.g., by considering inclusive fitness consequences of reasoning/argumentation; by investigating phylogenetics of argumentation/persuasion; by evaluating argumentation/reasoning in a comparative context; by researching argumentation/reasoning and convergent evolution; etc.).

  14. Gaythia

    @Mercier #12, The paper you reference is very interesting, I am reading it. It seems to me that the crux of the issue is expressed in your statement: “There is no need for any new thing really, just fixing institutions that don’t rely enough on genuine debate.”

    How are we to go about defining and then setting up structures that encourage genuine debate?

    I would still argue that even within our own culture, an effective argument of a lawyer within a court of law is not always the same as those that would be given by a scientist citing the body of knowledge assembled as scientific evidence, or a religious scholar citing the authority of ancient texts and commentaries. And certainly cultures vary enormously as to their underlying assumptions as to the role of individuals and groups in society and how much emphasis to place on such things as “rugged individualism” and community spirit or compliance.

It seems to me that the “fix” in “just fixing institutions” is at the heart of what humans disagree about.

  15. @Blamer
    Yes, a large chunk of our paper is dedicated to motivated reasoning

    @Ase
    Haha, yes, I often use it in presentations (to illustrate the limits of mere contradiction and the advantages of arguments)

    @Clara B. Jones
    Indeed: our hypothesis is an evolutionary one, and we consider it to fall squarely within evolutionary psychology (again, cf. the paper)

    Thanks again for the comments!

  16. @Gaythia

    You’re right, it’s obviously more complicated. For instance, the difference you point out between the lawyer and the scientist is that the scientist, if she’s honest, only has an ‘implicit’ confirmation bias: reasoning is biased, but if she thinks of a counterargument (against her own ideas) despite the bias, she’ll say so. By contrast, the lawyer also has an explicit bias: even if she thinks of a counterargument, she won’t say it.

    @Clara B. Jones
    An addition to my first answer: unfortunately, we think that reasoning and argumentation are unique to humans, which severely restricts the range of evolutionary methods available (also, it doesn’t fossilize). So we have to rely on the standard method of EP: fit between structure and design. But I’ve also looked at development, to argue that the traits of reasoning we rely on in our argument are not learned (i.e. children exhibit the same pattern of performance as adults), and at other cultures, to make sure it’s not a Western recent quirk (cf. previous comment).

  17. Nullius in Verba

    #11,

Rationalism is partly easy to learn, and partly difficult. Some bits are relatively easy – anyone can memorise a list of common fallacies and biases and practice recognising them. Other bits are very hard – requiring sophisticated statistics and philosophy to even understand. I don’t expect the man in the street to be able to expound for long on Bayesian decision theory, but I shouldn’t have to keep explaining ‘affirming the consequent’, ‘argument from ignorance’, ‘argument ad hominem’, ‘argument from authority’, etc.

    “Examples would be global warming and vaccination. In both cases (largely for mercenary reasons in the first, but largely for sincere reasons in the second)”

    Generally speaking, they’re not mercenary in the first case either. But that’s another debate.

    #12,

    “But it doesn’t explain the existence of reasoning at all, given that it’s so bad at doing the things it’s supposed to do (in the Cartesian view).”

The existence of reasoning is to help us to find the banana first; it isn’t to enable philosophers to discover scientific truths. The Cartesian viewpoint has nothing at all to say on why reasoning exists (unless you take Descartes’ ideas about a beneficent God seriously), but it has a lot to say about how best to use it now that we have it.

I’m simplifying somewhat with the banana story, but it’s still true that the problems our brains evolved to solve are not the ones the Enlightenment philosophers were interested in, so any explanation of why we have this natural reasoning ability, and why it has the features and flaws it has, cannot be made with reference to the Cartesian viewpoint. It would be like arguing that we evolved opposable thumbs in order to be able to send text messages on mobile phones faster. (If we had, we’d surely have two thumbs…)

    “I’m quite sure it’s not a good remedy: training in reasoning does little to attenuate biases”

Training does attenuate biases, but imperfectly. But the real measure aiming to address bias was the other part of what I said: “recognising that even then one is still fallible, that you must engage in sceptical debate, challenge, and critical argument to test your reasoning’s correctness.”

    “So a normal debate is the solution, and it has worked very well for ever. There is no need for any new thing really, just fixing institutions that don’t rely enough on genuine debate.”

    Excellent idea!

  18. @21 Nullius … the existence of reasoning is to help us find the banana first? Isn’t that partially backing off your previous statements? That said, aside from bananas, and to riff on an older argument, some elements of what we today call “rationalism” may have originated as … spandrels?

  19. Nullius in Verba

    #22,

    “the existence of reasoning is to help us find the banana first? Isn’t that partially backing off your previous statements?”

    In what way? Which previous statements are you thinking of?

  20. Tom Upshaw

    Compare this work (http://www.amazon.com/Voltaires-Bastards-Dictatorship-Reason-West/dp/0679748199), which approaches perhaps the same subject from the perspective of sociology and history rather than psychology. The two may well be related, and if so, they truly illustrate the horrible large-scale mistakes that can result from the misapplication of reason driven by such confirmation bias. More contemporary examples, such as the war in Iraq, are easy to find.

  21. A fascinating discussion. I hope Dr. Mercier is still monitoring. I’d like your thoughts on a few observations:

    1. Is it sufficient to argue that because we reason ‘better’ socially than alone, reasoning therefore developed for social purposes? It seems there is a wide range of adaptive values to reasoning – consciously analyzing inputs to come up with a productive/adaptive choice – beyond just convincing the other guy or reinforcing your own views. Flawed as our decision making is when we’re on our own, we do apply reasoning to many tasks outside of social interactions. We certainly use it as PART of risk perception, for example (my area of interest).

    2. What does ‘better’ mean? There are endless examples of argumentative reasoning ending up in unresolved arguments and conflicts, with all sorts of really negative outcomes. It strikes me that the application of reasoning to the way we argue may simply be a tool that reinforces social cohesion, which is good for us social animals since the tighter the tribe, the better our chances. So we use reasoning to argue not to reach better decisions, or even to persuade others, but to circle the wagons against outsiders.

    Here I think about the Theory of Cultural Cognition (www.culturalcognition.net)…which has found that the views we choose – and argue to maintain – are shaped to conform with those with whom we most strongly identify and thereby enhance social cohesion within ‘our’ tribe. So the mental gymnastics/reasoning we do to come up with our view isn’t a tool for arguing, but for thinking things through ourselves in a way that strengthens our tribe and our tribe’s acceptance of us as a member in good standing, both of which improve our survival chances.

About Chris Mooney

Chris is a science and political journalist and commentator and the author of three books, including the New York Times bestselling The Republican War on Science--dubbed "a landmark in contemporary political reporting" by Salon.com and a "well-researched, closely argued and amply referenced indictment of the right wing's assault on science and scientists" by Scientific American--Storm World, and Unscientific America: How Scientific Illiteracy Threatens Our Future, co-authored by Sheril Kirshenbaum. They also write "The Intersection" blog together for Discover blogs. For a longer bio and contact information, see here.
