A paper on the psychology of religious belief, Paranormal and Religious Believers Are More Prone to Illusory Face Perception than Skeptics and Non-believers, came onto my radar recently. I used to talk a lot about the theory of religious cognitive psychology years ago, but my interest faded when it seemed that empirical results were relatively thin in relation to the system building (Ara Norenzayan’s work being an exception to this generality). The theory is rather straightforward: religious belief is a naturally evoked consequence of the general architecture of our minds. For example, gods are simply extensions of persons, and make natural sense in light of our tendency to anthropomorphize the world around us (this may have had evolutionary benefit, in that false positives in the detection of other agents were far less costly than false negatives; think an ambush by a rival clan).*
There are many things that a given individual believes which are ‘heterodox’ in their social circle. For example, I have long thought that intelligence tests are predictive of life outcomes, and somewhat heritable in a genetic sense (both are true; the objection of skeptics usually rests on doubts about the construct itself). As I have explained here before, I did not always hold these views. Rather, when I was in seventh grade a teacher who mentored me took me aside after class and suggested that perhaps some of my slower classmates were not quite as lazy as I so obviously presumed (I tended to get impatient during mandatory group projects). When I was 5 years old and starting kindergarten, my command of English was rather weak, and my mother explained to me that Americans were a very smart people. By the end of the year I was excelling. Throughout my elementary school years I frankly had a smugness about me, because I accepted what my parents told me: that academic outcome is a function of the virtue of effort. And I had quite a bit of virtue, if the results were any gauge.
But as I said, it is the fashion today to reject I.Q. Usually people put intelligence in air quotes. The opposite of intelligence, stupidity, is also not well acknowledged. Just as I took my realized intelligence to be a mark of my virtue (falsely, as my virtue and moral compass are distinct from, and perhaps even at some cross-purposes with, my analytic powers), I perceived stupidity as evidence of sloth and low moral character. This is just not so.
I.Q. is probably a hot-potato topic because of its associations with realized group differences, mostly race, but to some extent class. I think that the phenomenon is real and important, but that may not matter. I’ve been sobered by the realization recently that Soviet Communism persisted for 70 years. I don’t bring this example up to analogize skepticism of I.Q. with Communism, but to illustrate that even patently grotesque and false views can persist for decades beyond their “sell-by” date. And yet sometimes it turns out that I’m not the only person out there who thinks that some people are smart, and some people are stupid. Here’s Felix Salmon, Who is speaking for the poor?:
Ed Yong has a piece in Nature on the problems of confirmation bias and replication in psychology. Yong notes that “It has become common practice, for example, to tweak experimental designs in ways that practically guarantee positive results.” The way this has been explained to me is that you perform an experiment and get a p-value of > 0.05 (not statistically significant). You know that your hunch is warranted, so you just modulate the experiment and hope that the p-value comes in at < 0.05, and then you have publishable results! Obviously this is not just a problem in psychology; John Ioannidis has famously focused on medicine. But here’s a chart which shows that positive results are particularly prevalent in psychology:
Several readers have pointed me to this amusing story, Court OKs Barring High IQs for Cops:
A man whose bid to become a police officer was rejected after he scored too high on an intelligence test has lost an appeal in his federal lawsuit against the city.
“This kind of puts an official face on discrimination in America against people of a certain class,” Jordan said today from his Waterford home. “I maintain you have no more control over your basic intelligence than your eye color or your gender or anything else.”
Jordan, a 49-year-old college graduate, took the exam in 1996 and scored 33 points, the equivalent of an IQ of 125. But New London police interviewed only candidates who scored 20 to 27, on the theory that those who scored too high could get bored with police work and leave soon after undergoing costly training.
The average score nationally for police officers is 21 to 22, the equivalent of an IQ of 104, or just a little above average.
But the U.S. District Court found that New London had “shown a rational basis for the policy.” In a ruling dated Aug. 23, the 2nd Circuit agreed. The court said the policy might be unwise but was a rational way to reduce job turnover.
In this article, we accomplish two things. First, we show that despite empirical psychologists’ nominal endorsement of a low rate of false-positive findings (≤ .05), flexibility in data collection, analysis, and reporting dramatically increases actual false-positive rates. In many cases, a researcher is more likely to falsely find evidence that an effect exists than to correctly find evidence that it does not. We present computer simulations and a pair of actual experiments that demonstrate how unacceptably easy it is to accumulate (and report) statistically significant evidence for a false hypothesis. Second, we suggest a simple, low-cost, and straightforwardly effective disclosure-based solution to this problem. The solution involves six concrete requirements for authors and four guidelines for reviewers, all of which impose a minimal burden on the publication process.
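The mechanism is easy to demonstrate yourself. Below is a toy simulation (my own sketch, not the paper’s code) of one researcher degree of freedom, optional stopping: test after 20 subjects per group, and if the result isn’t significant, add 10 more per group and re-test, up to three times. Both groups are drawn from the same distribution, so every “significant” result is a false positive:

```python
import math
import random

random.seed(1)

def p_value(a, b):
    """Two-sided z-test for equal means, with known variance (= 1)."""
    n = len(a)
    z = (sum(a) / n - sum(b) / n) / math.sqrt(2.0 / n)
    return math.erfc(abs(z) / math.sqrt(2))

def run_trial(hack):
    # Both groups come from the SAME distribution, so any
    # "significant" difference is a false positive by construction.
    a = [random.gauss(0, 1) for _ in range(20)]
    b = [random.gauss(0, 1) for _ in range(20)]
    if p_value(a, b) < 0.05:
        return True
    if not hack:
        return False
    # The "tweak": collect 10 more subjects per group and test again,
    # up to three times -- stopping as soon as p < 0.05.
    for _ in range(3):
        a += [random.gauss(0, 1) for _ in range(10)]
        b += [random.gauss(0, 1) for _ in range(10)]
        if p_value(a, b) < 0.05:
            return True
    return False

trials = 5000
honest = sum(run_trial(hack=False) for _ in range(trials)) / trials
hacked = sum(run_trial(hack=True) for _ in range(trials)) / trials
print(f"honest false-positive rate: {honest:.3f}")  # near the nominal 0.05
print(f"hacked false-positive rate: {hacked:.3f}")  # noticeably higher
```

With no flexibility the false-positive rate sits near the nominal 5%; with just this one degree of freedom it climbs well above that, which is the paper’s point.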
Since the paper is behind a paywall, I’ve cut & pasted the solutions below:
Ms. Aoi, and others like her, are the secret of a winning formula stumbled upon by Maxima Pictures, the production house where Mr. Hidayat is an executive producer. For two years, Maxima has made some of Indonesia’s most popular domestic films based on a simple premise: that many in Muslim-majority Indonesia will pay to see foreign porn stars perform — clothed — in local films. Just don’t expect Indonesians to own up to it.
“We’re hypocrites,” said Mr. Hidayat, who is a Muslim. “People know who they are, but they won’t admit it. It’s a love-hate thing.”
This sort of “counter-intuitive” behavior makes total sense in light of the work reported in Paul Bloom’s How Pleasure Works: The New Science of Why We Like What We Like. People consume in a particular context. The hedonic experience can’t be isolated from its history and the prior facts (and expectations) you bring into it. This sort of insight is essential when we start talking about utilitarianism as if it’s a simple calculus.
I approached Sheril Kirshenbaum’s The Science of Kissing: What Our Lips Are Telling Us with some trepidation and excitement. The former is a consequence of my hypochondria and its associated germophobia. I have no aversion to kissing in my own life (apologies for divulging personal information), but I did have some worries about having to read about other humans engaged in such an act of hygienic daring. And yet I was excited because I am interested in multidisciplinary explorations of human behavior. And of course I was familiar with the author’s oeuvre, and was expecting an engaging and wide-ranging exploration of the topic at hand.
I was not disappointed. The Science of Kissing is an intellectual full-court press; every conceivable discipline of relevance is brought into the mix. History, ethnography, ethology, neuroscience, evolutionary biology, physiology, and epidemiology, all receive attention, to name just a few of the more prominent lenses which the author fits over the course of the narrative. In other words you’re presented with an intellectual buffet. A well rounded meal will require you to sample widely, but if the lack of punctiliousness of Romans in matters of hygiene is not to your taste, you may find a discussion of the latest neuroimaging techniques and their application to matters of behavioral response more to your liking.
A few days ago Kevin Drum proposed a “Human Nature Top 10.” Here are the criteria:
Not personal pet theories, but aspects of human nature that are (a) widely accepted and relatively noncontroversial among professionals, and (b) underappreciated by most of us. They can come from anywhere: economics, psychology, sociology, politics, anthropology, whatever.
He offers two: loss aversion & regression to the mean. These are excellent. I’ll chip in with two. Mine are probably a touch more tendentious, but I’m going to offer them anyway (I don’t even know what the right terms are, but I think they’re real phenomena which I’ve seen alluded to in the literature):
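Regression to the mean in particular is easy to see in simulation. A minimal sketch with made-up numbers: each “test score” is a stable ability plus transient noise, and the top decile on a first test lands markedly closer to the population mean on a retest, with no change in ability at all:

```python
import random

random.seed(42)

N = 100_000
# Observed score = stable ability + transient noise (both hypothetical).
ability = [random.gauss(100, 10) for _ in range(N)]
test1 = [a + random.gauss(0, 10) for a in ability]
test2 = [a + random.gauss(0, 10) for a in ability]

# Select the top 10% of scorers on the first test...
cutoff = sorted(test1)[int(0.9 * N)]
top = [i for i in range(N) if test1[i] >= cutoff]

# ...and see where the same people land on the second test.
mean1 = sum(test1[i] for i in top) / len(top)
mean2 = sum(test2[i] for i in top) / len(top)
print(f"top decile, test 1: {mean1:.1f}")   # far above 100
print(f"same people, test 2: {mean2:.1f}")  # regressed toward 100
```

Nothing about anyone changed between the two tests; the selected group was simply lucky the first time, and luck doesn’t repeat.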
One of the issues when talking about the effect of environment and genes on behavioral and social outcomes is that the entanglements are so complicated. People of various political and ideological commitments tend to see the complications as problems for the other side, and yet are often supremely confident of the likely efficacy of their predictions, based on models of which they shouldn’t even be too sure. That is why cross-cultural studies are essential. Just as psychology has relied too heavily on WEIRD data sets, so those interested in social science need to see if their models are robust across cultures (I’m looking at you, economists!).
That is why this ScienceDaily headline, Family, Culture Affect Whether Intelligence Leads to Education, grabbed my attention. The original paper is Family Background Buys an Education in Minnesota but Not in Sweden:
Educational attainment, the highest degree or level of schooling obtained, is associated with important life outcomes, at both the individual level and the group level. Because of this, and because education is expensive, the allocation of education across society is an important social issue. A dynamic quantitative environmental-genetic model can help document the effects of social allocation patterns. We used this model to compare the moderating effect of general intelligence on the environmental and genetic factors that influence educational attainment in Sweden and the U.S. state of Minnesota. Patterns of genetic influence on educational outcomes were similar in these two regions, but patterns of shared environmental influence differed markedly. In Sweden, shared environmental influence on educational attainment was particularly important for people of high intelligence, whereas in Minnesota, shared environmental influences on educational attainment were particularly important for people of low intelligence. This difference may be the result of differing access to education: state-supported access (on the basis of ability) to a uniform higher-education system in Sweden versus family-supported access to a more diverse higher-education system in the United States.
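For readers unfamiliar with the quantitative-genetic bookkeeping behind abstracts like this: twin designs partition trait variance into additive genetic (A), shared environmental (C), and unique environmental (E) components. A crude back-of-the-envelope version is Falconer’s method; this is not the paper’s dynamic model, and the twin correlations below are invented purely for illustration:

```python
def falconer_ace(r_mz, r_dz):
    """Crude ACE variance decomposition from identical (MZ) and
    fraternal (DZ) twin correlations, via Falconer's formulas:
    A = 2(r_mz - r_dz), C = 2*r_dz - r_mz, E = 1 - r_mz."""
    a2 = 2 * (r_mz - r_dz)   # additive genetic share of variance
    c2 = 2 * r_dz - r_mz     # shared (family) environment share
    e2 = 1 - r_mz            # unique environment + measurement error
    return a2, c2, e2

# Hypothetical correlations for an educational-attainment-like trait:
a2, c2, e2 = falconer_ace(r_mz=0.75, r_dz=0.55)
print(round(a2, 2), round(c2, 2), round(e2, 2))  # 0.4 0.35 0.25
```

The paper’s point is that a decomposition like this is not a constant of nature: the C term can loom large for high-ability people in one institutional setting (Sweden) and for low-ability people in another (Minnesota).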
Sometimes books advertise themselves very well with their title. The Invisible Gorilla: And Other Ways Our Intuitions Deceive Us is one of those books. Alternatively it could have been titled: “Giving thinking a second chance.” Or, with an eye toward pushing copies: “Why everything Malcolm Gladwell tells you is crap.” And finally, a more highbrow possibility: “Reflection: man’s greatest invention.”
The “hook” for The Invisible Gorilla is the experiment which goes colloquially by the same name. The authors of the book, Christopher Chabris and Daniel Simons, actually wrote the paper Gorillas in our midst: sustained inattentional blindness for dynamic events (though they note that the basic insight goes back to the 1970s). Here’s a YouTube clip illustrating Chabris & Simons’ set up. Despite the eye-catching way the authors grab your attention, the core message of The Invisible Gorilla is often very Plain Jane: thinking is hard, it yields real results, and beware of short-cuts. Many sections of the book read as counterpoints to the counterintuitive defenses of intuition which Malcolm Gladwell presents in Blink: The Power of Thinking Without Thinking (Gladwell as it happens played a key role in popularizing knowledge of “the invisible gorilla” phenomenon). Despite being “sexed up” in the past few decades, the defense of intuition, of “gut,” has a long intellectual history. For every Kant there is a Wang Yangming. And yet the borderlands between intuition and deduction, reflex and reflection, can often be gray. I would argue that much of human culture actually emerges from rational extensions of intuition. David Hume famously asserted reason’s slavery to passion, but I think a less grand way of characterizing the nature of different aspects of cognition is that they complement and supplement each other (see How We Decide).
Harvard University psychologist Marc Hauser — a well-known scientist and author of the book “Moral Minds’’ — is taking a year-long leave after a lengthy internal investigation found evidence of scientific misconduct in his laboratory.
The findings have resulted in the retraction of an influential study that he led. “MH accepts responsibility for the error,’’ says the retraction of the study on whether monkeys learn rules, which was published in 2002 in the journal Cognition.
Two other journals say they have been notified of concerns in papers on which Hauser is listed as one of the main authors.
It is unusual for a scientist as prominent as Hauser — a popular professor and eloquent communicator of science whose work has often been featured on television and in newspapers — to be named in an investigation of scientific misconduct. His research focuses on the evolutionary roots of the human mind.
Hauser is a prominent public intellectual. Here’s his Edge page. Obviously problems in some aspects of his work don’t necessarily invalidate all his findings, but it doesn’t look good for his credibility. This sort of incident points to the importance of trust within the culture of science. Collaborators and researchers who cited his results are scrambling to make sense of it all. I’ve cited Moral Minds in past posts, but I probably won’t be doing so now.
I highly recommend this discussion between Paul Bloom & Robert Wright. The topic under consideration is the psychology of pleasure, as reviewed in Bloom’s new book How Pleasure Works: The New Science of Why We Like What We Like. You can also find out about Bloom’s ideas in this exchange in Slate. The essentialism examined in Descartes’ Baby is being taken for another spin, though with a more precise focus. The bottom line is that pleasure is often contingent on more than proximate empirical sensory input; it depends on what you perceive to be the essence of the object of pleasure, even its history (or more crassly, its price). This truth may make the calculation project of the utilitarian heirs of Gottfried Leibniz pragmatically impossible.
Quotes because you might not find it counterintuitive, Self-Esteem Development From Young Adulthood to Old Age: A Cohort-Sequential Longitudinal Study:
The authors examined the development of self-esteem from young adulthood to old age. Data came from the Americans’ Changing Lives study, which includes 4 assessments across a 16-year period of a nationally representative sample of 3,617 individuals aged 25 years to 104 years. Latent growth curve analyses indicated that self-esteem follows a quadratic trajectory across the adult life span, increasing during young and middle adulthood, reaching a peak at about age 60 years, and then declining in old age. No cohort differences in the self-esteem trajectory were found. Women had lower self-esteem than did men in young adulthood, but their trajectories converged in old age. Whites and Blacks had similar trajectories in young and middle adulthood, but the self-esteem of Blacks declined more sharply in old age than did the self-esteem of Whites. More educated individuals had higher self-esteem than did less educated individuals, but their trajectories were similar. Moreover, the results suggested that changes in socioeconomic status and physical health account for the decline in self-esteem that occurs in old age.
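A quadratic trajectory peaking around 60 is just the vertex of a parabola. A minimal sketch with invented mean self-esteem values at three ages (not the study’s data): fit the parabola exactly through the three points and read off where it tops out.

```python
def quadratic_through(p1, p2, p3):
    """Leading coefficients (a, b) of the unique parabola
    y = a*x**2 + b*x + c passing through three points."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    s12 = (y2 - y1) / (x2 - x1)  # secant slope between points 1 and 2
    s13 = (y3 - y1) / (x3 - x1)  # secant slope between points 1 and 3
    a = (s13 - s12) / (x3 - x2)
    b = s12 - a * (x1 + x2)
    return a, b

# Hypothetical mean self-esteem (on an arbitrary scale) at three ages:
a, b = quadratic_through((25, 3.5), (60, 4.2), (100, 3.3))
peak_age = -b / (2 * a)  # vertex of the quadratic trajectory
print(f"trajectory peaks at about age {peak_age:.0f}")  # about 60
```

The study’s latent growth curve model is of course far richer than this, but the “peak at about age 60” claim reduces to exactly this kind of vertex calculation on the fitted coefficients.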
As a person well under 60 but slowly walking in that direction, I’m pretty heartened by this. On the other hand, I’m one of those people who tend to think that “self-esteem” is a bit overrated, so I’m not that heartened.
Via Randall Parker