Update: An ungated version of the paper.
I used to spend a lot more time talking about cognitive science of religion on this weblog. It was an interest of mine, but I’ve come to a general resolution of what I think on this topic, and so I don’t spend much time discussing it. But in the comments below there was a lot of fast & furious accusation, often out of ignorance. I personally find that a little strange. I’ve been involved in freethought organizations in the past, and so have some acquaintance with “professional atheists.” Additionally, I’ve also been a participant and observer of the internet freethought websites since the mid-1990s (yes, I remember when alt.atheism was relevant!). In other words, I know of whom I speak (and I am not completely unsympathetic to their role in the broader ecology of ideas).
But the bigger issue is a cognitive model of how religiosity emerges. Luckily for me a paper came out which speaks to many of the points which I alluded to, Divine intuition: Cognitive style influences belief in God:
This doesn’t mean that we should stop socializing on the web. But it does suggest that we reconsider the purpose of our online networks. For too long, we’ve imagined technology as a potential substitute for our analog life, as if the phone or Google+ might let us avoid the hassle of getting together in person.
But that won’t happen anytime soon: There is simply too much value in face-to-face contact, in all the body language and implicit information that doesn’t translate to the Internet. (As Mr. Glaeser notes, “Millions of years of evolution have made us into machines for learning from the people next to us.”) Perhaps that’s why Google+ traffic is already declining and the number of American Facebook users has contracted in recent months.
These limitations suggest that the winner of the social network wars won’t be the network that feels the most realistic. Instead of being a substitute for old-fashioned socializing, this network will focus on becoming a better supplement, amplifying the advantages of talking in person.
For years now, we’ve been searching for a technological cure for the inefficiencies of offline interaction. It would be so convenient, after all, if we didn’t have to travel to conferences or commute to the office or meet up with friends. But those inefficiencies are necessary. We can’t fix them because they aren’t broken.
One of the issues when talking about the effect of environment and genes on behavioral and social outcomes is that the entanglements are so complicated. People of various political and ideological commitments tend to see the complications as problems for the other side, yet are often supremely confident of the likely efficacy of their own predictions, based on models of which they shouldn't even be too sure. That is why cross-cultural studies are essential. Just as psychology has relied too heavily on WEIRD data sets, so those interested in social science need to see if their models are robust across cultures (I'm looking at you, economists!).
That is why this ScienceDaily headline, Family, Culture Affect Whether Intelligence Leads to Education, grabbed my attention. The original paper is Family Background Buys an Education in Minnesota but Not in Sweden:
Educational attainment, the highest degree or level of schooling obtained, is associated with important life outcomes, at both the individual level and the group level. Because of this, and because education is expensive, the allocation of education across society is an important social issue. A dynamic quantitative environmental-genetic model can help document the effects of social allocation patterns. We used this model to compare the moderating effect of general intelligence on the environmental and genetic factors that influence educational attainment in Sweden and the U.S. state of Minnesota. Patterns of genetic influence on educational outcomes were similar in these two regions, but patterns of shared environmental influence differed markedly. In Sweden, shared environmental influence on educational attainment was particularly important for people of high intelligence, whereas in Minnesota, shared environmental influences on educational attainment were particularly important for people of low intelligence. This difference may be the result of differing access to education: state-supported access (on the basis of ability) to a uniform higher-education system in Sweden versus family-supported access to a more diverse higher-education system in the United States.
A few years ago I was hearing a lot about mirror neurons. There was a hyped up article on The Edge website about them, MIRROR NEURONS and imitation learning as the driving force behind “the great leap forward” in human evolution. But I haven’t heard much since then, though I’m not a neuro nerd, so perhaps I’m out of the loop. So I pass on this link with interest, Single-Neuron Responses in Humans during Execution and Observation of Actions:
Direct recordings in monkeys have demonstrated that neurons in frontal and parietal areas discharge during execution and perception of actions…Because these discharges “reflect” the perceptual aspects of actions of others onto the motor repertoire of the perceiver, these cells have been called mirror neurons. Their overlapping sensory-motor representations have been implicated in observational learning and imitation, two important forms of learning. In humans, indirect measures of neural activity support the existence of sensory-motor mirroring mechanisms in homolog frontal and parietal areas…other motor regions…and also the existence of multisensory mirroring mechanisms in nonmotor region…We recorded extracellular activity from 1177 cells in human medial frontal and temporal cortices while patients executed or observed hand grasping actions and facial emotional expressions. A significant proportion of neurons in supplementary motor area, and hippocampus and environs, responded to both observation and execution of these actions. A subset of these neurons demonstrated excitation during action-execution and inhibition during action-observation. These findings suggest that multiple systems in humans may be endowed with neural mechanisms of mirroring for both the integration and differentiation of perceptual and motor aspects of actions performed by self and others.
ScienceDaily has a hyped-up headline, First Direct Recording Made of Mirror Neurons in Human Brain.
skepticcritic has much more.
The Evolution Of Symbolic Language by Terrence Deacon and Ursula Goodenough. Deacon’s The Symbolic Species: The Co-Evolution of Language and the Brain is a book I liked a great deal, though in hindsight I don’t think I had the background to appreciate it in any depth (nor do I now).
Social Cognition in Dogs, or How did Fido get so smart?. This you know:
Domesticated dogs seem to have an uncanny ability to understand human communicative gestures. If you point to something the dog zeroes in on the object or location you’re pointing to (whether it’s a toy, or food, or to get his in-need-of-a-bath butt off your damn bed and back onto his damn bed). Put another way, if your attention is on something, or if your attention is directed to somewhere, dogs seem to be able to turn their attention onto that thing or location as well.
Amazingly, dogs seem to be better at this than primates (including our nearest cousins, the chimpanzees) and better than their nearest cousins, wild wolves.
But there are two explanations for how/why dogs are better than primates at this task:
And so it was that biological anthropologist Brian Hare, director of the Duke University Canine Cognition Center, wondered: did dogs get so smart because of direct selection for this ability during the domestication of dogs, or did this apparent intelligence evolve, in a sense, by accident, because of selection against fear and aggression?
I didn’t even consider that it would be anything except for direct selection. In any case, read the whole post for a run-down of the paper, but here’s the blogger’s conclusion:
Compared with notable successes in the genetics of basic sensory transduction, progress on the genetics of higher level perception and cognition has been limited. We propose that investigating specific cognitive abilities with well-defined neural substrates, such as face recognition, may yield additional insights. In a twin study of face recognition, we found that the correlation of scores between monozygotic twins (0.70) was more than double the dizygotic twin correlation (0.29), evidence for a high genetic contribution to face recognition ability. Low correlations between face recognition scores and visual and verbal recognition scores indicate that both face recognition ability itself and its genetic basis are largely attributable to face-specific mechanisms. The present results therefore identify an unusual phenomenon: a highly specific cognitive ability that is highly heritable. Our results establish a clear genetic basis for face recognition, opening this intensively studied and socially advantageous cognitive trait to genetic investigation.
In other words, the strength of face recognition does not seem to track other intelligence test results much at all (including tests which measure verbal and visual memory). It appears to be a domain-specific competency rather than something emerging out of general intelligence. And the variation in face recognition ability is highly heritable.
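The twin correlations reported in the abstract can be turned into a rough heritability estimate with Falconer's formula, h² = 2(r_MZ − r_DZ). A minimal sketch using the quoted correlations (with the usual caveats of the formula, e.g. the equal-environments assumption):

```python
# Falconer's formula: a rough narrow-sense heritability from twin correlations.
# MZ twins share ~100% of segregating genes, DZ twins ~50%, so doubling the
# difference in their correlations estimates the additive genetic share.
def falconer_h2(r_mz: float, r_dz: float) -> float:
    return 2 * (r_mz - r_dz)

# Correlations reported in the face-recognition abstract above.
h2 = falconer_h2(r_mz=0.70, r_dz=0.29)
print(round(h2, 2))  # 0.82
```

This back-of-the-envelope figure is consistent with the paper's claim of a "high genetic contribution" to face recognition ability.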
What’s going on here? A reasonable guess for me is that the ability to recognize many, many, different faces isn’t something that came up for most of human history. Even in a pre-modern village you’d see the same people over and over. By contrast, if you work in sales you probably need to juggle a lot of faces & names to be successful.
Remember that if a quantitative trait is highly heritable, then by definition directional selection has not been operating to drive the underlying genes to fixation; otherwise the population would be monomorphic in trait value. In plain English: if there had been a huge benefit in the past to recognizing hundreds of faces very well, then we would all be able to recognize hundreds of faces to roughly the same extent. As it is, the selection history for face recognition has to be more complex, with any selection applicable being some sort of balancing selection.
Citation: Jeremy B. Wilmer, Laura Germine, Christopher F. Chabris, Garga Chatterjee, Mark Williams, Eric Loken, Ken Nakayama, and Bradley Duchaine, Human face recognition ability is specific and highly heritable, doi:10.1073/pnas.0913053107
Covers all the major angles. Nice that there’s a newspaper which can support this sort of reporting (on the other hand). Not surprising that Amy Bishop seems to have some history of delusions of grandeur: she claims that both she and her husband have an I.Q. of 180. That’s 5.3 standard deviations above the mean. Assuming a normal distribution, that’s about a 1 in 20 million probability. Of course the tails of the I.Q. distribution are fatter beyond 2 standard deviations than a normal expectation, but at these really high levels (above 160) I’m skeptical that most tests are measuring anything real.
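The 1-in-20-million figure checks out under the normal model: I.Q. 180 with mean 100 and SD 15 is (180 − 100)/15 ≈ 5.33 SD, and the upper-tail probability follows from the complementary error function. A quick sanity check:

```python
import math

def normal_upper_tail(z: float) -> float:
    """P(Z > z) for a standard normal variate, via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2))

z = (180 - 100) / 15        # about 5.33 SD above the mean
p = normal_upper_tail(z)
print(f"1 in {1 / p:,.0f}")  # roughly 1 in 20 million
```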
During early adulthood, a phase in which the central nervous system displays considerable plasticity and in which important cognitive traits are shaped, the effects of exercise on cognition remain poorly understood. We performed a cohort study of all Swedish men born in 1950 through 1976 who were enlisted for military service at age 18 (N = 1,221,727). Of these, 268,496 were full-sibling pairs, 3,147 twin pairs, and 1,432 monozygotic twin pairs. Physical fitness and intelligence performance data were collected during conscription examinations and linked with other national databases for information on school achievement, socioeconomic status, and sibship. Relationships between cardiovascular fitness and intelligence at age 18 were evaluated by linear models in the total cohort and in subgroups of full-sibling pairs and twin pairs. Cardiovascular fitness, as measured by ergometer cycling, positively associated with intelligence after adjusting for relevant confounders (regression coefficient b = 0.172; 95% CI, 0.168-0.176). Similar results were obtained within monozygotic twin pairs. In contrast, muscle strength was not associated with cognitive performance. Cross-twin cross-trait analyses showed that the associations were primarily explained by individual specific, non-shared environmental influences (≥80%), whereas heritability explained <15% of covariation. Cardiovascular fitness changes between age 15 and 18 y predicted cognitive performance at 18 y. Cox proportional-hazards models showed that cardiovascular fitness at age 18 y predicted educational achievements later in life. These data substantiate that physical exercise could be an important instrument for public health initiatives to optimize educational achievements, cognitive performance, as well as disease prevention at the society level.
The figure to the left is pretty striking, though the general correlation between intelligence and overall health has long been known. I’m not too sure that I accept that this correlation is as causal as they say it is, but it probably can’t hurt to encourage moderate exercise within the population. So even if this is another spurious correlation which leads to educational programs that don’t have the intended effect (increasing IQ), it wouldn’t do much harm, and perhaps might result in some good.
Citation: Maria A. I. Åberg, Nancy L. Pedersen, Kjell Torén, Magnus Svartengren, Björn Bäckstrand, Tommy Johnsson, Christiana M. Cooper-Kuhn, N. David Åberg, Michael Nilsson, and H. Georg Kuhn, Cardiovascular fitness is associated with cognition in young adulthood, PNAS 2009 : 0905307106v1-pnas.0905307106.
More Singularity stuff. I’m Not Saying People Are Stupid, says Eliezer Yudkowsky in response to my summary of his talk. The last line of his post says: “I’m here because I’m crazy,” says the patient, “not because I’m stupid.” So the issue is craziness, not stupidity in Eliezer’s reading. The problem I would say is that stupid people have the “Not Even Crazy” problem. They often can’t get beyond their basic cognitive biases because they don’t have a good grasp of a rational toolkit, nor are they comfortable and fluent in analysis and abstraction. I can grant that many smart people are wrong or crazy, but at least there’s a hope of having them internalize Bayes’ rule.
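Bayes' rule itself is compact: P(H|E) = P(E|H)·P(H) / P(E). A toy illustration of internalizing it (the hypothesis and all the probabilities here are invented purely for the example):

```python
def bayes_posterior(prior: float, likelihood: float, likelihood_alt: float) -> float:
    """P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    evidence = likelihood * prior + likelihood_alt * (1 - prior)
    return likelihood * prior / evidence

# Invented numbers: a claim starts at a 1% prior; the evidence is 20x more
# likely if the claim is true (0.80) than if it is false (0.04).
posterior = bayes_posterior(prior=0.01, likelihood=0.80, likelihood_alt=0.04)
print(round(posterior, 3))  # ~0.168
```

Even strongly diagnostic evidence leaves a low-prior claim well short of certainty, which is exactly the kind of correction to intuition the rule provides.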
Visa announced this spring that spending on Visa debit cards in the United States surpassed credit for the first time in the company’s history. In 2008, debit payment volume was $206 billion, compared with credit volume of $203 billion. MasterCard reported that for the first six months of this year, the volume of purchases on its debit cards increased 4.1 percent, to $160 billion, in the United States. Spending on credit and charge cards sank 14.8 percent, to $233 billion.
“Consumers are rational thinking individuals, and they’re going to shift their behavior in a way that fits with their current economic situation,” said Scott Strumello, an associate with the Auriemma Consulting Group, a Long Island-based payment card advisory firm. “They’re thinking more seriously about it, and many may decide, ‘I’m going to use debit where I can and reserve credit for larger purchases.’ ”
I think what’s really going on here is that people are embracing the pain of paying; when you decouple the time of payment from what you’re purchasing, that tends to result in more purchasing than would otherwise be the case. A perfectly rational individual wouldn’t need to make a distinction between debit and credit: what does it matter if you pay for a latte tomorrow (that is, it comes out of your account tomorrow) vs. in the next billing cycle? No, people are rational about the fact that they are irrational. Pay later = buy more, pay tomorrow = buy less. If you want to buy less, then heighten the immediacy of the cost.
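One way to formalize "pay later = buy more" is hyperbolic discounting, where a cost d days away is felt as cost / (1 + k·d). A toy sketch (the discount rate k and the amounts are invented for illustration, not estimates from any study):

```python
def felt_cost(cost: float, delay_days: float, k: float = 0.1) -> float:
    """Hyperbolic discounting: delayed costs feel smaller than immediate ones."""
    return cost / (1 + k * delay_days)

latte = 4.00
print(felt_cost(latte, delay_days=1))   # debit: hits the account tomorrow
print(felt_cost(latte, delay_days=30))  # credit: next billing cycle, feels much cheaper
```

The same $4 latte "feels" like a quarter of the price when the bill is a month out, which is the mechanism behind buying more on credit.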
Recently I listened to the author of Addiction: A Disorder of Choice, Gene M. Heyman, interviewed on the Tom Ashbrook show. A lot of the discussion revolved around the term “disease”, which I can’t really comment on, but a great deal of Heyman’s thesis is grounded in rather conventional behavior genetic insights. In short, a behavioral trait can have a host of inputs, and is often a combination of environment & genes developing over a lifetime. Alcoholism is not much of an issue among observant Mormons because of their environment, not their genes. Heyman points out that whereas some behavioral phenotypes, such as schizophrenia or autism, are extremely difficult or impossible to cure through one’s own personal choice (i.e., for schizophrenia you may need medication, while many autistics are what they are no matter the drug or environment), addiction therapy can work and so change the expression of the trait. Additionally he makes some important criticisms of the methodologies of clinical studies of addiction which seem important to me, primarily that there is a strong selection bias in these samples which overstates the inability to control impulse in individuals prone to addiction (similar problems probably resulted in an overestimate of the concordance for homosexuality among twins).
But the bigger issue is the same as the one that crops up with obesity, what role does personal responsibility and public policy play? Many of the critics of Heyman seem to be suggesting that he is reverting to blaming someone with an illness. The fat acceptance movement makes similar arguments. These issues, and the fact that policy and culture revolve around them, mean that we have to begin to rethink our conceptions of free will, choice and decision making. It isn’t about people being good, bad, irresponsible or moral, it is people being who they are, and confronting the cards they’re dealt.
We know that dogs can read human faces, it turns out that babies can infer the meaning of different dog barks:
New research shows babies have a handle on the meaning of different dog barks – despite little or no previous exposure to dogs.
Infants just 6 months old can match the sounds of an angry snarl and a friendly yap to photos of dogs displaying threatening and welcoming body language.
Update: See Ed Yong.
Randall Parker points me to a new paper from Joshua Greene which describes the neurological responses of individuals when they do, or don’t, lie, when lying might be in their self-interest. From EurekAlert:
The research was designed to test two theories about the nature of honesty – the “Will” theory, in which honesty results from the active resistance of temptation, and the “Grace” theory in which honesty is a product of lack of temptation. The results of this study suggest that the “Grace” theory is true, because the honest participants did not show any additional neural activity when telling the truth.
Using fMRI, Greene found that the honest individuals displayed little to no additional brain activity when reporting their prediction of the coin toss. However, the dishonest participants’ brains were most active in control-related brain regions when they chose not to lie. These control-related brain regions include the dorsolateral prefrontal cortex and the anterior cingulate cortex, and previous research has shown that these regions are active when an individual is asked to lie.
“When the honest people leave money on the table, you don’t see anything special or extra going on in their brains at all,” says Greene. “Whereas, when the dishonest people leave money on the table, that’s when you saw the most robust control network activation.”
If neuroscience is able to identify lies by peering into the brain of the liar, it will be important to distinguish between activity in the brain when lying and activity caused by the temptation to lie. Greene says that eventually it may be possible to detect lies by looking at someone’s brain activity, although a lot more work must be done before this is possible.
But behavioral economics experiments routinely show that despite similar outcomes, people (and other primates) hate a loss more than they desire a gain, an evolutionary hand-me-down that encourages organisms to preserve food supplies or to weigh a situation carefully before risking encounters with predators.
One group that does not value perceived losses differently than gains are individuals with autism, a disorder characterized by problems with social interaction. When tested, autistics often demonstrate strict logic when balancing gains and losses, but this seeming rationality may itself denote abnormal behavior. “Adhering to logical, rational principles of ideal economic choice may be biologically unnatural,” says Colin F. Camerer, a professor of behavioral economics at Caltech. Better insight into human psychology gleaned by neuroscientists holds the promise of changing forever our fundamental assumptions about the way entire economies function–and our understanding of the motivations of the individual participants therein, who buy homes or stocks and who have trouble judging whether a dollar is worth as much today as it was yesterday.
The gain vs. loss dictum indicates a strong risk aversion in humanity. Why might this be? I suspect it has to do with the fact that for most of our history we’ve been an animal like any other, on the Malthusian boundary, always facing individual or group extinction. The possibility of becoming as rich as Warren Buffet, or as prolific as Genghis Khan, by taking risks or trodding the path less taken, simply did not exist. The downside was extinction, the upside might be temporary success, only to see your lineage be swept away by history due to a propensity to gamble.
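Loss aversion of the kind described above is usually formalized with Kahneman and Tversky's prospect-theory value function, v(x) = x^α for gains and v(x) = −λ(−x)^β for losses, with λ ≈ 2.25 in their original estimates. A sketch using those canonical parameter values:

```python
def prospect_value(x: float, alpha: float = 0.88, beta: float = 0.88,
                   lam: float = 2.25) -> float:
    """Kahneman-Tversky value function: losses loom larger than equal gains."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** beta

gain = prospect_value(100)     # subjective value of winning $100
loss = prospect_value(-100)    # subjective value of losing $100
print(abs(loss) / gain)        # ~2.25: the loss hurts over twice as much
```

The autistic subjects in Camerer's work, by contrast, behave more like λ = 1, i.e. weighing gains and losses symmetrically.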
Theories of empathy suggest that an accurate understanding of another’s emotions should depend on affective, motor, and/or higher cognitive brain regions, but until recently no experimental method has been available to directly test these possibilities. Here, we present a functional imaging paradigm that allowed us to address this issue. We found that empathically accurate, as compared with inaccurate, judgments depended on (i) structures within the human mirror neuron system thought to be involved in shared sensorimotor representations, and (ii) regions implicated in mental state attribution, the superior temporal sulcus and medial prefrontal cortex. These data demonstrate that activity in these 2 sets of brain regions tracks with the accuracy of attributions made about another’s internal emotional state. Taken together, these results provide both an experimental approach and theoretical insights for studying empathy and its dysfunction.
“When a face is distorted, we have no pattern to match that,” Rosenberg said. “All primates show this [staring] at something very different, something they have not evolved to see. They need to investigate further. ‘Are they one of us or not?’ In other species, when an animal looks very different, they get rejected.”
And so, we stare. (An averted gaze is triggered in some people. This too can be overridden only with great difficulty.)
It doesn’t take much of a facial anomaly to trigger a transfixed response; a normal human face upside down will do it. Or one that is simply unmoving.
In her work with Paul Ekman, who pioneered the widely accepted theory that human emotion conveyed via facial expressions is biological in origin, Rosenberg studied a group of people with a condition that prevents their facial muscles from moving.
“They talk about how difficult it is to interact with people because people can’t handle looking at a face that doesn’t move,” Rosenberg said.
The Neurocritic points me to a paper, The brain structural disposition to social interaction:
Social reward dependence (RD) in humans is a stable pattern of attitudes and behaviour hypothesized to represent a favourable disposition towards social relationships and attachment as a personality dimension. It has been theorized that this long-term disposition to openness is linked to the capacity to process primary reward. Using brain structure measures from magnetic resonance imaging, and a measure of RD from Cloninger’s temperament and character inventory, a self-reported questionnaire, in 41 male subjects sampled from a general population birth cohort, we investigated the neuro-anatomical basis of social RD. We found that higher social RD in men was significantly associated with increased gray matter density in the orbitofrontal cortex, basal ganglia and temporal lobes, regions that have been previously shown to be involved in processing of primary rewards. These findings provide evidence for a brain structural disposition to social interaction, and that sensitivity to social reward shares a common neural basis with systems for processing primary reward information.
The primary figure, reedited for easy viewing on the page-width of this weblog:
Andrew Gelman has a post up titled Difficulties in trying to understand the views of others, responding to a Robin Hanson taxonomy outlining the motivations of liberals, conservatives and libertarians. Gelman is skeptical of Hanson’s glosses of each group.
The human ability to engage in Meta-Representation is one of the hallmarks of our species. We can analyze abstract ideas, take the positions of others, and examine counter-factuals and what-ifs. In terms of core competencies, our Theory of Mind is a sharp knife: we are unparalleled at modeling social relations contingent upon the mental states of other human beings and how they might react to a huge range of inputs. Many scholars have made the case that core first-order cognitive competencies such as Theory of Mind, Folk Physics, Meta-Representation, etc., are the units from which our more complex mental activities and cultural productions are derived. Scientific models, for example, are abstractions of reality which allow us to examine alternative outcomes as we shift the parameters of the system. Many cognitive scientists would argue that our belief in supernatural agents, gods and ghosts, is only possible because of our well-developed Theory of Mind.
What does this have to do with Robin Hanson’s post and Andrew Gelman’s response?
David Brooks has a column out where he mulls over the role of time invested in amplifying talent:
If you wanted to picture how a typical genius might develop, you’d take a girl who possessed a slightly above average verbal ability. It wouldn’t have to be a big talent, just enough so that she might gain some sense of distinction. Then you would want her to meet, say, a novelist, who coincidentally shared some similar biographical traits. Maybe the writer was from the same town, had the same ethnic background, or, shared the same birthday — anything to create a sense of affinity.
The primary trait she possesses is not some mysterious genius. It’s the ability to develop a deliberate, strenuous and boring practice routine.
Brooks is attempting to slap back at genetic determinism, but it sounds like he could be describing a gene-environment correlation. To a great extent that’s what “amplifying talent” is: a positive feedback loop between propensity and hard work.
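The feedback loop can be sketched as a toy simulation: a small initial propensity attracts practice, and practice compounds skill, so early differences are amplified over time. All the parameters here are invented for illustration, not fitted to any data:

```python
def amplified_skill(propensity: float, years: int = 10,
                    practice_gain: float = 0.3) -> float:
    """Toy gene-environment correlation: skill draws practice, and
    practice builds skill, so small initial edges compound."""
    skill = propensity
    for _ in range(years):
        practice = skill              # more skill -> more (rewarded) practice
        skill += practice_gain * practice
    return skill

# In this multiplicative model a 10% starting edge stays a 10% relative
# edge, but the absolute gap between the two individuals widens greatly.
print(round(amplified_skill(1.0), 2))  # 13.79
print(round(amplified_skill(1.1), 2))  # 15.16
```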