Jonah Lehrer has a post up, How Preschool Changes the Brain, over at Frontal Cortex. He reports on a paper, Investing in our young people, which has been around for about five years. Its top line is this: an investment of ~$2,500/year (inflation adjusted) in a preschool program in the early 1960s seems to have been effective in improving the life outcomes of at-risk, low-SES young black Americans tracked over their lives up to the age of 40. Their measured I.Q.s were not initially high, 85–75, the 15th to the 5th percentile (though the median black American I.Q. is ~85, so not so low within their ethnic group). They did gain an initial I.Q. boost, but as with most of these programs that boost disappeared over time. In terms of their non-cognitive skills, however, there remained an appreciable effect which impacted their life outcomes. What were these non-cognitive skills? To me they resemble classical bourgeois values rooted in low time preference: a willingness to be a “grind,” to work hard, forgo short-term pleasures, and not cave in to impulses with short-term gains and long-term costs.
Here’s a figure from the paper which I’ve reedited with labels:
Intuitively we understand this; through experience we know of it. There are individuals with high intellectual aptitudes who lack self-control, and who do not succeed in life because of poor life choices. There are individuals with mediocre intellectual aptitudes who achieve a certain amount of comfort and prestige in their lives because of their rock-solid focus on their goals. By analogy, an old under-powered computer with Ubuntu installed, running OpenOffice, will still perform at a higher level in achieving productivity goals than a high-powered computer loaded with a spyware-riddled copy of Windows and mostly running games which demand computational muscle beyond the specs of the box.
My main question is one of interpretation: is the change in the non-cognitive skill portfolio due to intervention at a “critical period” in a neurobiological sense? The authors make an explicit analogy to language. If children are exposed to a language before the age of 12 they can generally learn to speak it without an accent with marginal effort. Severely abused children, or in rarer cases “feral children,” who are not exposed to language at all in their formative years may remain unable to speak any language fluently for the rest of their lives after recontact with mainstream society. This is likely a function of the biological aspect of language acquisition and learning. Or at least that is the contemporary consensus.
Does this apply to non-cognitive skills? I am moderately skeptical, though my attitude here is provisional at best. Throughout the pre-prints the authors take a methodologically individualist perspective: individuals invest in their skills, and the earlier they invest the more positive feedback loops can emerge, so that their skills mature, extend, and sharpen. There’s clearly something to this. But the focus on family environment and the like in the paper makes me a touch skeptical. There is a large behavior-genetic literature which suggests that family environment, the “shared environment,” is not very predictive of long-term outcomes. Rather, the “non-shared environment” explains about half of the variance in outcomes for many behavioral traits (the balance being genetic variation).
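As a rough illustration (not from the paper itself), the behavior-genetic literature referenced here typically partitions phenotypic variance with the classic ACE model: additive genetic variance (A), shared environment (C), and non-shared environment (E). The proportions in the sketch below are hypothetical, chosen only to mirror the pattern described above: shared environment near zero, non-shared environment around half, genes the balance.

```python
# Hedged sketch of the ACE variance decomposition.
# The numbers are illustrative assumptions, not estimates from the paper.

def ace_shares(a: float, c: float, e: float) -> dict:
    """Return each component's share of total phenotypic variance."""
    total = a + c + e
    return {
        "A (additive genetic)": a / total,
        "C (shared environment)": c / total,
        "E (non-shared environment)": e / total,
    }

# Hypothetical values matching the rough pattern in the text:
shares = ace_shares(a=0.45, c=0.05, e=0.50)
for name, share in shares.items():
    print(f"{name}: {share:.0%}")
```

Under these assumed inputs, the shared-environment share is small, which is the substance of the behavior-genetic objection to family-environment explanations.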
In The Nurture Assumption Judith Rich Harris argued that the non-shared environment really refers to peer groups. Again, the analogy to language is illustrative. Children do not speak with the accent of their parents; they speak with the accent of their peer groups. There is an exception to this: autistic children (or children who consciously cultivate a particular affect). Though I was not explicit, this is the sort of dynamic I was indicating when I suggested that culture matters in saving. Different cultures have different norms, values, and frameworks in which you can express your personality predispositions. In genetic terminology, I’m talking about a norm of reaction.
Quickly skimming through the original paper on which Jonah Lehrer’s post was based (and skipping over the guts of the economic modeling), I was unclear on whether there was a long-term peer-group effect, as the authors didn’t seem to explore this possibility. Perhaps instead of a critical period in a neurobiological sense, what we’re seeing here is the emergence of specific peer groups which reinforce and buffer individuals in decision making and goal setting? Perhaps the original intervention resulted in the emergence of a new subculture within the low-SES black community of Ypsilanti, Michigan?
Life outcomes can vary a great deal based simply on social norms.
In terms of the bottom line this may not change the policy conclusion much. The operational outcome of a given policy may be the same even if the means by which the outcomes are realized differ. That being said, it probably does matter on the margins, when it comes to the details of policy formation, whether the effect is due to individual-level biological changes vs. group-level norm shifts.
Image Credit: CDC