Again, Chagnon, Sahlins, and science:
When we allow personal ideological bias to rule our scholarly work, we limit the value of our research to answer real questions and to contribute to broader social and scientific debates. If you have an ideological axe to grind, either leave scholarship and go into politics, or else find ways to achieve a level of scholarly objectivity in your research and writing. (Yeah, I know, the postmodernists are going to smirk about how naive I am to even use the word “objectivity.” Check out my past posts on epistemology; one can employ objective methods and maintain an overall level of objectivity while admitting that the world is messy and researchers are never free of preconceptions or bias.)
To paraphrase John Hawks, “I think it’s time to reclaim the name ‘archaeology’ from past generations.” We have lots of data and ideas to contribute to major scholarly and public debates today, but too often our writing and epistemological stance work against any wider relevance.
For various reasons cool detachment is harder to maintain in anthropology, and it should not always be the aim. But striving for detachment, even when it remains partly a pretense, is an essential part of science (coupled with curiosity and passion about the subject of interest). A counterpoint can be found in the comments below:
As many of you know, right before the election I made a $50 bet with Hank Campbell that Nate Silver would get at least 48 out of 50 states correct for the 2012 presidential election. I also got one of Hank’s readers to sign on to the same bet. Additionally, a few readers and Twitter followers got in on the wager; they were bullish on Romney’s prospects, and I was not (more honestly, I was moderately sure they were self-delusional, and willing to take their money to make them more cautious about their self-delusional biases in the future). But there’s a major precondition that needs to be stated here: I hedged.
Last February a friend told me he was 100% confident that Barack Hussein Obama would be reelected. This prompted me to ask for favorable terms on a bet. The logic was simple: if he was 100% confident, then giving me favorable odds shouldn’t be a major issue for him, because he would be collecting anyhow. As it happens he gave me 5 to 1 odds, so that I would collect $5 for every $1 he might collect. I told him beforehand that I actually thought Obama had a 60-70% chance of winning, so I went into the wager assuming I’d be out a modest amount of money. But that was no concern. My goal was now to convince those who were irrationally supportive of Romney to take the other side of the bet. For whatever reason people have an inordinate bias toward their hoped-for candidate in terms of who they think will win, as opposed to who they wish to win. The future ought gets confused with the future is.* I got people to take the other side, which meant that I was going to make money no matter who won.
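The arithmetic of the hedge is worth making explicit. A minimal sketch, with hypothetical stake sizes (not the amounts actually wagered): one bet against Obama at 5-to-1 with the overconfident friend, one even-money bet on Obama with the Romney optimists.

```python
# Hypothetical illustration of the hedge described above.
# Stake amounts are made-up numbers, not the real wagers.
def hedge_payoffs(stake_vs_friend, odds, stake_on_obama):
    """Return (net payoff if Obama wins, net payoff if Romney wins)."""
    # Bet 1: against Obama at `odds`-to-1 with the 100%-confident friend.
    # Bet 2: on Obama at even money with the Romney optimists.
    if_obama = stake_on_obama - stake_vs_friend
    if_romney = odds * stake_vs_friend - stake_on_obama
    return if_obama, if_romney

print(hedge_payoffs(10, 5, 30))  # -> (20, 20): profit either way
```

Any even-money stake strictly between 1× and 5× the stake against Obama yields a positive payoff in both outcomes; that band of guaranteed profit is exactly what the friend’s 5-to-1 overconfidence bought.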
At this point one might wonder about my comment that I suspected that those who were bullish on Romney were delusional. It’s rather strong, and my reasoning is actually rather strange. Overall I accepted the polling averages. A few years back I was an economic determinist about election outcomes, but Nate Silver had convinced me that the sample size was too small to get a good sense of the real proportion of variation being predicted here. In short, the economy matters, but I stepped back from the supposition that it was determinative (as it happens, purely economic models that were excellent at predicting past elections face-planted this time). So that’s why I relied on the polls. Though I leaned on Nate Silver, I didn’t think he was particularly oracular, and I’d say I’m mildly skeptical of the excessive faith some put in him personally. When I put up a link to Colby Cosh’s mild take-down of Silvermania I received a few moderately belligerent comments. This despite the fact that I was willing to put money on Silver’s prediction.
Betsey Stevenson and Justin Wolfers hail the way increases in computing power are opening vast new horizons of empirical economics.
I have no doubt that this is, on the whole, change for the better. But I do worry sometimes that the social sciences are becoming an arena in which number crunching sometimes trumps sound analysis. Given a nice big dataset and a good computer, you can come up with any number of correlations that hold up at the 95 percent confidence level, about 1 in 20 of which will be completely spurious. But those spurious ones might be the most interesting findings in the batch, so you end up publishing them!
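The 1-in-20 worry is easy to demonstrate by brute force. A purely illustrative simulation: correlate a thousand columns of pure noise against a noise “outcome” and count how many clear the conventional 5 percent bar (for n = 100 observations, roughly |r| > 0.197):

```python
import random

random.seed(42)

def pearson(xs, ys):
    # Plain-Python Pearson correlation coefficient.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / (vx * vy) ** 0.5

n, trials = 100, 1000
outcome = [random.gauss(0, 1) for _ in range(n)]
# For n = 100, |r| > ~0.197 is "significant" at the two-tailed 5% level.
hits = sum(
    abs(pearson([random.gauss(0, 1) for _ in range(n)], outcome)) > 0.197
    for _ in range(trials)
)
print(hits, "of", trials)  # roughly 5% of pure-noise predictors "hold up"
```

Around fifty “findings” per thousand, every one of them meaningless. A dataset with a thousand candidate variables manufactures them for free, which is why the crunching has to be paired with sound analysis.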
Those in genomics won’t be surprised at this caution. I think in some ways social psychology and areas of medicine suffered a related problem, where a massive number of studies were “mined” for confirming results. And we see this more informally all the time. In domains where I’m rather familiar with the literature and distribution of ideas it is often easy to infer exactly which Google query the individual entered to fetch back the result they wanted. More worryingly I’ve noticed the same trend whenever people find the historian or economist who is willing to buttress their own perspective. Sometimes I know enough to see exactly how the scholars are shading their responses to satisfy their audience.
With great possibilities comes great peril. I think the era of big data is an improvement on abstruse debates about theory which can’t ultimately be resolved. But you can do a great deal of harm as well as good.
As most long-time readers know, I screen, at least cursorily, comments by people who have not posted before. Except for purposes of entertainment I won’t publish Creationist comments. Naturally some comments are offensive, but a surprising number of those I don’t let through are of the “not-even-wrong” or “too-stupid-to-understand-the-original-post” class. But yesterday a comment in the mod queue really confused me. My initial instinct was to spam it, but I was moderately intrigued, so I let it through. In response to my assertion of having read material which indicated that Gaelic was the language of the Irish peasantry before 1800, Paul Crowley asserted:
What you have read is quite wrong. It is a common misconception (especially in Ireland) based on wishful nationalistic thinking. Farmers and peasants do not drop their native language and learn to speak another without extreme compulsion. While there was some pressure, there was no compulsion. The ancient ruling class — as represented later by the Irish Earls, and as seen in the courts of local chiefs — spoke Gaelic, and it is they who left nearly all the records. Illiterate farmers leave very few records, but what little there is suggests that English has been the tongue of the great bulk of the Irish peasantry for as far back as we want to go. The rebels of 1598 all spoke English. Walter Raleigh had no difficulty understanding the speech of local people in Cork in the 1570s.
The great difficulty with the records is that the ‘data’ on this matter reflects aspirations rather than facts. Since the ‘English’ (actually the Norman-French) invaded in 1172, every self-respecting Irishman has declared his deep love and respect for the language so cruelly taken from him….
There were statements in the comment which I’m very skeptical of (e.g., “Farmers and peasants do not drop their native language and learn to speak another without extreme compulsion” is obviously plain bullshit; there are plenty of ethnographic and historical counter-examples to this!). But the commenter asserted forcefully, in cogent English. I don’t know the area, so though I was very skeptical I let the comment through.
Paul Ó Duḃṫaiġ responded rather well, with citations. In hindsight I made a mistake in letting the original comment go through without a citation. But I assumed that Paul & Paul would respond to Paul (the Irish are not creative in first names?), and they did. Paul Crowley’s success in getting through my bullshit filter indicates the power of assertive coherency; far too many nuts exhibit standard nut style. Pegging someone as a nut by style rather than substance is far easier. In the case of substance you have to have a relatively good grasp of the field. Irish historical linguistics is not a field in which I’m deeply knowledgeable, so I used my stylistic bullshit detector, despite my misgivings.
This is analogous to the “Shaggy defense.” Make shit up in the face of overwhelming evidence, and see if anyone buys it. It worked with me. Live and learn.
Nate Silver has an important post, Herman Cain and the Hubris of Experts. It’s not really about Herman Cain. Rather, it’s about the reality that pundits tend to underestimate uncertainty and complexity. Saying you don’t know isn’t as satisfying as making a definitive categorical assertion. This manifests particularly in the domains of sports and politics because there are clear and distinct criteria by which to assess predictive power. Politicians win or lose elections, while teams win or lose games. And yet despite the long history of minimal value-add on the part of pundits, they persist in both domains. Why? I think it’s pretty obviously a cognitive bias toward storytelling. Similarly, in the 1930s Alfred Cowles concluded that financial newsletters didn’t help their readers “beat the market,” but he also assumed these newsletters would persist. There was a psychological need for them.
The key here is to change the attitude of the pundit class. The populace will always have a preference for stories with plausible and clean conclusions over radical uncertainty. Not surprisingly many professional pundits reacted with hostility to Silver’s observation that they’re quite often wrong. I don’t venture into political punditry often, but when the Democrats passed health care reform I predicted that Mitt Romney would have no shot at winning the Republican nomination. The facts in this case seemed so clear. Romney was going to be walloped over and over again on his record on health care reform as governor. I was wrong. Romney may not win, but obviously he’s a contender. My logic was simple and crisp, but the logic was wrong. That’s why you let reality play out. If what was “on paper” determined national elections, then we’d be talking about President Hillary Clinton.
Of course political journalists that engage in analysis still have a role to play. Don’t newspapers have horoscopes and style sections?
If you know of John Ioannidis‘ work, Jonah Lehrer’s new piece in The New Yorker won’t be a surprise to you. It’s alarmingly titled The Truth Wears Off – is there something wrong with the scientific method? Here are some sections which you can’t get without a subscription, and I think they get to the heart of the problem:
“Whenever I start talking about this, scientists get very nervous,” he says….
Jennions admits that his findings are troubling, but expresses a reluctance to talk about them publicly. “This is a very sensitive issue for scientists,” he says. “You know, we’re supposed to be dealing with hard facts, the stuff that’s supposed to stand the test of time. But when you see these trends you become a little more skeptical of things.”
There is no mysterious “force” in the universe. The answer is probably going to come down to a combination of the reality of randomness (regression to the mean falls into this category), individual bias, and the cultural incentives of the system of scientific production. This is partly a coordination problem. Most social psychologists, to pick on one discipline which even other psychologists will finger-point toward, are probably aware that their results aren’t going to be robust over the long haul. But they have tenure to gain, mortgages to pay, and fame to accrue. This is not furthering the collective system-building which is science, but the first person to opt out of the rat race for sexy findings with publishable p-values will soon be an ex-scientist.
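The regression-to-the-mean piece of this is simple to simulate. A toy model with made-up numbers: a modest true effect, noisy studies, and a “journal” that publishes only the largest estimates. Honest replications of the published results then appear to decline:

```python
import random

random.seed(0)

true_effect = 0.2   # assumed modest real effect (arbitrary units)
noise = 0.5         # sampling noise per study

originals, replications = [], []
for _ in range(10_000):
    first = true_effect + random.gauss(0, noise)
    if first > 1.0:  # only "sexy" large estimates get published
        originals.append(first)
        # an honest replication of the same underlying effect
        replications.append(true_effect + random.gauss(0, noise))

orig_mean = sum(originals) / len(originals)
rep_mean = sum(replications) / len(replications)
print(orig_mean, rep_mean)  # published estimates inflated; replications near 0.2
```

Nothing “wore off”: the original estimates were inflated by the selection step, and the replications simply measure the effect without it.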
If you don’t have a subscription to The New Yorker, buying one off the newsstands for an article like this is much more worthwhile than another boring political profile. You should also check out Why Most Published Research Findings Are False. You can read that for free. Also see David Dobbs’ How to Set the Bullshit Filter When the Bullshit is Thick.
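The core arithmetic of the Ioannidis paper fits in a few lines. With pre-study odds R that a tested relationship is real, power 1 − β, and significance threshold α, the probability that a “significant” finding is actually true is PPV = (1 − β)R / ((1 − β)R + α):

```python
def ppv(pre_study_odds, power=0.8, alpha=0.05):
    """Positive predictive value of a statistically significant result,
    per Ioannidis (2005): (1 - beta) * R / ((1 - beta) * R + alpha)."""
    r = pre_study_odds
    return power * r / (power * r + alpha)

# A well-motivated hypothesis (1:1 odds of being true) versus a long shot
# (1:100 odds, e.g. an exploratory scan over many candidate variables):
print(ppv(1.0))   # most significant results are real
print(ppv(0.01))  # most significant results are false positives
```

At long-shot pre-study odds, even well-powered, perfectly executed studies leave most published positives false; bias and underpowered designs only push the number lower.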
Note: Statistics are ubiquitous across many of the sciences, but the reality is that most people who use statistics don’t understand them very well. That’s not necessarily a problem; most people who use computers don’t know how they work either. But then again, most people don’t use the mouse as a foot pedal.
Update: The title is way too strong as a reflection of my opinion. I’ve added a question mark.
A friend once observed that you can’t have engineering without science, making the whole concept of “social engineering” somewhat farcical. Jim Manzi has an article in City Journal which reviews the checkered history of scientific methods as applied to humanity, What Social Science Does—and Doesn’t—Know: Our scientific ignorance of the human condition remains profound.