The cultural construction of truth

By Razib Khan | December 7, 2010 2:36 pm

If you know of John Ioannidis’ work, Jonah Lehrer’s new piece in The New Yorker won’t be a surprise to you. It’s alarmingly titled The Truth Wears Off – is there something wrong with the scientific method? Here are some sections which you can’t get without a subscription, and I think they get to the heart of the problem:

“Whenever I start talking about this, scientists get very nervous,” he says….

Jennions admits that his findings are troubling, but expresses a reluctance to talk about them publicly. “This is a very sensitive issue for scientists,” he says. “You know, we’re supposed to be dealing with hard facts, the stuff that’s supposed to stand the test of time. But when you see these trends you become a little more skeptical of things.”

There is no mysterious “force” in the universe. The answer will probably come down to a combination of plain randomness (regression to the mean falls into this category), individual bias, and the cultural incentives of the system of scientific production. This is partly a coordination problem. Most social psychologists, to pick on one discipline that even other psychologists will point fingers at, are probably aware that their results aren’t going to be robust over the long haul. But they have tenure to gain, mortgages to pay, and fame to accrue. This does not further the collective system-building which is science, but the first person to opt out of the rat race for sexy findings with publishable p-values will soon be an ex-scientist.
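To see how plain randomness plus selection for striking results can produce a “decline effect” with no mysterious force at all, here is a minimal simulation. Every number in it is hypothetical – a fixed true effect, noisy measurement, and a publication threshold of my own choosing – but the mechanism is the one described above: studies that clear the bar overestimate the effect, so honest replications regress back toward the truth.

```python
import random

random.seed(1)

# Hypothetical setup: a small true effect, measured with noise.
TRUE_EFFECT = 0.2
NOISE = 0.5        # standard deviation of measurement error
THRESHOLD = 0.6    # only estimates this large get "published" (illustrative)

def run_study():
    """One noisy estimate of the true effect."""
    return random.gauss(TRUE_EFFECT, NOISE)

# Publish only the studies with impressive estimates...
published = [e for e in (run_study() for _ in range(100_000)) if e > THRESHOLD]
# ...then replicate each one honestly, with no threshold.
replications = [run_study() for _ in range(len(published))]

print(f"mean published estimate:   {sum(published) / len(published):.2f}")
print(f"mean replication estimate: {sum(replications) / len(replications):.2f}")
print(f"true effect:               {TRUE_EFFECT:.2f}")
```

The published estimates cluster well above the true effect, while the replications land near it – the effect looks like it is “wearing off,” even though nothing about the world changed between the original studies and the replications.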

If you don’t have a subscription to The New Yorker, buying one off the newsstands for an article like this is much more worthwhile than another boring political profile. You should also check out Why Most Published Research Findings Are False. You can read that for free. Also see David Dobbs’ How to Set the Bullshit Filter When the Bullshit is Thick.

Note: Statistics are ubiquitous across many of the sciences, but the reality is that most people who use statistics don’t understand them very well. That’s not necessarily a problem; most people who use computers don’t know how they work either. But then again, most people don’t use the mouse as a foot pedal.


Comments (9)

  1. Markk

    Traditional statistical significance and p-value thresholds have always been a little off-putting to me. I am much more comfortable reading about actual models with confidence limits. The thing is, we run tens of thousands of scientific studies every year. To me, that is the population for judging p-value significance. So there will be thousands of “statistically significant” studies that are not showing us anything about reality.

    As long as researchers know that, things are OK, but when you see the numbers being plugged into R or whatever, ugh. Meta-studies in particular. We must use statistics in science; it is the right tool. But I think the subject should be taught in place of calculus in high school and as an undergraduate.
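The arithmetic behind Markk’s point can be sketched in a few lines. This is a hypothetical simulation – the two-sample design, sample size, and known-variance z-test are illustrative choices of mine, not anything from the comment – but it shows how a p < 0.05 threshold applied to thousands of null effects mechanically produces hundreds of “significant” findings.

```python
import math
import random

random.seed(0)

def null_study(n=30):
    """One study of a null effect: both groups drawn from the same distribution.

    Uses a known-variance two-sample z-test, so the false-positive
    rate is exactly the nominal alpha of 0.05.
    """
    a = [random.gauss(0.0, 1.0) for _ in range(n)]
    b = [random.gauss(0.0, 1.0) for _ in range(n)]
    z = (sum(a) / n - sum(b) / n) / math.sqrt(2.0 / n)
    return abs(z) > 1.96  # two-sided test at alpha = 0.05

STUDIES = 10_000
false_positives = sum(null_study() for _ in range(STUDIES))
print(f"{false_positives} of {STUDIES} null studies came out 'significant'")
```

With 10,000 studies of effects that do not exist, roughly 500 clear the significance bar anyway – exactly the “thousands of studies showing us nothing about reality” that the comment describes, scaled down by a factor of a few.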

  2. J

    Any recommendations (a good textbook or web tutorial) for someone who’s interested in learning statistics but never took classes on the subject in college? Regrettably, I wasted a lot of time in college and took only one math class. One of many things I would change if I could go back in time. :)

  3. IMO, the scientific method by itself sooner or later turns into a circular argument.

    How do we know that facts are “facts” and hard facts are hard? Only because a scientist told us so. But how do we know that his assumptions are still valid? The reason some scientists get nervous is that they often know there is ascertainment bias going on all the time. For science to thrive and ignorance to subside, science needs to establish a meaningful dialogue between the social and the hard sciences, between the sciences and the humanities, between the sciences/humanities and religions, etc., so that both parties constantly cross-check each other’s claims and translate them into each other’s “languages.” This dialogue shouldn’t be a sticky compromise but a well-regulated mutual commitment to respect each other’s territory.

    The same danger that awaits science has already befallen Christianity and its miracles: apparently they were abundant shortly after Christianity’s emergence and in the Middle Ages, but now it’s hard to come across a single one. And some branches of Christianity don’t need them to maintain their integrity.

  4. I’m reading Delusions of Gender right now, and this is one of the points Fine makes repeatedly, along with the fact that studies with “interesting” results are more likely to get published than ones with boring results.

  5. I think it’s great that issues of replicability are getting more attention now, but it would be nice to see more attention paid to the details. I like Ioannidis’ work because he sketches out nicely why the current publication system leads to high false-positive rates. Moreover, there’s nothing inevitable about it. Right now, the incentive structure rewards people who publish false positives and punishes those who try to replicate work. Yes, scientists in theory should be willing to sacrifice their careers (and their families) in the pursuit of truth, but wouldn’t it be better to set up the incentive structure so that clean data were in our own narrow interests?

    I’ve got a fully expanded post on these issues here.

  6. @German: If you don’t read any science yourself, then yes, it is true, the only way you know if something is scientific fact is by asking a scientist. Alternatively, you could read the actual experiments or do experiments yourself.

    Two things make a scientific theory successful: (a) it provides deep, satisfying explanations for the state of the world, and (b) it makes good predictions about the future. These are in theory separable, but to the extent they are separated, most scientists focus on (b). Whether or not a scientific theory correctly predicts the future is not a matter of opinion. It either does or it doesn’t.

    You are right, though, that there is some circularity here. The scientific method is provably the best method of producing theories that do a good job of predicting the future precisely because that’s at the core of the scientific method. Why evaluate theories in terms of their ability to predict the future? Why not prefer theories that are most intuitively correct? No reason. Science can only tell us about the state of the world, not about which parts of the world we should care about.

  7. “If you don’t read any science yourself, then yes, it is true, the only way you know if something is scientific fact is by asking a scientist. Alternatively, you could read the actual experiments or do experiments yourself.”

    Becoming a scientist yourself doesn’t solve the problem of circularity. You can forget about it and just join the culture of experimentation and believing in results. Science is like Communism: it works as long as everybody buys into it. If someone refuses to buy in, it reverts to Capitalism, the free, unregulated production of knowledge that “works.”

