If you’re a climate blogger who often plays up the most potentially catastrophic consequences of global warming, and a new sociological study finds that “dire messages warning of the severity of global warming and its presumed dangers can backfire,” what do you do?
You shoot the messengers, of course. In this case, as Joe Romm headlined last month, that would not be the researchers of the study but the journalists who wrote about it:
Media blows the story of UC Berkeley study on climate messaging
A day after that post appeared, the messengers received another scolding from Brad Johnson, Romm’s blogging colleague at the Center for American Progress. Johnson elaborated on how the media purportedly botched its reporting of the findings. But he was also mildly critical of the Berkeley press release, for its “confusing portrayal of the study’s results,” and of the researchers themselves. Johnson asserted that their conclusions “have been somewhat misleadingly presented” in their own paper.
But what really caught my eye was this passage from Johnson:
In part because of the misleading presentation in the paper and the press release, journalists like the Washington Post’s Juliet Eilperin, New York Times’ Andy Revkin (who rejects the science that significant climate impacts are already being felt in the United States), Time Magazine’s Bryan Walsh, Greener World Media’s Adam Aston, Discovery News’s Kieran Mulvaney, and social scientist Matthew Nisbet misinterpreted the results.
That’s a whole lot of misinterpreting going on, and from people I trust to get things right. So I thought I’d check this out with Robb Willer, a UC Berkeley sociologist and a co-author of the study. What follows are two questions I asked him and his responses (via email):
Q: Was your paper presented in a misleading fashion, and if so how?
BW: Perhaps predictably, being one of the two authors of the paper, I don’t think that it was presented in a misleading fashion at all. I corresponded some with Brad Johnson over email about this. I think that his blog post was, on the whole, a pretty accurate discussion of the paper and its findings. Where I differ with Brad is in his characterization of the messages we used in our research. In them we tried to (1) give an accurate description of the scientific view of global warming’s “possible” consequences, and then (2) close with either optimistic or pessimistic conclusions. The result, we claimed, was that one message was “dire” and the other “positive.” I think they were on the whole. I think this is straightforward reasoning, and not at all misleading.
Here’s an analogy I drew in my email with Brad: if I told you something terrible was “possible” (e.g., “your neighbor could get lung cancer”), but I am optimistic and confident it won’t if reasonable steps are taken (e.g., “if she quits smoking and adopts a healthy lifestyle”), I wouldn’t call that a “dire” message. And certainly it wouldn’t seem dire in contrast with a message that said her prospects for recovery were extremely poor. I think it would be justifiable to label the message with the optimistic conclusion as “positive”…or at least as positive as it gets when you’re messaging about cancer, global warming, etc.
Q: Did those journalists Brad listed misinterpret the results of the paper? If so, was there any particular misinterpretation that warrants correcting?
BW: One thing that has come up in a couple places is the small sample size used in the research. For example see here. [KK: Willer is referring to this criticism from Robert Brulle: "The conclusions drawn from a tiny study don't support the extravagant claims made in the press."] It’s very understandable to criticize social scientific research for having a small sample size, and I think it’s a criticism with some very obvious merits. But a couple points should be kept in mind.
First, these were experimental studies. The value of experimental studies lies not in the use of large, representative samples from larger populations, but in the use of random assignment of study participants to experimentally controlled conditions. Such an approach largely mitigates concerns regarding spurious causation. Experiments are a powerful empirical method, especially well-suited for testing causal claims. Of course there’s an important place for large-scale representative survey studies as well, though they are typically viewed as most useful for describing patterns of opinion in large populations. They are useful, but not as well-suited as experiments, for evaluating causal claims such as the one we examined (the old “correlation doesn’t equal causation” point). In our paper we began by citing large-scale survey data suggesting that belief in global warming had leveled off or even declined in the U.S. in recent years. We then turned to experiments to test a hypothesis that might help explain that pattern.
Second, my co-author and I have conducted other studies on this subject that agree with those in the paper. For example, an earlier version of our paper included studies showing that negative messages about global warming tend to backfire in their effects on belief in global warming. One was another smallish study with 50 undergraduate participants, the other a larger field study of 306 Americans recruited via the internet. In the course of peer review, Psychological Science suggested that we cut these studies in order to make the paper more focused on the link between individuals’ “just world” beliefs and belief in global warming. They viewed this as the more significant scientific contribution of the article, not the point about positive versus dire messaging. The latter point is not as novel from a scientific perspective, as I understand that there is a fair amount of past research on the relative efficacy of fear-based appeals versus more positive ones.
But, these clarifications aside, I want to say that I don’t have any desire to quibble with global warming activists. My co-author and I conducted this research not just to advance the general, scientific understanding of political attitudes, but hopefully also to offer some practical insights on why so many Americans struggle to accept the conclusions of scientific research on global warming. We hope that this research offers some helpful insights for people working in this arena. My co-author and I continue to conduct research in this vein, including a couple projects we are working on now. We hope that these and future findings will help us better understand global warming skepticism and related patterns of opinion. The dynamics of public opinion and political attitudes are quite complex and we believe that systematic, scientific research in this domain is potentially very important and fruitful.
The other part of that passage from Brad Johnson’s post that struck me was his gratuitous swipe at Andrew Revkin. Johnson wrote that Revkin
rejects the science that significant climate impacts are already being felt in the United States
It seems odd that Johnson would shoehorn this in as an aside. Anyway, I emailed Revkin to ask if the characterization was accurate. He responded just before leaving for the climate talks in Cancun:
One quick gripe of course is with his (intentionally?) murky statement of my views on U.S. climate impacts. Of course the U.S. has felt significant climate impacts this year (ask anyone in Nashville). What is not clear at all is whether there’s a discernible contribution in US extreme weather from global warming driven by the global buildup of greenhouse gases.