'Doubters' of Climate Change Lack Scientific Expertise

By Sheril Kirshenbaum | July 23, 2010 11:14 am

Now there’s data–actual data–showing how few climate scientists doubt the existence of climate change. From Science Daily:

The small number of scientists who are unconvinced that human beings have contributed significantly to climate change have far less expertise and prominence in climate research compared with scientists who are convinced, according to a study led by Stanford researchers.

Expertise was evaluated by the number of papers on climate research written by each individual, with a minimum of 20 required to be included in the analysis. Climate researchers who are convinced of human-caused climate change had on average about twice as many publications as the unconvinced, said Anderegg, a doctoral candidate in biology.

Prominence was assessed by taking the four most frequently cited papers published in any field by each scientist — not just climate science publications — and tallying the number of times those papers were cited by other researchers. Papers by climate researchers convinced of human effects were cited approximately 64 percent more often than papers by the unconvinced.
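
The study's two metrics are simple enough to sketch in code. The following is an illustrative Python sketch only, assuming a toy list of per-paper citation counts; it is not the authors' actual code or dataset, and the function names are invented.

```python
# Illustrative sketch of the study's two proxies (hypothetical data;
# not the PNAS authors' actual analysis).

def expertise(climate_paper_count):
    """Expertise proxy: number of climate papers authored.
    The study required at least 20 for inclusion."""
    return climate_paper_count

def prominence(citation_counts, top_n=4):
    """Prominence proxy: total citations to the author's top-N most
    cited papers, in any field (N=4 in the study)."""
    return sum(sorted(citation_counts, reverse=True)[:top_n])

# Example: citation counts for each of one researcher's 22 papers.
counts = [310, 150, 95, 44, 12, 7] + [3] * 16

if expertise(len(counts)) >= 20:      # meets the inclusion threshold
    print(prominence(counts))         # 310 + 150 + 95 + 44 = 599
```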

“I never object to quoting opinions that are ‘way out.’ I think there is nothing wrong with that,” said Stephen Schneider, professor of biology and a coauthor of the paper in Proceedings of the National Academy of Sciences. “But if the media doesn’t report that something is a ‘way out’ opinion relative to the mainstream, then how is the average person going to know the relative credibility of what is being said?”

“It is sad that we even have to do this,” said Schneider. “[Too much of] the media world has just folded up and fired its reporters with expertise in science.”

The Stanford team is prepared for the doubters of anthropogenic climate change to object to their data.

Unfortunately, I am too. Carry on…

(H/T Philip)

Comments (53)

  1. Sheril,
    What’s really interesting is that the original story ran nearly a month ago, and we all collectively missed it. Of course, I was in an internet-free marsh at the time . . .

  2. Steve

    Just to be a party pooper, but based on the excerpts you used here, I can see those in the denial camp objecting strongly to the study.

    The criteria just seem to reinforce the mistaken belief in a vast conspiracy promoting climate change. They’ll say that just because someone writes more papers and has them cited more by other people (who are naturally in on the conspiracy) doesn’t mean that they are right, it just means their papers were published more. And since one of the main claims I’ve always seen from deniers is that the mainstream is always suppressing the dissenting opinions, this data will simply reinforce their beliefs.

  3. Problem is, the folks that are leading the denialism charge will just claim that doubters of AGW are less likely to be published because the man is keeping their opinions suppressed. That may be a legitimate criticism, but they’ll take it as evidence of conspiracy…

  4. Brent

    I love the irony that the method used to prove that the people who have faith in A.C.C. are more scientific is itself unscientific. You have to first make the assumption that publication, and citation to publication, equates with scientific expertise. Sure, it may be a measure of it, but it could also indicate a scientist’s popularity. After all, isn’t it reasonable to assume that scientists who buy into the religion of human-caused climate change are more liked by their peers, and thus published and cited more (even in non-climate-change publications)? Test the theory with science, like any other theory. That is how you will convince skeptics like me; not with popularity contests.

  5. Nullius in Verba

    “showing how few climate scientists doubt the existence of climate change”

    There are virtually no sceptics who doubt the existence of climate change. This definition is nonsense.

    “The small number of scientists who are unconvinced that human beings have contributed significantly to climate change”

    That’s also the wrong definition. A large number of sceptics are also “convinced that human beings have contributed significantly to climate change”. You will thus be including huge numbers of sceptics in your total. The questions are over how much, and whether it is going to be a problem.

    “Expertise was evaluated by the number of papers on climate research written by each individual…”

    How is that a measure of expertise? Or correctness?

    You are seriously evaluating the quality of researchers by the number of papers written?! It’s an interesting form of ad populum argument, and certainly not new, but most scientists complain about the “publish or perish” use of publication/citation statistics instead of quality to judge academic worth. It’s generally regarded as a rather poor measure.

    It is also, very obviously, confounded by selection effects. Sceptics find it much harder to get funding. Sceptics often keep their heads down and avoid airing controversial views for ‘political’ reasons. (Academic politics is well known for it.) Sceptics report that peer review is biased in favour of the consensus. Obviously if you define “expertise” this way, you will get such an outcome whether the thesis is true or not, but any social sciences undergraduate or professional statistician would immediately recognise that there was a serious problem of selection bias and probably reject the result as meaningless.

    If you wanted to try to prove that journals were biased against scepticism (“It won’t be easy to dismiss out of hand as the math appears to be correct theoretically…”), and unfairly keeping sceptical papers out, you’d probably do exactly the same study.

    Why use such an indirect metric, rather than a direct one? Why not simply ask them?

    This entire post is based on the fallacy of argumentum ad populum. Science is not decided by a show of hands. And I think the fact that they use such an argument says a lot about the authors’ scientific expertise, as well. It’s quite funny, though, that this is now the best they can do.

    Personally, I think this sort of thing helps the sceptics’ case. Anybody who has read the HARRY file or any of the rest of it will recognise that none of the substantive points have been, or are being, answered, and will see this sort of thing as a sign of desperation. So please, do continue to post this stuff. It’s very helpful.

  6. Philip, yes I saw it wasn’t this month, but thought it was a good reason to highlight Stephen Schneider.

    Although I didn’t realize CM covered this already.

  7. Jon

    Brent: [Publication] … could also indicate a scientist’s popularity.

    If that were the case, the lid would have blown off this a long time ago. If you can demonstrate that the paradigm is wrong, then your scientific career is made. The incentives to do that over the last few decades are obvious.

    Of course, the other way to make your career is to validate the prejudices of certain resource-heavy interests. Even better, get lots of media attention from your supposed victimization and “principled” dissent (The Onion nailed this story a long time ago).

  8. gillt

    Nullius in Verba: “You are seriously evaluating the quality of researchers by the number of papers written?! It’s an interesting form of ad populum argument.”

    As I see it, you have the following options:

    1. You don’t understand the fallacy
    2. You don’t understand what peer reviewed or refereed means
    3. You reject how ALL modern scientific knowledge advances.

    Take your pick.

    And your opinion that most scientists complain about the threat of “publish or perish” (you’ve provided no data for that, and therefore engage in a double standard) does not mean those same scientists reject the peer review process out of hand.

    Again, saying that heaps of data have been and continue to be published in support of a theory, thereby making that theory more robust, only gets labeled “argumentum ad populum” when your personal bias gets in the way.

  9. Jon

    You don’t understand what peer reviewed or refereed means

    That’s the big one. If you have a conspiratorial mind, you could dream up a scenario where worldwide, every scientist, every last one of them, is suddenly going unscientific, letting things pass and not applying what they know during peer review. Mind you, all it would take is one scientist to break ranks and demonstrate how all the lines of evidence, all the independent researchers and their organizations are wrong. But not one of them is doing that, because, you know, it’s all a big conspiracy. All of them, every last one of them, would rather lie and bring about a communist utopia than do wonders for their careers by showing why all their colleagues are wrong. Because secretly, all scientists are commies.

  10. Nullius in Verba

    Jon,

    A number of aspects of it were demonstrated years ago. (And published.) But if one has an air of authority and continues to assert it confidently enough, most people won’t bother to check and will continue to take the authority on trust.

    gillt,

    1. The number of papers is no better a guide to the truth of a proposition than the number of people who believe it, or the number of times it is said, for essentially the same reasons.
    2. I’ve done peer-review. I know how it works.
    3. Scientific knowledge does not advance by journal peer review. It advances by the rest of the scientific community reading those papers and trying to falsify them. Peer review only seeks to identify papers worth trying to falsify.

    No scientist “rejects” the peer review process. But it doesn’t serve the purpose you seem to think it does. It’s a perfectly good way of doing what it is intended to do. But it is not intended to rule on scientific truth, or measure “expertise”.

    “Heaps of data” may be an argument for a theory – it depends on the data. “Heaps of papers” are not. (Nor are mere assertions that there are “heaps of data” without saying what this data is, or where it comes from.) When people stop talking about what’s in the data and turn instead to the number of papers written about it, you’ve got a heap of trouble.

    There were once “heaps of data” in support of Newtonian mechanics, and only the Michelson-Morley experimental result – a single null – against. But a single number, if it is the right one, can outweigh over 200 years worth of papers written by the scientific experts. (And there’s no doubt, they were experts.) You surely know the famous quote: “Why 100 authors? If I were wrong, then one would have been enough!”

  11. Jon

    It advances by the rest of the scientific community reading those papers and trying to falsify them.

    Over the past few decades, there have been no resources dedicated to trying to falsify these research results? Are you for real?

  12. Sorbit

    -There were once “heaps of data” in support of Newtonian mechanics, and only the Michelson-Morley experimental result – a single null – against

    Climate change is not theoretical physics. Things were much simpler then. Either the speed of light was constant in all directions or it was not. A single experiment could decide this. I cannot imagine a single number overturning the consensus on climate change, just as I cannot imagine a single number validating it.

  13. Nullius in Verba

    #11,

    Yes, I’m for real.

    #12,

    Climatology is a branch of physics. But I agree that it’s a complicated one. But my point was that you cannot simply take the number of experts, the number of papers published, or the number of megabytes of data as being the definitive criterion on which science judges between hypotheses. Would you agree with that?

  14. gillt

    Nullius in Verba:
    1. If you’re saying it’s for the same reasons, then you’re forcing an equivalency, not an analogy. And hypotheses are NOT truth propositions and hypothesis testing is NOT the same as repeating a truth proposition: failed analogy, crappy logic.
    2. Irrelevant. Doesn’t mean you understand or appreciate its import.
    3. Sorry, but peer review doesn’t just magically end when your paper is published. Claims are always tentative. Everyone knows this. Peer review includes other scientists reading and testing the claims in your paper. If they fail to reproduce or otherwise challenge the results, they submit–often in the same journal!–and it gets published if it’s good science. Your paper? No longer cited, your ranking drops, you lose status. Science advances! All peer review!

    You admitting that no scientist rejects the peer review process exposes your false conflation in your first comment.

    Peer review is absolutely a vital part of increasing accuracy, which is preferable to saying “scientific truth.” You’ve also failed to explain why it’s a bad proxy for expertise. Clearly, if you publish shoddy work, you’re not going to be considered an expert in the field.

    Heaps of data and heaps of papers are roughly equivalent in this context, in the context of AGW.

    This is not the forum to discuss the data. Complain over at RealClimate if you have a problem with the data.

    Scientific methods and accuracy advance, so historical comparisons are sketchy at best. I’ll happily reject it unless you offer something more compelling than “Well, it’s happened before, so…”

  15. Jon

    Right, Nullius, no one with any money to throw at research would be interested in someone falsifying those results. And no one in the sciences trying to make their careers would be interested in falsifying them either. Further, war is peace. Freedom is slavery.

  16. Nullius in Verba

    #14,

    “Peer review includes other scientists reading and testing the claims in your paper.”

    Excellent! A point I’ve made here before. I quite agree. But that’s not how the phrase is commonly used – a lot of people have been using it to describe peer review by journals, and since you used it in the phrase “peer reviewed or refereed” I assumed that was the way you were using it here too. There was nothing to suggest otherwise.

    You say “Clearly, if you publish shoddy work, you’re not going to be considered an expert in the field.” That’s the way it’s supposed to work, yes. But it depends on whether anybody checks it. Since in a number of cases obvious errors have been found that could never have passed a review, and in others the data and methods required to replicate a result had never been published until some sceptic came along and asked, it’s certain that a lot of these results do get published and cited without being checked. I alluded above to the HARRY file, which describes software that forms the basis of a “flagship” gridded data product, associated with several published papers, but which, as even the most cursory examination of the file makes clear, could never have been checked. Not even CRU was able to replicate the output!

    We’ve all read the HARRY file. And we’ve noticed that none of these attempts to defend the consensus offer any answers. If peer review is working, how were results like HARRY or the Hockeystick possible? If it’s not, how can we have any confidence in the other claims?

    #15,

    “Right, Nullius, no one with any money to throw at research would be interested in someone falsifying those results. And no one in the sciences trying to make their careers would be interested in falsifying them either.”

    A number of people without any money to throw at research have already falsified them. And we can see the result of that effort in the fact that you don’t even seem to know about it. Far from being lauded as scientific heroes, as you suggest they would be, they are dismissed and vilified. Conspiracy theories are made up about them, their careers and reputations wrecked. What went wrong?

  17. Jon

    A number of people without any money to throw at research have already falsified them. And we can see the result of that effort in the fact that you don’t even seem to know about it

    I assume your argument is that no one followed up on these old climate scientists’ work, because they were afraid they’d get a professional kneecapping? And you’re saying we’re not hearing about any of this because the policing was so successful that it managed to infiltrate every single one of these independent, professional scientific organizations? I think we’re still in conspiracy-theory territory here, Nullius… I’d ask you to get specific instead of engaging in hearsay, but then we’d probably be getting into a one-way hash argument of the sort a certain species of internet commenter likes to engage in…

  18. Nullius in Verba

    “I assume your argument is that no one followed up on these old climate scientists’ work, because they were afraid they’d get a professional kneecapping?”

    You assume incorrectly. I have no idea why they didn’t follow up on it. I offer no theories. All I can say for sure is that they clearly didn’t.

    Do you want to get specific? OK, then. In the HARRY file, Harry says he is seriously worried that “our flagship gridded data product” is calculated by an obviously incorrect method, rendering part of the output meaningless. Why was this not detected by the scientific community at the time? Why was it not reported at the time Harry wrote this, some years later? Or if you’re saying that somebody did check it and found the calculations to be OK, can you tell me where the published explanation of how the calculation was done and why it is OK is to be found? Thanks.

    I agree about the one-way hash arguments – in fact, I’d say that the public presentation of AGW theory was such an argument. It takes but a moment to say “the IPCC says so” but it takes a lot more work to explain why not. It’s not a good excuse for not answering the questions or addressing the problems, though. Do you really want to try to defend the HARRY approach as the way science should be done?

  19. gillt

    One or two bad papers can slide through peer review, and in turn one or two papers do not a consensus make. That was the point of the IPCC and this PNAS article: to show a consensus. It would therefore be wrong for the authors to point to one or two papers and draw conclusions about ACC. Just as it’s incorrect for Nullius in Verba – short of even one or two papers to make his case – to highlight one or two typos or alleged errors and claim the peer review process is broken or that it’s all a conspiracy.

  20. Possibly we can request Kirshenbaum and Mooney to do one blog post just on the big hairy HARRY Thang, so we can have a thread to hash this one out properly?

  21. There were once “heaps of data” in support of Newtonian mechanics, and only the Michelson-Morley experimental result – a single null – against. But a single number, if it is the right one, can outweigh over 200 years worth of papers written by the scientific experts.

    This is an absurd argument. Newtonian mechanics is still supported by “heaps of data” today and is taught in graduate-level physics courses. His equations are used by all manner of science and engineering disciplines and helped get us to the Moon. His theories describe the phenomena he was studying with great accuracy. The fact that Newton’s equations have been modified in light of experiments which used methods and instruments that were completely beyond the capability of 17th-century physics is a testament to the value of modern science, not evidence that Newton and his colleagues bungled it for 200 years.

    If global warming describes the Earth’s climate with anything like the precision that Newton’s laws describe physical mechanics, then we’d better pay very close heed to the warnings of today’s climatologists.

  22. Brian Too

    Nullius in Verba,

    I suggest to you that you’re fighting for a hill you don’t want to die upon.

    Yeah, sure, the scientific method has flaws. Counting papers is an imperfect proxy for scientific credibility. Counting citations is much better but probably still flawed.

    So what? It’s quantitative data and a whole lot better than the sceptics’ usual rubbish. “Oh, it’s been cold out all month, that AGW stuff can’t be real!” At least this is an effort to be scientific and objective. And of course many of the AGW skeptics will be against it for exactly that reason. “Those scientists, they live off the government dime and they’ve been LYING to us for years! Only us right-thinking, real-world, outsider types with a camera watching us and a penchant for hearing our own voice will tell the TRUTH!”

    I have my own favourite examples where science failed, sometimes for long periods of time. I still recognize it as the best way to describe the physical world ever devised.

    Consider the old newspaper dictum: Dog bites man is not news. Man bites dog, now THAT’S news!

    This is why the policy of bringing ‘both sides of the argument’ to a story can be dangerous. Sometimes there is no argument of any merit. And popular media have an attraction to writing up a fight in order to gain more readers/viewers. It’s also superficially fair and uncontroversial.

    So, to restate, just because counting citations is imperfect, it does not follow that it has no value. The Google PageRank algorithm uses a conceptually similar technique and it’s notably successful.
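
The PageRank comparison can be made concrete with a toy sketch. What follows is a minimal power-iteration PageRank in Python (illustrative only, not Google's production algorithm): a node's rank flows from the ranks of the nodes pointing at it, much as a paper's standing flows from the papers citing it, weighted by who those citers are.

```python
# Minimal power-iteration PageRank (toy sketch, not Google's algorithm).
# links: dict mapping node -> list of nodes it links to (i.e. cites).

def pagerank(links, damping=0.85, iters=50):
    nodes = list(links)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}   # start uniform
    for _ in range(iters):
        new = {node: (1 - damping) / n for node in nodes}
        for src, outs in links.items():
            if outs:
                share = damping * rank[src] / len(outs)
                for dst in outs:
                    new[dst] += share          # rank flows along links
            else:
                # dangling node: spread its rank evenly over all nodes
                for dst in nodes:
                    new[dst] += damping * rank[src] / n
        rank = new
    return rank

# A is "cited" by both B and C, so it ends up ranked highest.
ranks = pagerank({"A": ["B"], "B": ["A"], "C": ["A"]})
```

A node cited by many well-cited nodes outranks one with the same raw link count from obscure nodes, which is what makes it a weighted, rather than flat, citation count.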

  23. Sean McCorkle

    Nullius:

    Who’s Harry? What is his file exactly? What is the flagship gridded product?

    also: in your example in #10, Newtonian mechanics still works fine for v << 3e5 km/sec, i.e. for most things we encounter in everyday life. It wasn’t really overturned by relativity (and quantum mechanics for that matter); rather, it was found to break down in the new regimes of speed and scale that were opening up around the turn of the century.

  24. Marion Delgado

    The nonsense from the denialists in response to the study apparently starts here in the comments.

    This criterion (peer reviewed publications+citations) was around long before denialists and shills were able to politicize global circulation models and the study of AGW and produce doubt and manufacture controversy.

    It’s yet more evidence that facts and reality aren’t even on the table when someone seriously questions applying a generations-old, bread-and-butter standard for evaluating scientific expertise.

    Put this with the other evidence of scientific illiteracy – “Science doesn’t work by consensus. Consensus is not science.”

    Science works by consensus, and consensus is science. If you want to know “what biologists think” about something (especially something not yet purely textbook science), you find out what a majority of biologists think. And if a biologist is outstanding and has a superior theory about it, it will eventually be adopted, and the proof of its validity will be precisely how much of the scientific community it can convert.

  25. Nullius in Verba

    Gurdur,

    A truly excellent idea! But I doubt it’s going to happen.

    Jinchi,

    I agree that climatology is not in the same league of accuracy as Newtonian physics. But that’s not a comparison that helps your case much. I was talking about a different aspect than the accuracy of the theories, though; one not specific to any one science. And that is that when the first papers of a scientific revolution are published, they stand against the consensus and a much larger pile of existing work by the experts. The judgement is made not on the basis of numbers for or against, but on whether the arguments work.

    And there’s a lot of meteorology and climatology that I have a lot of respect for. The AGW question is a refinement of the details, and I’m certain that the vast majority of climate science will still stand even if AGW is overturned.

    Brian,

    If that’s all you’ve seen of sceptics and sceptical arguments, then I’m not surprised that you’re dismissive. I would be too.

    I don’t deny the usefulness of heuristics. A lot of the formal fallacies like argument from authority, correlation implies causation, ad populum, and so on are so common precisely because they do often work. Sometimes there is no argument of any merit, and sometimes there is. How can you tell if you never look at the arguments?

    I also believe that science is the best way to describe the physical world ever devised. I believe that it is so because science does not dismiss challenges on the basis of their orthodoxy or the eminence of their authors, but insists on knowing what the arguments and evidence actually are. It corrects its own mistakes, and strengthens confidence in its conclusions by subjecting them to continual challenge.

    And I think the best antidote to media misinformation is a genuine understanding.

    Sean,

    You’ve never heard of Harry?! And there was me thinking you would all know about the most famous of the Climategate files, to have come to such a solid judgement against it… I do apologise.

    In the Climategate archive there are two folders, one containing the emails, and the other a large number of documents. The HARRY file is at the top level of the documents folder.

  26. TB

    Nullius
    You also refer to the hockey stick chart; as a graphic artist, I understand and appreciate the neat “trick” he did to join the information – and he fully explained what he did with the chart.
    So while I don’t understand what I’m looking at with that HARRY file you referenced, the fact that you compare it to the hockey stick chart incident doesn’t bode well for your argument. I’d enjoy a thread discussing this, I think I’d learn a lot.

  27. Sorbit

    @Nullius: True, but looking at this thread, I think you are indulging in one of the most common fallacies, namely, peer review is not perfect = peer review is worthless and cannot be held up as any kind of evidence. There’s a big gap between ‘imperfect’ and ‘of no use’.

  28. Nullius in Verba

    TB,

    I was thinking of a different Hockeystick chart incident. You’re thinking of the “hide the decline” incident, which comes in several different flavours. (i.e. there were at least three such charts, with three different methods.)

    I’d be happy to discuss it. It’s a much simpler and easier example than the one I was thinking of. But before I do, would you be willing to say what your understanding is of what he did?

  29. Nullius in Verba Says:

    “Gurdur,
    A truly excellent idea! But I doubt it’s going to happen.”

    If you actually care about the subject for the subject’s sake, rather than trying to squeeze specious points out of it, then make a polite request to Kirshenbaum and Mooney to do such a blog post to handle that one specific subject, the HARRY Thang, ffs.

    This isn’t rocket science. Instead of casting aspersions about it not going to happen, just bloody ask politely, ffs.

    And if you are serious, then make your request via email to them, rather than doing it in-thread where it might not be seen.

  30. Sean McCorkle

    Nullius,

    I went through the text file. Pretty painful, as it reminded me of the many times I was told to obtain, fire up, use and often debug obtuse and god-awfully written legacy code. I feel for Harry.

    Again, who is Harry? Reading the file gave the impression of a grad student or new postdoc or hire going through a very painful, but not untypical, learning curve.

    In the HARRY file, Harry says he is seriously worried that “our flagship gridded data product” is calculated by an obviously incorrect method, rendering part of the output meaningless. Why was this not detected by the scientific community at the time? Why was it not reported at the time Harry wrote this, some years later?

    Um, maybe because he was wrong in that assessment? I often use README files in working directories as logs of what I’ve done to aid replication of the work at a later date. They are full of learning-curve/novice mistakes and do-overs as output is checked and processes corrected. It’s not by any means the same as publication, where there’s a moral imperative to be as correct as possible.

    How many others were involved in assembling and verifying that data set? I’d be surprised if Harry was the sole author. Were the problems addressed and perhaps rectified by the larger group prior to release?

    (PS: BTW Sheril and Chris – thank you SO much for the editing feature! It’s great!)

  31. Nullius in Verba

    Sorbit,

    I agree that peer review is not worthless. Assuming correlation implies causation is not worthless, either. But would you rely on it in making a scientific claim? Would you rely on it, in particular, in saying that somebody else’s claim should be rejected because it didn’t agree/comply with it?

    Gurdur,

    Chris and Sheril write here in support of their own beliefs and positions. I appreciate that they let me argue with them here – there are plenty of others who don’t – but I don’t ask any more than to be heard. If they have read HARRY themselves and think there’s something worth discussing here, then good. But I’m satisfied with what I’ve got.

    Sean,

    ‘Harry’ is generally believed to be Mr Ian (Harry) Harris, a member of the research staff at CRU. Judging from his photo, he isn’t a youngster, but I have no wish to dig any further into his private affairs. He is, in any case, not the one primarily responsible for this.

    Maybe, as you say, he was wrong in that assessment, and about all the other observations peppered throughout that file. (I’m pretty sure he wasn’t, but it would take me too long to explain so I’ll let it go.) Certainly a lot of the methods and features he complains about do sound like bad practice and evidence of an unreliable output to me, but we’ve only got a partial picture here.

    However, my original question was about whether the obvious questions raised by the HARRY file have been answered in all the official investigations, defences, and denials made in response. Can we say, just because the output of this suite of programs has been peer-reviewed, published, and accepted and used by other researchers, that nobody published shoddy work and yet still got to be considered an expert in the field? If there is enough information in the open literature for an outsider to replicate and check their results – to peer review it – why did Harry not use that?

    Do you see what I mean? To someone who can reel off a long list of specific problems down in the details, to someone who does understand a little bit about the science, do you see that fuzzy claims based on ‘authority’, ‘expertise’, ‘consensus’, ‘peer-review’ and ‘thousands of scientists’ without addressing any of the specifics are likely to be unconvincing?

    I’m not asking that you be unconvinced. I’m only trying to explain why a lot of other people aren’t convinced. That they’re not being stupid, or dishonest – that there really is a problem here that isn’t being addressed. And indeed, that by treating them as stupid or dishonest and very obviously not addressing it, their critics are only making the problem worse.

    Maybe it’s just a communication problem, or maybe it’s something deeper. But somebody on your side needs to take the question seriously, investigate it properly, and find out.

  32. Sean McCorkle

    Nullius,

    I did not mean to belittle the author of the file, nor his expertise. In fact, the commentary within reflects a commendable concern over data integrity.

    Concerning bad practices and methods: Good science is exploratory; it is not engineering. The seamy underbelly of the process is that it is messy. The paths to discoveries are often full of setbacks and half-baked methodologies. Scientists typically don’t have the best programming skills, although there are exceptions. The same goes for statistics and other analytical methods. Programming and analytical methods are seen as a means to an end: the possibility of a breakthrough in understanding. More often than not, in my experience, ensuring the correctness of results means confirming via entirely different lines of reasoning and approaches (as many as possible) rather than fine-tuning one’s statistical methods (that is, if those methods don’t seriously alter the numbers), although, again, there are exceptions. Typically, software bugs are only considered bad if they affect the results at hand – hey, if it doesn’t change our results, no harm done. (This attitude still drives me out of my mind.)

    The reason that I ask who the author is, is to try to establish the context of his work in the bigger picture. The scientific groups I’ve worked in tend to be highly argumentative, usually in a friendly way. Often, in group discussions, collaborators play the adversarial role of potential critics, raising objections they may not themselves believe to be true, just for the purpose of bolstering the idea being proposed. In informal discussion, scientists backtrack and go back and forth on things all the time.

    I worry about something like this: the possibility that the hacker who posted this file selected it for its high content of raised issues, and did not post other material which may have responses to those very issues. Or even more likely, many, most or all of the questions were addressed, to the author’s satisfaction, in discussions, which were never recorded, even by him. Thus the public is poorly served by out-of-context material.

  33. TB

    Nullius
    Explain what chart you’re referring to specifically, please.

  34. Nullius in Verba

    Sean,

    Concerning bad practices and methods: I can’t speak for scientists in general, but the way we do it here is to start with the rapid exploratory methods, play with it until we understand what works and why, and then do it all again properly with the full formal experimental/scientific controls. Bench work and field work recorded in log books, software under configuration control, and written to be replicable, verbose (in the sense of reporting all errors and unexpected conditions), and with errors and uncertainties quantified or at least known. Then when you’ve got everything working, checked, and are ready to publish, you archive the state of all the data and software somewhere safe.

    Replicability is a primary virtue in science. The easiest way is to set everything up to run automatically from scripts. But if you can’t, because of the third-party software you’re using, or because it requires human pattern recognition, then you document the manual inputs needed to get the required output. It’s better to get it wrong than not to be able to replicate it!
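    A minimal sketch of the script-everything workflow described above: record the exact inputs a run consumed, checksum them, and write a manifest next to the result so a later run can be verified mechanically. All the names here (replicable_run, the trivial mean standing in for the real analysis) are hypothetical illustrations, not anyone’s actual pipeline.

```python
import hashlib
import json
import sys
from pathlib import Path

def sha256_of(path):
    # Checksum an input file so a later run can verify it used the same data.
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def run_analysis(values):
    # Stand-in for the real analysis step: here, just a mean.
    return sum(values) / len(values)

def replicable_run(input_path, manifest_path):
    # Run the analysis and record everything needed to reproduce it exactly.
    values = [float(x) for x in Path(input_path).read_text().split()]
    result = run_analysis(values)
    manifest = {
        "input": str(input_path),
        "input_sha256": sha256_of(input_path),
        "python_version": sys.version.split()[0],
        "result": result,
    }
    Path(manifest_path).write_text(json.dumps(manifest, indent=2))
    return result
```

    Re-running the script against the archived input and comparing checksums and results is then mechanical, which is the point: replication no longer depends on anyone’s memory of what was done.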

    And then if anyone asks any questions about what was done, you can set everything up the way it was and see. And if any other researcher wants to replicate it, you can just give them the archive. And I can tell you, the knowledge that you might have to do so does wonders for the quality of your science!

    It is, admittedly, not that clean in practice. Things are missed. People take short cuts. Very commonly, failures don’t get recorded. But those are generally seen as problems, not standard practice.

    There is certainly a possibility that whoever released this data (we don’t know if it was a hacker or a leaker) was selective. That’s why I would certainly give CRU the opportunity to explain. As prima facie evidence it is damning, and would require some strong evidence to overturn – it’s one of many things the subsequent enquiries ought to be really digging in to. If there is other material with responses to these very issues, then surely the best thing for them to do would be to publish it? Or at least, to tell people that it exists, and to extract/explain the substantive parts?

    This is not the first time we have come across this problem. Harry’s complaints are reminiscent of McIntyre’s commentary when he first tried to replicate the MBH98 Hockeystick. We’ve seen it too with the GISTEMP code, released only after several errors were reverse-engineered from the output. The open-source community are now working on replicating GISTEMP (Clear Climate Code). Many are of the opinion that this sort of thing is rife in climate science, and that this is the real reason why climate scientists are so uncooperative about releasing methods and data to sceptics: that while they tell themselves in private that it is acceptable and normal, they would actually be ashamed of it in public.

    For minor academic studies in obscure journals, I don’t suppose it matters. I regard it as an unfortunate waste of taxpayers’ money, and an erosion of scientific standards, but we see plenty of both of those.

    But we’re told that climate change is the most important issue facing mankind, and that we’ve got to change the entire world economy (for the worse) on the basis of these predictions. The accumulated consequences of the changes amount to trillions of dollars. We’re talking about billions of lives, here. This is several orders of magnitude more important than the space shuttle, or passenger jet avionics, or nuclear reactor safety code! This is not a game. We want to know that it has been done right.

    Are ‘academic’ software standards still appropriate, do you think?

    TB,

    ‘Referring to’ where?

  35. You’ve never heard of Harry?! And there was me thinking you would all know about the most famous Climategate files, to have come to such a solid judgement against it

    No, believe it or not, many of us couldn’t care less about the stolen emails. You’re talking about fragments of conversations, most out of context and many over a decade old. Every instance of a “smoking gun” email I’ve seen from the skeptic community falls apart on the slightest investigation. We have no interest in browsing through 10,000 more just to satisfy your concerns on that point.

    ‘Harry’ is generally believed to be Mr Ian (Harry) Harris

    And you don’t even know who Harry is.

    Anybody who has read the HARRY file or any of the rest of it will recognise that none of the substantive points have been, or are being answered

    Here’s what skeptics can do to prove to us that there is something there. Start answering the questions yourselves. You’ve supposedly been shown the Achilles heel of climate research and yet none of the skeptic scientists have made the slightest effort to perform the analysis that brings the whole thing crashing down. This would be the easiest graduate research project in history, if there were any “there” there.

  36. Sean McCorkle

    Nullius,

    Are ‘academic’ software standards still appropriate, do you think?

    It would be a nice thing, but since most scientists are not trained along those lines (beyond a few programming classes) you’d be hard-pressed to find any scientific endeavor that would pass such a test.

    Scientific groups suffer from some of the same problems as other endeavors when it comes to legacy code: i.e. because the old code works (kind of), there’s a general disincentive to screw with it. The old code was written in the old days before modern languages, scripting, and maybe even concepts of good programming technique, so even if you want to clean it up, it’s nearly impossible to understand (i.e. thousands of lines of Fortran IV spaghetti).

    There are some other hurdles specific to science, however. As I said, a typical scientist has at most one or maybe two semesters of programming classes. Many are self-taught from O’Reilly books etc. Furthermore, programming is not their main job description. They’re hired to do research and publish papers. And there’s often a clock running – the postdoc appointment or the grant ends in 3 years, for example. Usually the researcher is expected to write just enough software to do the research. A person can actually get into trouble with their boss if they spend what is perceived to be too much time making a robust program for general use.

    And here’s the big one: in my introductory programming class way back in college, we were taught to “define the problem” before we start coding. Well, in science – especially good science – the problem is never quite defined until you write it up for publication. Science, in its very nature, is unplanned. The goal posts shift AS you write – faster than you write, even. Ideas and hypotheses are tested with software, and then code is subsequently modified for the new ideas – maybe the old code is abandoned, maybe not – maybe it’s modified badly to try to adapt. Layers and layers and layers of this can accumulate before you know it. And you have to be able to do this at the speed of thought. And documenting the code as all this is happening? It can be a nightmare. (BTW this is a really good and relevant PhD comic)

    The open-source GISTEMP project is heartwarming. I hope it’s a big success and becomes widely adopted. However, what makes it achievable is that it has a clear goal – do the things that the old GISTEMP did (with some improvements, maybe). The authors have the advantage of a couple of decades of hindsight in the definition of the problem.

    If you want to hold climate software to “critical” level standards, then you need to make it an outright mission of the institute, and then fund it. That’s the way to get the appropriate level of CS expertise. Hire software engineers, and then tap some fraction of the researchers’ time to make sure the programmers don’t go off on the wrong track. (That in itself is more difficult than you might think – I’ve bounced around different research fields and they ARE different cultures, with different vocabularies and different ways of thinking. Lots of opportunity for miscommunication, even between well-meaning people.)

    The accumulated consequences of the changes amount to trillions of dollars.

    I don’t buy this. Switching to new economies would be like any other investment or series of investments; we could easily end up generating more than enough revenue to pay for the switchover.

  37. TB

    OK, Nullius, I can’t take you seriously at all now. In comment 28, you said “I was thinking of a different Hockeystick chart incident. “

  38. Nullius in Verba

    Jinchi,

    “No, believe it or not, many of us couldn’t care less about the stolen emails.”

    Oh, I believe it. None of you care. None of you know. None of you are interested. None of you will even look. Because you already know all the answers – the truth – and anything that contradicts them must therefore by definition be false.

    That’s OK with me. It just makes me sad.

    “Here’s what skeptics can do to prove to us that there is something there. Start answering the questions yourselves. You’ve supposedly been shown the Achilles heel of climate research and yet none of the skeptic scientists have made the slightest effort to perform the analysis that brings the whole thing crashing down. This would be the easiest graduate research project in history, if there were any “there” there.”

    Oh, it’s already been done, to the extent that there’s anything much to do. You’re just ignoring it. Or dismissing it. Or accepting the slightest pretext to claim it’s been debunked.

    But the winds of opinion are shifting, and a lot of people suddenly aren’t ignoring it. You’ve seen the opinion polls. Is that something that you think you ought to care about?

    Sean,

    “It would be a nice thing, but since most scientists are not trained along those lines”

    Then they should be. It’s a necessary part of the job. What do universities have Computer Science departments for?

    “you’d be hard-pressed to find any scientific endeavor that would pass such a test.”

    Ummm.. I think I just described one?

    This is a point that a number of sceptics find quite surprising. McIntyre, for example, comes from a mining statistics background, in which not only do they do it properly – tens of millions of dollars of the customer’s investment can ride on getting the geology right – but it’s actually illegal to publish results without due care and attention. McKitrick comes from an econometrics background, where it is a routine requirement, and one actually enforced, that academic papers have to fully archive data and methods. Other people from engineering and aeronautics and pharmaceuticals backgrounds have commonly expressed astonishment and incredulity that this is not universally standard scientific practice. It seems like basic professionalism, to many of us.

    Do you think that scientific endeavours like the writing of Mathematica or R fail this test? Do you really think the LHC runs on spaghetti code from out of the ark? That’s a scary thought!

    I’ve even been told off in the past for daring to suggest that climate scientists would behave in such an unprofessional manner. As if it was insulting to even suggest it, or that it would get past peer review if it ever happened.

    It’s really not very difficult, and even ancient computer languages are perfectly capable. It is a chore, like tidying your room, and I know that’s something students hate to do. But really, it does make it an awful lot easier to find things later, and you don’t have problems months later with the cleaner finding that stinky thing hiding in the fridge. Eternal students, eh?

    “If you want to hold climate software to “critical” level standards, then you need to make it a outright mission of the institute, and then fund it.”

    I agree! Absolutely! Yes! And I’d say they ought to have done this at least ten or fifteen years ago, when it first became important. People seem to have been spending an awful lot of money on ‘dealing’ with AGW – I think Al Gore alone promised $300m? It would have been useful to spend a bit more of that on doing the science properly. Yes?

    “I don’t buy this. Switching to new economies would be like any other investment or series of investments; we could easily end up generating more than enough revenue to pay for the switchover.”

    Then it would be a paying proposition and people would do it without having to be forced. They’d do it because they’d want a share of that revenue.

    But the reason they don’t do it, and they have to be forced through taxation and regulation, is that they all know it isn’t a paying proposition, it’s a huge loss-making one, unless you’re one of the few who can get government subsidies for building stuff nobody wants at the price.

    Many investments fail, and that’s just the ones that made a convincing enough case to get funding. The number of proposals seeking investment that didn’t get funding is even larger.

    I’m all in favour of people choosing to do the research, and developing the technology, and making lots of money off it. I’m a firm believer that it will happen anyway within the next fifty years or so. But it’s not ready yet. Forcing it won’t work.

    TB,

    I also mentioned the three charts involved in “hide the decline”. I had got the impression in the previous comment that this was the one you wanted to talk about, but it wasn’t clear. And several of the other things I said could be interpreted as referring to charts. Your question was a bit short, no?

    In that particular case, I was thinking of MBH98/99.

    Does that help?

  39. TB

    Nullius, no, I’d like a direct link please. I don’t want to take the chance that you’re thinking of a different mbh98/99.

  40. Nullius in Verba

    TB,

    There is only one MBH98/99. It’s one of the most famous charts in the world. Have you read Montford’s book The Hockeystick Illusion yet? That MBH98/99.

  41. Sean McCorkle

    Nullius,

    Then they should be. It’s a necessary part of the job. What do universities have Computer Science departments for?

    So if it takes 4 years for CS BS/BA, then that undergraduate courseload should be added to a 4 year science BS/BA, bringing the total to 8 years for a science degree?

    Ummm.. I think I just described one?

    I’m sorry, I guess I missed your example entirely. You follow with some other examples, which actually make my point in that they’re not science endeavors – that is, not basic research. Perhaps they’re applied science, which differs from basic in the important way that applied endeavors usually have well-defined goals. Mining, in particular, is not science. It is a business. It may rely heavily on the results of the science of geology and it may hire scientists, but it is not a research endeavor. While Mathematica is used by scientists, it is a business product marketed and sold to the scitech community. I daresay Wolfram Research’s R&D is largely CS-related, no? R was a rewrite and extension of Bell Labs’ S, both largely developed by programmers who were also statisticians – applied mathematicians. (There are lots of R packages written by other researchers, but my experience with them is that your mileage may vary.)

    LHC machine- okay now this is interesting. Accelerator machine groups are different from experimenters. Machine groups have to deliver a beam and are much more engineering-oriented than the experiments. They DO have critical-level software requirements. However, the experiments are very different animals from machine operations. They have pretty good budgets, and I think one or two might have some pretty good software control (we’re talking big groups here – thousands of PhDs) but I’ve had some awful, personal experiences otherwise. (I’ve done real-time data acquisition programming at CERN).

    It’s really not very difficult, and even ancient computer languages are perfectly capable. It is a chore, like tidying your room, and I know that’s something students hate to do. But really, it does make it an awful lot easier to find things later, and you don’t have problems months later with the cleaner finding that stinky thing hiding in the fridge. Eternal students, eh?

    When the roof of your building is leaking, your hood area is rusting, and the computer A/C is down, all because of budget cuts, and your advisor is beating you over the head to get the next grant request/publication out before the soft money runs out and you’re out on your ear, somehow documenting or rewriting the 95% of the code you wrote but ended up not using doesn’t seem like the biggest priority, no.

    Other people from engineering and aeronautics and pharmaceuticals backgrounds have commonly expressed astonishment and incredulity that this is not universally standard scientific practice.

    If you are saying we should raise basic research funding to the monetary levels of the pharmaceutical, aeronautics, and other engineering industries, I say AMEN BROTHER! How many orders of magnitude increase would that be? Let’s see… just one Airbus 380 goes for over 300 million dollars – that would almost pay for a small accelerator right there! Two would cover a national lab!

    Then it would be a paying proposition and people would do it without having to be forced. They’d do it because they’d want a share of that revenue.

    There’s so much wrong with this inference I don’t even know where to begin. Maybe to start with, how about general business timidity toward long-term, high-risk directions. Jeez, we put men on the moon almost 50 years ago, and there’s still hardly any investment out there beyond a few communications satellites. Companies that generated some incredibly creative output (Xerox PARC, Bell Labs) didn’t make a killing from their own inventions, but others who came along later did, enjoying the fruits of Xerox’s and Bell’s labors. There are lots of reasons for companies not to take risks.

  42. Nullius in Verba

    “So if it takes 4 years for CS BS/BA, then that undergraduate courseload should be added to a 4 year science BS/BA, bringing the total to 8 years for a science degree?”

    No. I’d say about two days would do it. You could probably teach it in a morning, but you would need some practical demonstration to get the point across. Any Computer Science department could teach it.

    Businesses do science. They often do research too. And they are most certainly familiar with time/funding pressure! But if you want to look at it that way, these compilations of climate data are not exactly pure research, either. They’re just an application of statistics – of a form already well known in econometrics and other sciences.

    The pure/applied distinction is no excuse for shoddy work, anyway. Quality is quality, whatever the reason you’re doing it for.

  43. Sean McCorkle

    Nullius, I should add that I agree with you, deep down, on the principle that some scientific software – specifically critical or otherwise important programs – should be written, or at least scrutinized, by experts. Also, public disclosure of data and software is very important and should be a standard grant requirement. I’ve spent my entire career at the interface between science and programming, in different fields. During the early days, I argued with a lot of scientists trying to get them to adopt new languages, formal methods, etc., and eventually gave up when I realized that there was a substantial values disconnect between the fields of CS and the various sciences.

  44. Sean McCorkle

    No. I’d say about two days would do it.

    Forgive me if I don’t take you seriously on that.

    But if you want to look at it that way, these compilations of climate data are not exactly pure research, either. They’re just an application of statistics – of a form already well known in econometrics and other sciences.

    That’s a good point, actually.

  45. Oh, I believe it. None of you care. None of you know. None of you are interested. None of you will even look. Because you already know all the answers – the truth – and anything that contradicts them must therefore by definition be false.

    That’s OK with me. It just makes me sad.

    What makes me sad is the number of climate denialists who can cite chapter and verse from a series of stolen emails, but never bother to frame anything as a scientific question. There must have been a hundred comments on Chris’s blog that could be summed up as ‘Just read the emails!’ as though some revelation would strike simply by repeating the mantra. And when anyone challenged their interpretation of the emails, they responded with indignant outrage.

    I’m not going to try to read your mind as you try to read Harry’s mind. If you’ve got an actual point, why don’t you simply state it instead of sending us off on a wild goose chase to some climate skeptic website.

    Oh, it’s already been done, to the extent that there’s anything much to do. You’re just ignoring it. Or dismissing it. Or accepting the slightest pretext to claim it’s been debunked.

    Again. I still have no idea what you’re even complaining about. Now you’re telling me your point has been proven. Why don’t you start by pointing me to this cracker-jack analysis instead of sputtering about my lack of interest in wasting my time trolling through the CRU email dump.

    You’ve seen the opinion polls. Is that something that you think you ought to care about?

    Not as a scientist, I don’t.

  46. Nullius in Verba

    Sean,

    Thank you. You have cheered me up, with that.

    Jinchi,

    I’ve asked all sorts of scientific questions. And there must have been a hundred comments that could be summed up as “just read the IPCC reports!”, “just read the peer-reviewed literature!”, or worst of all, those ones sending me off on a wild goose chase to some non-scientific anti-sceptic polemic website. As if I was some sort of idiot who didn’t know anything about the debate.

    I asked one above. Harry said that he was seriously worried that “our flagship gridded data product” was calculated by a wrong method, rendering a significant part of the output “meaningless”. Was he wrong? I don’t necessarily expect anyone to weigh in with a technical exposition on interpolation algorithms, but I would expect somebody to be able to comment on how the extended peer-review process would be expected to handle this. (Sean made a good attempt.)

    We mentioned above the “hide the decline” charts – this is another scientific question. If your temperature reconstruction method doesn’t work, is it scientifically and statistically valid to cut off the bits of your reconstruction that don’t fit the real temperature, and add in the real data?

    We’ve also been skirting around the MBH98/99 question above. When you reconstruct temperature indirectly from other records, it’s useful to keep back part of the real temperature data to test the accuracy of the reconstruction. If the reconstruction does not correlate with what actually happened, you probably have a case of spurious regression. The standard way of measuring this is by means of the squared Pearson correlation coefficient, sometimes called the r-squared or R2 test. Did the MBH98/99 verification pass the R2 test? Did Mann know the result at the time? I’ve asked these questions before, and didn’t get an answer.
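    For readers who haven’t met the statistic: the verification step described above holds back part of the instrumental record and computes the squared Pearson correlation between it and the reconstruction over that withheld period. A plain-Python sketch (my own illustration, not code from MBH98 or from its critics):

```python
def r_squared(observed, predicted):
    # Squared Pearson correlation between the withheld instrumental data
    # and the reconstruction over the same verification period.
    n = len(observed)
    mean_obs = sum(observed) / n
    mean_pred = sum(predicted) / n
    cov = sum((o - mean_obs) * (p - mean_pred)
              for o, p in zip(observed, predicted))
    var_obs = sum((o - mean_obs) ** 2 for o in observed)
    var_pred = sum((p - mean_pred) ** 2 for p in predicted)
    return (cov * cov) / (var_obs * var_pred)
```

    A value near 1 means the reconstruction tracks the withheld data; a value near 0 over the verification period is the spurious-regression warning sign referred to above.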

    You see? We have lots of scientific questions. Do we have any scientific answers, though?

  47. Harry said that he was seriously worried that “our flagship gridded data product” was calculated by a wrong method, rendering a significant part of the output “meaningless”. Was he wrong?

    Again, you’re asking me to read Harry’s mind. Who is Harry? What is the wrong method he’s complaining about? Does it actually give the wrong answers? Did anyone answer Harry’s question satisfactorily? I don’t know, and apparently you don’t know either, because you’re only looking at fragments of a conversation.

    Now if there is something fundamentally wrong with the “flagship gridded data product” it should be straightforward for some skeptic to take a look at that product, look at the methodology and identify the flaw Harry was talking about. You still haven’t bothered to show me that anyone has done this, so we’re arguing hypotheticals here.

    As to your question about MBH98/99, the questions you’re asking have been addressed repeatedly, in particular here ( http://www.realclimate.org/index.php/archives/2004/12/myths-vs-fact-regarding-the-hockey-stick/ ) and most recently here ( http://www.realclimate.org/index.php/archives/2010/07/the-montford-delusion/ ) .

    Notice that your comments tend to fall under MYTH#1 The “Hockey Stick” Reconstruction is based solely on two publications by climate scientist Michael Mann and colleagues (Mann et al, 1998;1999). MBH98/99 are not the only studies to have identified the “hockey stick” pattern. It has been reproduced and verified with additional data several times now.

  48. Nullius in Verba

    Jinchi,

    Well, I replied to that, but it seems to have disappeared. Not sure why. I’ll try again.

    First paragraph – this isn’t a fragment of a conversation. All of these questions ought to have been answered by the enquiries.

    Second paragraph – if you read the rest of the file, and bear in mind that we don’t have even the level of access that Harry does – you’ll see why it is impossible for outsiders to “look at the methodology”. Not even the CRU’s own research staff can determine the methodology.

    As it happens, I do have a fair idea of what he’s talking about, and I’d tend to agree with him. But it’s really beside the point. To have Harry’s concerns left hanging unanswered like this is problem enough. The question is: what has the scientific community done about this? What should they have done? How is extended peer-review supposed to operate in a case like this?

    Third paragraph – I’m already aware of both of those, and already knew that the arguments in them had been shown to be incorrect years ago. But more to the point, neither of them answer this specific question.

    Fourth paragraph – I didn’t say “the hockeystick pattern” I said the MBH98/99 Hockeystick, the famous one. It was published. It got past journal peer-review. It got past the IPCC review. It got past the paleo-climate chapter’s lead editor (for obvious reasons, given who that was). It got past the rest of the scientific community, the journalists, governments, activists for years subsequently.

    It’s no use trying to distract attention away from MBH98 by offering me a set of alternative targets. Once we’ve settled what’s what with MBH, then we can move on to pointing out the obvious flaws in all the rest. But there’s not much point if we can’t even answer a straightforward scientific question about a specific reconstruction.

    Did the MBH98 reconstruction pass the R2 verification test? Had Mann calculated it at the time?

  49. Not even the CRU’s own research staff can determine the methodology.

    So nobody knows the methodology, but you’re absolutely certain that it’s wrong.

    Suffice it to say that (a) I think you’re absolutely wrong on the first point, and (b) you apparently have no evidence to claim that there was anything for peer review to catch. Let me know when you can answer the question: what is wrong with the methodology used to calculate the “flagship data product”?

    I didn’t say “the hockeystick pattern” I said the MBH98/99 Hockeystick, the famous one. It was published. It got past journal peer-review. It got past the IPCC review.

    And it’s a result that has held up repeatedly since. Again, what is your complaint here? That they didn’t apply your preferred statistical analysis?

  50. Nullius in Verba

    Jinchi,

    Nobody knows the whole of the methodology. We do know bits.

    I haven’t said that I’m absolutely certain it’s wrong. I’m saying that there’s no evidence to indicate that it’s right.

    Suppose I were to say I have absolute proof that AGW is wrong. You would naturally ask what my evidence for this was. I would reply that nobody knew, but that since you haven’t shown any flaws in my proof we can go ahead and dismiss AGW without further ado. And what I say is true. You haven’t found any flaws in my proof, and neither has anybody else, because nobody has checked it. Granted, one person managed to figure out some bits of it and said he thought the results it produced were “meaningless” but how can we tell if he was right? Since nobody has checked, and that one guy didn’t have a complete understanding, it isn’t really the identification of an actual flaw in my proof, is it? Maybe my proof is right?

    This is data associated with a published, peer-reviewed paper. The scientific method, as I was reminded above, relies on other scientists being able to replicate and check published science to find and remove any errors. But in this case, this is impossible because the methods were not public, were sufficiently non-trivial that Harry spent three nightmare years disentangling the code rather than start again from scratch, and where even Harry said he could not reproduce exactly what was done.

    And a lot of people seem to think this is perfectly normal and nothing to worry about.

    “And it’s a result that has held up repeatedly since.”

    By “result” I assume you have subtly shifted topics to the hockeystick pattern again, rather than the “result” I was actually talking about which was MBH98/99 specifically.

    I’ll ask again. Did the MBH98 reconstruction pass the R2 verification test? Had Mann calculated it at the time?

  51. Chuck Burton

    What seems to be the point is that there is plenty of evidence that climate, worldwide, is warming up, whether as a short-term or a long-term phenomenon. Maybe humans contribute significantly to this warming – maybe not so significantly. What seems to be important is that reducing the degree to which we pollute our environment can only be beneficial to us all. So, let’s get to it.

  52. Suppose I were to say I have absolute proof that AGW is wrong. You would naturally ask what my evidence for this was. I would reply that nobody knew, but that since you haven’t shown any flaws in my proof we can go ahead and dismiss AGW without further ado.

    Actually, that is exactly what we’ve been doing.

    You are arguing that there are problems with the methodology used by climate modelers.
    I am asking you what your evidence is.
    You are replying that nobody knows and expecting me to take your criticism at face value.

    You’ve made an allegation that you haven’t backed up.

  53. Nullius in Verba

    Jinchi,

    In this particular case, I am arguing that the problem with the methodology is that nobody is checking it, and indeed, it is uncheckable.
