For years many in the biological sciences community have been jealous of the existence of arXiv. This preprint server allows researchers to distribute their work widely to all comers. On occasion, when there have been debates about mimicking arXiv for biology, there has been skepticism about the nature of the outcomes (my own rejoinder is that fields where a preprint culture is the norm, such as economics and physics, don’t seem to be doing badly). Now we’ll see if the end is nigh in biological science due to preprints; bioRxiv is live (sponsored by CSHL). The first paper is The Population Genetic Signature of Polygenic Local Adaptation. There’s not much up yet, but there will be.
First, “Is it okay to introduce non-human DNA in our genome?” The premise is false. A substantial proportion of the human genome is derived from viruses. Lateral gene transfer in complex organisms is not unknown, and may sometimes be quite functional (arguably endosymbiogenesis and the mitochondrion are the classic case, but that’s so far back in the past that people aren’t shocked by it). Second, the piece also asks, “Should we biologically enhance non-human animals?” Last I checked, selection was a biological process. Domestication events have radically changed many organisms. The io9 piece spends some time on the possible Uplift of other species, but as a matter of reality coexistence with humans tends to reduce the intelligence of domestic animals (they offload many tasks to us). The narrow exception, though, is the case of dogs. Yes, they are uniformly less intelligent than wolves, but they excel at reading human social cues. We’ve modified them to be our perfect companion animals!
I am old enough to remember card catalogs. They did not make me happy. As a small child I noticed omissions and incorrect classifications so often that for long periods of time I would simply avoid the catalog, and methodically consume books from whole sections of the public library in line with my preferences through tedious manual browsing. I am also old enough to remember when the internet was still primitive in its data organization and storage capacity (i.e., pre-Google, pre-Wikipedia), and the library was the first, last, and best recourse for retrieving data. When Braveheart was released in 1995 I ran down to the local university library to see if I could find more about the protagonist’s biography than was present in Britannica. By chance there was a book available on the life and times of William Wallace, but it was checked out, and there were more than 10 holds ahead of me! This was not an uncommon occurrence in the age before the data-rich internet. The reality is that what I wanted to know about Wallace is probably found in the Wikipedia entry, but then there was no Wikipedia! These are just a few of the reasons that I have little patience for neo-Luddites such as Nicholas Carr. When I read Carr’s “old man” jeremiads I always wonder, “son, were you even around back in my day?”*
The converse situation is also true in regards to experience and familiarity. Most who are enmeshed in the humanities have only a cursory knowledge of science, and a general unfamiliarity with the culture of science (though more students switch out of science to non-science degree programs than the reverse). In most cases I find the ignorance of science by non-scientists sad rather than concerning, but in some instances it does lead to the ludicrous solipsism which was highlighted in books such as Higher Superstition: The Academic Left and Its Quarrels with Science. Though there is often a focus on fashionable Leftism in these critiques, it may be notable that the doyen of “Intelligent Design” has admitted a debt to Critical Theory. The scientist-turned-theologian Alister McGrath positively welcomes post-modernism in his The Twilight of Atheism: The Rise and Fall of Disbelief in the Modern World. The problem is not ignorance of science, as much as the dismissal and mischaracterization which that ignorance can give birth to in the right arrogant hands.
Outreach is a buzz term in academic science right now. Scientists have to publish. And they have to teach. Then there is service (e.g. committees and such). Outreach is now part of the service element. It doesn’t need to be hard or sophisticated. Not only that, outreach can be general (to the public) and specific (to your peers). As an example of what I’m talking about, Michael Eisen’s blog is aimed more toward a broad audience, though on occasion he delves specifically into the science which is the bread and butter of his research. Haldane’s Sieve is more tightly focused on researchers working at the intersection of evolution, genomics, and population genetics. But even it expands further out toward biologists who take an interest in specific evolutionary or genomic questions in their own research (e.g., I have known several molecular biologists who had no idea who was behind Haldane’s Sieve, but had read the site because of an interest in a specific preprint).
This isn’t rocket science, so to speak. Information dissemination is pretty easy right now, and that is theoretically one of the major things which drives science. This should be a great time for scientific progress! Is it? In genomics, yes, though that’s due more to technology than to a more efficient flow of information. With that prefatory comment, I think John Hawks’ recent jeremiad is worth reading, Speak up and matter:
Every few months someone asks me what I use to manage my papers. Stupidly, I don’t use anything. Or I haven’t. Over the past few weeks I’ve been playing around with PubChase and Mendeley. You probably know of the latter, and the fact that it’s been purchased by Elsevier. Elsevier is what it is. Mendeley on the other hand is a firm that I have a positive view of, in part because of their culture of openness and support for the free flow of information, but also due to the fact that I’ve known their head of outreach for ten years. You trust people, not things. Mendeley’s not a charity, and I don’t begrudge them their new resources now that they are under the corporate wing of Elsevier. Whether you’re pessimistic or optimistic about their future, I think caution is warranted.
It’s no secret to people who read this blog that I hate the way scientific publishing works today. Most of my efforts in this domain have focused on removing barriers to the access and reuse of published papers. But there are other things that are broken with the way scientists communicate with each other, and chief amongst them is pre-publication peer review. I’ve written about this before, and won’t rehash the arguments here, save to say that I think we should publish first, and then review. But one could argue that I haven’t really practiced what I preach, as all of my lab’s papers have gone through peer review before they were published.
No more. From now on we are going to post all of our papers online when we feel they’re ready to share – before they go to a journal. We’ll then solicit comments from our colleagues and use them to improve the work prior to formal publication. Physicists and mathematicians have been doing this for decades, as have an increasing number of biologists. It’s time for this to become standard practice.
Some ground rules. I will not filter comments except to remove obvious spam. You are welcome to post comments under your name or under a pseudonym – I will not reveal anyone’s identity – but I urge you to use your real name as I think we should have fully open peer review in science.
Peter A. Combs and Michael B. Eisen (2013). Sequencing mRNA from cryo-sliced Drosophila embryos to determine genome-wide spatial patterns of gene expression.
Please leave comments on Eisen’s post.
Via Haldane’s Sieve.
Three articles which illustrate the difficulty of the sort of science which tackles what Jim Manzi would term phenomena characterized by high causal density. First, the simplest one is the report that extrapolating from some mouse models to human biological systems may be problematic. Anyone who has talked to human geneticists who use mouse models is aware that these inbred lineages can be somewhat particular and specific. Order the wrong mice, and all of your experimental designs might be for naught. So the result is not surprising, but it seems useful to have it documented in such a concrete fashion (though this has been reported in the media before).
Second, a long piece in The Chronicle of Higher Education on the problems in replicating groundbreaking research in the area of priming. This may be a case of a seemingly robust result which turns out to fade into irrelevance as time passes, and it illustrates the fundamental problem of attempting to do science on humans; we’re diverse and protean. I think the jury’s out on this, and we’ll wait and see. Fortunately this probably won’t be an issue we’ll be debating in 10 years, as replications will start to occur, or they won’t.
Over at ScienceDaily there is a report on a new paper on affirmative action and academia, Understanding the Impact of Affirmative Action Bans in Different Graduate Fields of Study. The paper is gated, but the regression model used really doesn’t seem to do much more than confirm intuition. The descriptive details are more interesting and straightforward.
A week ago Keith Kloor had a post up, What Science, Environmentalism and the GOP Have in Common, where he bemoaned the lack of representation of non-whites in these categories. As a matter of fact I think Keith is wrong about science. Even constraining the data set to American citizens and permanent residents, people of Asian ancestry are well represented in many areas of science. But not all sciences are created equal. In 2011 there were 158 doctorates awarded within the category of ‘evolutionary biology’ to American citizens or permanent residents. Of these 135 were non-Hispanic white, and 5 were Asian. In ‘neuroscience’ the respective figures were 742, 535, and 96. In ‘zoology’ 55, 49, and 0. In ‘bioinformatics’ they were 80, 51, and 17. Finally, in ‘ecology’ the breakdown was 330, 300, and 11. If you are involved in academic biology I’m rather sure that these numbers won’t surprise you too much, even if you’d never thought about it. You can even infer these by walking through the posters at ASHG 2012, and seeing how the demographics of the crowds shift.
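The raw counts above are easier to compare as shares. A few lines of Python turn them into percentages; the numbers are exactly those quoted in the paragraph, and the script is just an illustrative sketch:

```python
# Doctorates awarded in 2011 to US citizens and permanent residents,
# using the counts quoted above: (total, non-Hispanic white, Asian).
fields = {
    "evolutionary biology": (158, 135, 5),
    "neuroscience": (742, 535, 96),
    "zoology": (55, 49, 0),
    "bioinformatics": (80, 51, 17),
    "ecology": (330, 300, 11),
}

# Share of doctorates in each field awarded to Asian recipients.
asian_share = {name: asian / total for name, (total, white, asian) in fields.items()}

for name, share in sorted(asian_share.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {share:.1%}")
```

The spread is stark: bioinformatics comes out above 20 percent, neuroscience around 13 percent, while evolutionary biology, ecology, and zoology are in the single digits or at zero.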
We can look at this issue another way. In 2010 US News & World Report listed the top 10 ecology & evolution graduate programs. I went to the faculty websites after typing the university and ‘ecology,’ and then ‘neuroscience.’ Looking at names, and sometimes head shots, I classified everyone as ‘Asian’ (as defined by the US Census) and ‘Not Asian.’ You can find the data here. Please note that the left columns are ecology faculty, and the right are neuroscience.
As many of you know, right before the election I made a $50 bet with Hank Campbell that Nate Silver would get at least 48 out of 50 states correct in the 2012 presidential election. I also got one of Hank’s readers to sign on to the same bet. Additionally, a few readers and Twitter followers got in on the wager; they were bullish on Romney’s prospects, and I was not (more honestly, I was moderately sure they were self-delusional, and willing to take their money to make them more cautious about their self-delusional biases in the future). But there’s a major precondition that needs to be stated here: I hedged.
Last February a friend told me he was 100% confident that Barack Hussein Obama would be reelected. This prompted me to ask for favorable terms on a bet. The logic was simple: if he was 100% confident, then the terms shouldn’t matter to him, because he was going to collect anyhow. As it happens he gave me 5 to 1 odds, so that I would collect $5 for every $1 he might collect. I told him beforehand that I actually thought Obama had a 60-70% chance of winning, so I went into the wager assuming I’d be out a modest amount of money. But that was no concern. My goal was now to convince those who were irrationally supportive of Romney to take the other side of the bet. For whatever reason people have an inordinate bias toward their hoped-for candidate in terms of who they think will win, as opposed to who they wish to win. The future ought gets confused with the future is.* I got people to take the other side, which means that I was going to make money no matter who won.
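The arithmetic of holding both sides of a wager can be sketched in a few lines. The stake sizes below are hypothetical, chosen only to show how mismatched odds can lock in a gain either way; they are not the actual amounts wagered:

```python
# Hedging sketch with hypothetical stakes.
# Side A: a friend gives 5-to-1 odds on Obama winning
#   (I collect 5x my stake if Obama loses; he collects my stake if Obama wins).
# Side B: Romney backers take the opposite side at even odds.

STAKE_A = 10  # dollars at risk against the 5-to-1 Obama backer
STAKE_B = 20  # dollars at risk against the even-odds Romney backers

def net_payoff(obama_wins: bool) -> int:
    """Combined payoff across both bets for a given election outcome."""
    payoff_a = -STAKE_A if obama_wins else 5 * STAKE_A
    payoff_b = STAKE_B if obama_wins else -STAKE_B
    return payoff_a + payoff_b

print(net_payoff(True))   # Obama wins:  -10 + 20 = +10
print(net_payoff(False))  # Obama loses: +50 - 20 = +30
```

With these stakes the net is positive under both outcomes, which is the whole point of the hedge: the bet stops being a forecast and becomes an arbitrage on other people’s overconfidence.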
At this point one might wonder about my comment that I suspected that those who were bullish on Romney were delusional. It’s rather strong, and my reasoning is actually rather strange. Overall I accepted the polling averages. A few years back I was an economic determinist in election outcomes, but Nate Silver had convinced me that the sample size was too small to get a good sense of the real proportion of variation being predicted here. In short, the economy matters, but I stepped back from the supposition that it was determinative (as it happens, purely economic models that were excellent at predicting past elections face-planted this time). So that’s why I relied on the polls. Though I leaned on Nate Silver, I didn’t think he was particularly oracular, and I’d say that I’m mildly skeptical of the excessive faith some put in his particular person. When I put a link up to Colby Cosh’s mild take-down of Silvermania I received a few moderately belligerent comments. This despite the fact that I was willing to put money on Silver’s prediction.
Science is about “updating” with new information. But people are attached to their propositions, and shifts in paradigms can take a very long time, often dependent more on human lifespans than on the weight of the data. But please see this post by Luke Jostins over at Genomes Unzipped. He has “updated” his own view of his recent Nature paper on inflammatory bowel disease. This is rather awesome, because yes, there was some talk about the balancing selection aspect of the paper at ASHG, and now Luke has gone and amended his own position.
The reality is that emotions are a big deal in science. But in theory we simply look at the evidence. Bridging that gap, and shifting the balance to the latter, is very important in keeping the enterprise honest, fruitful, and attractive to young scholars. I’m hoping that the more rapid dissemination of information via projects like Haldane’s Sieve will aid in the rate of iteration.
Richard Lewontin’s fame rests in part on his pioneering role in the development of the field of molecular evolution, and in part on his trenchant Left-wing politics. Several readers have already pointed me to his rather strange review of two new works in The New York Review of Books. The prose strikes me as viscous and meandering, but some of the assertions are rather peculiar. For example:
The other exception to random inheritance is not in the chromosomes, but in cellular particles called ribosomes that contain not DNA but a related molecule, RNA, which has heritable variation and is of basic importance to cell metabolism and the synthesis of proteins. Although the cells of both sexes have ribosomes, they are inherited exclusively through their incorporation in the mother’s egg cell rather than through the father’s sperm. Our ribosomes, then, provide us, both male and female, with a record of our maternal ancestry, uncontaminated by their male partners.
Harry Ostrer, who is a professor of genetics at Albert Einstein College of Medicine, and Raphael Falk, who is one of Israel’s most prominent geneticists, depend heavily on our ability to trace ancestry by looking at the DNA of Y chromosomes and ribosomes….
There is no mention of ribosomes in Legacy: A Genetic History of the Jewish People. I know, because I used Amazon’s ‘search inside’ feature. Rather, there’s a lot of reference to mitochondrial DNA and mtDNA, which is what Lewontin truly meant. Or at least I hope that’s what he meant. Because Lewontin is an eminent evolutionary biologist I assume they felt like they didn’t need a science editor, but perhaps they need to reconsider that.
I was a little sad when I heard my friend Steve Hsu had accepted a position at Michigan State some months back. My reasons were two-fold. First, I swing by Eugene now and then, and I wouldn’t have the opportunity to drop in on his office. Second, it seemed that Steve was becoming an Administrator. To some extent I feel like that’s going over to the dark side. But ultimately it’s his decision, and Steve has a lot of things going on at any given moment, and I’m hopeful he’ll continue to be involved in the production of scholarship in some form (he’s more of a scholar than most as it is).
Now apparently his move has resulted in submerged tensions coming to the fore. You can read the article in The Lansing Journal, New director’s experience a plus for MSU, but his controversial views concern some. Let’s qualify who these “some” are:
Fifteen years ago John Horgan wrote The End Of Science: Facing The Limits Of Knowledge In The Twilight Of The Scientific Age. I remain skeptical as to the specific details of this book, but Carl’s write-up in The New York Times of a new paper in PNAS on the relative commonness of scientific misconduct in cases of retraction makes me mull over the genuine possibility of the end of science as we know it. This sounds ridiculous on the face of it, but you have to understand my model of and framework for what science is. In short: science is people. I accept the reality that science existed in some form among strands of pre-Socratic thought, or among late antique and medieval Muslims and Christians (not to mention among some Chinese as well). Additionally, I can accept the cognitive model whereby science and scientific curiosity is rooted in our psychology in a very deep sense, so that even small children engage in theory-building.
I’m reading Jim Manzi’s Uncontrolled: The Surprising Payoff of Trial-and-Error for Business, Politics, and Society right now. No complaints, though that’s no surprise, as I’m familiar with the broad outlines of Manzi’s work, and have found much to agree with him on in the past (though there are issues where we differ, never fear). That being said, I did ponder one aspect of Manzi’s characterization of science: that it makes non-obvious predictions. This is not controversial, and I don’t want to really quibble with it too much. But in the context of social science in particular I think one of the gains of ‘science’ is the clarification of obvious predictions.
Dr. Joe Pickrell has a follow up to his widely discussed post on updating scientific publication for the 21st century. One section jumped out at me, not because it was revolutionary, but because it made explicit a complaint that I had often heard:
The solution to this problem relies on a simple observation–in my field, I am completely indifferent to whether a paper has been “peer-reviewed” for the basic reason that I consider myself a “peer”. I do not think it extremely hubristic to say that I am reasonably capable of evaluating whether a paper in my field is worth reading, and then if so, of judging its merits. The opinions of other people in the field are of course important, but in no way does the fact that two or three nameless people thought a paper worth publishing influence my opinion of it. This immediately suggests a system in which papers are posted online as soon as the authors think they are ready (on so-called pre-print servers). This system is the default in many physics, math, and economics communities, among others, and as far as I can tell it’s been quite successful.
The reality is that often the “peers” are not peers. How else to explain the publication of the longevity study in Science, now retracted? Or the non-canonical RNA editing? (presumably this is less common a problem in specialized journals). And sometimes the feedback of peers can indicate that they don’t really know what they’re talking about. For example, I was once told that the authors of a phylogenetics paper which used Bayesian methods were asked to reanalyze their data within a maximum likelihood framework (jump to the last sentence of this section to see why this is peculiar).
The theory of classical peer review made sense in the pre-internet age. But now there are plenty of reasons why we might need to revisit it.*
* Not to mention that “peer review” is a somewhat subjective concept. Richard A. Muller has gotten into a back-and-forth on the issue of whether his latest work has undergone peer review. He claims it has, others claim not. I suspect most traditional biologists would be skeptical of Muller’s claim, but physicists would accept it.
Here’s a comment which is interesting, if hard to actually engage with because of the difficulty of the subject matter:
You’re obviously aware of the arguments employed by feminists in the critique of the philosophy of science; that cultural values, in their view patriarchy, could unintentionally contaminate science by affecting how evidence is interpreted and what hypotheses are formed from it. This argument is usually combined with the more fundamental problem of using inductive logic in science, especially biology, and how cultural norms could be mistaken for biological facts.
My question is how do you separate out the biases from the facts?
What makes you think that the left’s reservations about the studies into sex and race are the result of their own bias and not a legitimate accusation of bias within science? It is obviously not a totally improbable claim considering the long history of racist science in the two previous centuries.
From my own layman’s knowledge of the subject I’ve got the impression the jury is still out on both innate sex differences and the genetic realities of race.
There is one other drawback to the arXiv that makes me, as a potential submitter, very nervous: being scooped.
A paper is “scooped” if someone else publishes the same (or very similar) concept before you get a chance to publish yours. But, wait, if it is on the arXiv, isn’t that documentation that I had the idea first? Well, yes, but… the arXiv isn’t commonly used in Biology yet, so it isn’t clear how important or how much priority will be given to authors who publish there before “traditional” peer review. This is especially concerning if the novelty of the paper is the idea (which is easy to reproduce with the same or different data) versus a method (which is more difficult to replicate). Maybe this isn’t a valid concern, because anonymous reviewers could, one might argue, just as easily “scoop” ideas from a manuscript they have reviewed. Furthermore, perhaps posting ideas/research early might facilitate more collaborations instead of competitions between research groups.
All said, I think that submitting to pre-print servers can be a very valuable tool for facilitating scientific discourse and advances. Will I start submitting there? We will have to wait and see.
It doesn’t matter to me at this point that people might have qualms. Once sufficient consciousness is raised and critical mass is achieved, then you’ll see a stampede. Some fields in biology may be late into the shift toward preprint distribution, but for the purposes of a lot of the stuff I cover on this weblog I doubt that will matter. When it comes to evolutionary biology that isn’t being funded by pharma or private foundations I don’t think there’s much holding people back aside from the worry about being scooped.
I don’t know much about academia and its intrigues personally, but I have heard of instances of reviewers squatting on a paper until someone else associated with the reviewer publishes (yes, people often know who is reviewing, or suspect). This is a form of scooping, but it occurs in the shadows, and there’s always deniability. Who knows how we can quantify this sort of behavior? But it’s something that we need to keep in mind when we’re worried about the pitfalls of open access and preprint distribution.