First, “Is it okay to introduce non-human DNA in our genome?” The premise is false. A substantial proportion of the human genome is derived from viruses. Lateral gene transfer in complex organisms is not unknown, and may sometimes be quite functional (arguably endosymbiogenesis and the mitochondrion are the classic case, but that’s so far back in the past that people aren’t shocked by it). Second, the piece also asks, “Should we biologically enhance non-human animals?” Last I checked, selection was a biological process. Domestication events have radically changed many organisms. The io9 piece spends some time on the possible Uplift of other species, but as a matter of reality coexistence with humans tends to reduce the intelligence of domestic animals (they offload many tasks to us). The narrow exception is the case of dogs. Yes, they are uniformly less intelligent than wolves, but they excel at reading human social cues. We’ve modified them to be our perfect companion animals!
The converse situation is also true with regard to experience and familiarity. Most who are enmeshed in the humanities have only a cursory knowledge of science, and a general unfamiliarity with the culture of science (though more students switch out of science to non-science degree programs than the reverse). In most cases I find the ignorance of science by non-scientists sad rather than concerning, but in some instances it does lead to the ludicrous solipsism which was highlighted in books such as Higher Superstition: The Academic Left and Its Quarrels with Science. Though there is often a focus on fashionable Leftism in these critiques, it may be notable that the doyen of “Intelligent Design” has admitted a debt to Critical Theory. The scientist-turned-theologian Alister McGrath positively welcomes post-modernism in his The Twilight of Atheism: The Rise and Fall of Disbelief in the Modern World. The problem is not ignorance of science so much as the dismissal and mischaracterization to which that ignorance can give birth in the right arrogant hands.
Outreach is a buzzword in academic science right now. Scientists have to publish. And they have to teach. Then there is service (e.g., committees and such). Outreach is now part of the service element. It doesn’t need to be hard or sophisticated. Not only that, outreach can be general (to the public) and specific (to your peers). As an example of what I’m talking about, Michael Eisen’s blog is aimed more toward a broad audience, though on occasion he delves specifically into the science which is the bread and butter of his research. Haldane’s Sieve is more tightly focused on researchers working at the intersection of evolution, genomics, and population genetics. But even it expands further out toward biologists who take an interest in specific evolutionary or genomic questions in their own research (e.g., I have known several molecular biologists who had no idea who was behind Haldane’s Sieve, but had read the site because of an interest in a specific preprint).
This isn’t rocket science, so to speak. Information dissemination is pretty easy right now, and that is theoretically one of the major things which drives science. This should be a great time for scientific progress! Is it? In genomics, yes, though that’s not because of more efficient flow of information so much as technology. With that prefatory comment, I think John Hawks’ recent jeremiad is worth reading, Speak up and matter:
Every few months someone asks me what I use to manage my papers. Stupidly, I don’t use anything. Or I haven’t. Over the past few weeks I’ve been playing around with PubChase and Mendeley. You probably know of the latter, and the fact that it’s been purchased by Elsevier. Elsevier is what it is. Mendeley on the other hand is a firm that I have a positive view of, in part because of their culture of openness and support for the free flow of information, but also due to the fact that I’ve known their head of outreach for ten years. You trust people, not things. Mendeley’s not a charity, and I don’t begrudge them their new resources now that they are under the corporate wing of Elsevier. Whether you’re pessimistic or optimistic about their future, I think caution is warranted.
It’s no secret to people who read this blog that I hate the way scientific publishing works today. Most of my efforts in this domain have focused on removing barriers to the access and reuse of published papers. But there are other things that are broken with the way scientists communicate with each other, and chief amongst them is pre-publication peer review. I’ve written about this before, and won’t rehash the arguments here, save to say that I think we should publish first, and then review. But one could argue that I haven’t really practiced what I preach, as all of my lab’s papers have gone through peer review before they were published.
No more. From now on we are going to post all of our papers online when we feel they’re ready to share – before they go to a journal. We’ll then solicit comments from our colleagues and use them to improve the work prior to formal publication. Physicists and mathematicians have been doing this for decades, as have an increasing number of biologists. It’s time for this to become standard practice.
Some ground rules. I will not filter comments except to remove obvious spam. You are welcome to post comments under your name or under a pseudonym – I will not reveal anyone’s identity – but I urge you to use your real name as I think we should have fully open peer review in science.
Peter A. Combs and Michael B. Eisen (2013). Sequencing mRNA from cryo-sliced Drosophila embryos to determine genome-wide spatial patterns of gene expression.
Please leave comments on Eisen’s post.
Via Haldane’s Sieve.
Over at ScienceDaily there is a report on a new paper on affirmative action and academia, Understanding the Impact of Affirmative Action Bans in Different Graduate Fields of Study. The paper is gated, but the regression model used really doesn’t seem to do much more than confirm intuition. The descriptive details are more interesting and straightforward.
A week ago Keith Kloor had a post up, What Science, Environmentalism and the GOP Have in Common, where he bemoaned the lack of representation of non-whites in these categories. As a matter of fact I think Keith is wrong about science. Even constraining the data set to American citizens and permanent residents, people of Asian ancestry are well represented in many areas of science. But not all sciences are created equal. In 2011 there were 158 doctorates awarded within the category of ‘evolutionary biology’ to American citizens or permanent residents. Of these, 135 were non-Hispanic white, and 5 were Asian. In ‘neuroscience’ the respective figures were 742, 535, and 96. In ‘zoology’ 55, 49, and 0. In ‘bioinformatics’ they were 80, 51, and 17. Finally, in ‘ecology’ the breakdown was 330, 300, and 11. If you are involved in academic biology I’m rather sure that these numbers won’t surprise you too much, even if you’d never thought about it. You could even infer them by walking through the posters at ASHG 2012, and seeing how the demographics of the crowds shift.
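For concreteness, the field-by-field counts quoted above can be converted to shares. This is a quick sketch using only the figures given in the text (total, non-Hispanic white, Asian; 2011 doctorates to US citizens and permanent residents):

```python
# Doctorate counts quoted in the text: (total, non-Hispanic white, Asian).
doctorates = {
    "evolutionary biology": (158, 135, 5),
    "neuroscience": (742, 535, 96),
    "zoology": (55, 49, 0),
    "bioinformatics": (80, 51, 17),
    "ecology": (330, 300, 11),
}

# Print the Asian and white share of each field's doctorates.
for field, (total, white, asian) in doctorates.items():
    print(f"{field}: {100 * asian / total:.1f}% Asian, {100 * white / total:.1f}% white")
```

The contrast is stark: the Asian share runs around 13–21% in neuroscience and bioinformatics, versus roughly 0–3% in evolutionary biology, zoology, and ecology.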
We can look at this issue another way. In 2010 US News & World Report listed the top 10 ecology & evolution graduate programs. I went to the faculty websites after typing the university and ‘ecology,’ and then ‘neuroscience.’ Looking at names, and sometimes head shots, I classified everyone as ‘Asian’ (as defined by the US Census) and ‘Not Asian.’ You can find the data here. Please note that the left columns are ecology faculty, and the right are neuroscience.
Science is about “updating” with new information. But people are attached to their propositions, and shifts in paradigms can take a very long time, often dependent more on human lifespans than on the accumulation of data. But please see this post by Luke Jostins over at Genomes Unzipped. He has “updated” his own view of his recent Nature paper on inflammatory bowel disease. This is rather awesome, because yes, there was some talk about the balancing selection aspect of the paper at ASHG, and now Luke has gone and amended his own position.
The reality is that emotions are a big deal in science. But in theory we simply look at the evidence. Bridging that gap, and shifting the balance to the latter, is very important in keeping the enterprise honest, fruitful, and attractive to young scholars. I’m hoping that the more rapid dissemination of information via projects like Haldane’s Sieve will aid in the rate of iteration.
Richard Lewontin’s fame rests in part on his pioneering role in the development of the field of molecular evolution, and secondarily due to his trenchant Left-wing politics. Several readers have already pointed me to his rather strange review of two new works in The New York Review of Books. The prose strikes me as viscous and meandering, but some of the assertions are rather peculiar. For example:
The other exception to random inheritance is not in the chromosomes, but in cellular particles called ribosomes that contain not DNA but a related molecule, RNA, which has heritable variation and is of basic importance to cell metabolism and the synthesis of proteins. Although the cells of both sexes have ribosomes, they are inherited exclusively through their incorporation in the mother’s egg cell rather than through the father’s sperm. Our ribosomes, then, provide us, both male and female, with a record of our maternal ancestry, uncontaminated by their male partners.
Harry Ostrer, who is a professor of genetics at Albert Einstein College of Medicine, and Raphael Falk, who is one of Israel’s most prominent geneticists, depend heavily on our ability to trace ancestry by looking at the DNA of Y chromosomes and ribosomes….
There is no mention of ribosomes in Legacy: A Genetic History of the Jewish People. I know, because I used Amazon’s ‘search inside’ feature. Rather, there’s a lot of reference to mitochondrial DNA (mtDNA), which is what Lewontin truly meant. Or at least I hope that’s what he meant. Because Lewontin is an eminent evolutionary biologist I assume the editors felt they didn’t need a science editor, but perhaps they need to reconsider that.
Fifteen years ago John Horgan wrote The End Of Science: Facing The Limits Of Knowledge In The Twilight Of The Scientific Age. I remain skeptical as to the specific details of this book, but Carl’s write-up in The New York Times of a new paper in PNAS on the relative commonness of scientific misconduct in cases of retraction makes me mull over the genuine possibility of the end of science as we know it. This sounds ridiculous on the face of it, but you have to understand my model of and framework for what science is. In short: science is people. I accept the reality that science existed in some form among strands of pre-Socratic thought, or among late antique and medieval Muslims and Christians (not to mention among some Chinese as well). Additionally, I can accept the cognitive model whereby science and scientific curiosity is rooted in our psychology in a very deep sense, so that even small children engage in theory-building.
I’m reading Jim Manzi’s Uncontrolled: The Surprising Payoff of Trial-and-Error for Business, Politics, and Society right now. No complaints, though that’s no surprise, as I’m familiar with the broad outlines of Manzi’s work, and have found much to agree with him on in the past (though there are issues where we differ, never fear). That being said, I did ponder one aspect of Manzi’s characterization of science: that it makes non-obvious predictions. This is not controversial, and I don’t want to really quibble with it too much. But in the context of social science in particular I think one of the gains of ‘science’ is the clarification of obvious predictions.
Dr. Joe Pickrell has a follow up to his widely discussed post on updating scientific publication for the 21st century. One section jumped out at me, not because it was revolutionary, but because it made explicit a complaint that I had often heard:
The solution to this problem relies on a simple observation–in my field, I am completely indifferent to whether a paper has been “peer-reviewed” for the basic reason that I consider myself a “peer”. I do not think it extremely hubristic to say that I am reasonably capable of evaluating whether a paper in my field is worth reading, and then if so, of judging its merits. The opinions of other people in the field are of course important, but in no way does the fact that two or three nameless people thought a paper worth publishing influence my opinion of it. This immediately suggests a system in which papers are posted online as soon as the authors think they are ready (on so-called pre-print servers). This system is the default in many physics, math, and economics communities, among others, and as far as I can tell it’s been quite successful.
The reality is that often the “peers” are not peers. How else to explain the publication of the longevity study in Science, now retracted? Or the non-canonical RNA editing? (Presumably this is less common a problem in specialized journals.) And sometimes the feedback of peers can indicate that they don’t really know what they’re talking about. For example, I was once told that the authors of a phylogenetics paper which used Bayesian methods were asked to reanalyze their data with a maximum likelihood framework (jump to the last sentence of this section to see why this is peculiar).
The theory of classical peer review made sense in the pre-internet age. But now there are plenty of reasons why we might need to revisit it.*
* Not to mention that “peer review” is a somewhat subjective concept. Richard A. Muller has gotten into a back-and-forth on the issue of whether his latest work has undergone peer review. He claims it has, others claim not. I suspect most traditional biologists would be skeptical of Muller’s claim, but physicists would accept it.
Here’s a comment which is interesting, if hard to actually engage with because of the difficulty of the subject matter:
You’re obviously aware of the arguments employed by feminists in the critique of the philosophy of science: that cultural values, in their view patriarchy, could unintentionally contaminate science by affecting how evidence is interpreted and what hypotheses are formed from it. This argument is usually combined with the more fundamental problem of using inductive logic in science, especially biology, and how any cultural norms could be mistaken for biological facts.
My question is how do you separate out the biases from the facts?
What makes you think that the left’s reservations about the studies into sex and race are the result of their own bias and not a legitimate accusation of bias within science? It is obviously not a totally improbable claim considering the long history of racist science in the two previous centuries.
From my own layman’s knowledge of the subject, I’ve got the impression the jury is still out on both innate sex differences and the genetic realities of race.
Over at Scientific American Blogs Maria Konnikova posts Humanities aren’t a science. Stop treating them like one. The whole write-up leaves me scratching my head, because I don’t really get what the whole point of all the prose is. This is a thesis that is as old as 19th century romantics, and not all too complicated. The author herself has an academic webpage which indicates she works within an analytic framework that’s anything but “soft.” There are huge confusions with terminology, and Jerry Coyne has a response which addresses many of my questions (e.g., what exactly is the alternative to doing statistical tests in psychology? Rely on the impressions and intuition of the researchers and just trust them?). But let me highlight one section:
… Societal conventions change. And is today’s real-world social network really comparable on any number of levels to one, say, a thousand, or even five or one hundred years ago?
Yes, today’s real-world social network probably is comparable to those of the past. There is some science on this issue. Not even rocket science with abstruse statistics. Science which is highly relevant today. Question science, and it may surprise you with what it has discovered!
Few principles are more depressingly familiar to the veteran scientist: the more surprising a result seems to be, the less likely it is to be true. We cannot know whether, or why, this principle was overlooked in any specific study. However, more generally, in a world in which unexpected results can lead to high-impact publication, acclaim and headlines in The New York Times, it is easy to understand how there might be an overwhelming temptation to move from discovery to manuscript submission without performing the necessary data checks.
This is not just an issue in genomics. I’ve discussed it before as being a major problem in psychology. Though the infamous centenarian study will do nothing for the careers of the scientists involved, I do wonder what the effects of publishing large numbers of false positive results are on an individual’s career when the work isn’t so inexpertly executed (i.e., in this particular case the technical errors were so glaring that the authors should never have submitted their findings). I wonder because apparently major newspapers are now running with stories which they know are highly likely to be exaggerations or misrepresentations to induce pageviews, and then subsequently ‘correcting’ them. More specifically, the number of corrections has been rising rapidly.
I, and I’m sure other people, have worried about being scooped and beaten to publication due to our arXived papers. But really this is silly, as we’ve usually given talks, posters, etc. on them at big conferences, so the idea that people somehow don’t know about our work before it appears in print is ridiculous. It is far better to get work out, once you consider it worthy of publication, so it can be read and cited by others.
This is in reference to the paper The Geography of Recent Genetic Ancestry across Europe. Go and read the materials and methods. I’m sure that a substantial minority of the readers of this weblog have used every single piece of software listed therein. Phasing and such requires a little bit of computational muscle, but that’s not an impossible hurdle. Additionally, many readers with academic affiliations could get their hands on the POPRES data set. But the generation of a paper, from methods to results to discussion, is not simply a robotic sequence of running data through software or algorithms. You need a first-rate statistical geneticist (e.g., the authors) to actually assemble the pieces together coherently and with insight, even granting the fundamental units of the whole.
First, I’m sure that the blue-collar readers of this weblog are thinking “cry me a river.” Yes, American scientists (perhaps excluding engineers, and to a lesser extent pharmaceutical researchers) are generally Left-liberal, but the collapse of the American working class due to globalization is something that they fixate on only as part of a broader political vision, along with other concerns. But when it comes to tenure-track jobs, the end is nigh! Consider that the woman who seems to have “wasted” a neuroscience Ph.D. in yesterday’s Washington Post article now has a job in academic administration. This is the sort of failure that manual laborers and factory workers alike would probably kill for.
But in any case, some more posts for you. Reader Miko reflects on searching for a job, Mike the Mad Biologist keeps doing his thing, and fellow Discover blogger Julianne on Subtleties of the Crappy Job Market for Scientists:
Recently Daniel MacArthur pointed to the vibrant discussion over at Genomes Unzipped on a moderately infamous paper from Science last year, Widespread RNA and DNA Sequence Differences in the Human Transcriptome, asserting that it is “exactly what open peer review should be like.” This made me wonder: it’s been over five years since Chris Surridge asked why there was so much more commentary on a PLoS ONE paper, By Hook or by Crook? Morphometry, Competition and Cooperation in Rodent Sperm, on blogs than on the paper itself. Has anything changed? The most viewed paper on PLoS Biology, How Many Species Are There on Earth and in the Ocean?, has 9 comments for 45,000 article views. In contrast, Genomes Unzipped has 14 comments for likely far fewer page views. Additionally, if you find the post on the weblog the comments automatically load. Not so with the PLoS Biology paper, where you have to click through (yes, I see how this can be a feature, not a bug, but in that case why even bother with comments if you provide an email address for correspondence?).
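To put the mismatch in perspective, the commenting rate on the PLoS Biology paper works out as follows. This is a back-of-the-envelope sketch using only the figures quoted above; since the Genomes Unzipped view count isn’t reported, no comparable rate can be computed for it:

```python
# Comments per 1,000 article views for the PLoS Biology paper cited above.
plos_comments = 9
plos_views = 45_000
rate = 1000 * plos_comments / plos_views
print(f"{rate:.1f} comments per 1,000 views")  # prints "0.2 comments per 1,000 views"
```

If Genomes Unzipped drew even a few thousand views for that post, its 14 comments would put its rate well over an order of magnitude higher.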
Over a 14-month period, the molecular geneticist at Stanford University in Palo Alto, California, analyzed his blood 20 different times to pluck out a wide variety of biochemical data depicting the status of his body’s immune system, metabolism, and gene activity. In today’s issue of Cell, Snyder and a team of 40 other researchers present the results of this extraordinarily detailed look at his body, which they call an integrative personal omics profile (iPOP) because it combines cutting-edge scientific fields such as genomics (study of one’s DNA), metabolomics (study of metabolism), and proteomics (study of proteins). Instead of seeing a snapshot of the body taken during the typical visit to a doctor’s office, iPOP effectively offers an IMAX movie, which in Snyder’s case had the added drama of charting his response to two viral infections and the emergence of type 2 diabetes.
Hopefully in about 10 years this will be the norm, not cutting edge science.