A new toxicology study states that rats eating genetically modified food and the weedkiller Roundup develop huge tumors and die. But many scientists beg to differ, and a close look at the study shows why.
Genetically modified organisms (GMOs) have always been a controversial topic. On the one hand are the many benefits: the higher crop yields from pesticide- and insect-resistant crops, and the nutritional modifications that can make such a difference in malnourished populations. On the other hand is the question that concerns many people: we are modifying the genes of our food, and what does that mean for our health? These are important questions, but the new study claiming to answer them misses the mark. It has many horrifying pictures of rats with tumors, but without knowledge about the control rats, what do those tumors mean? Possibly nothing at all.
The recent study, published in the journal Food and Chemical Toxicology, has fueled the worst fears of the GMO debate. The study, by French and Italian groups, evaluated groups of rats fed different concentrations of Roundup-tolerant maize (corn) or of Roundup alone, over a two-year period, the longest type of toxicology study. (For an example of one performed in the U.S., see here.) The group looked at the mortality rates in the aging rats, as well as the causes of death, and took multiple samples to assess kidney, liver, and hormonal function.
The presented results look like a toxicologist’s nightmare. The authors reported high rates of tumor development in the rats fed Roundup and the Roundup-tolerant maize. There are figures of rats with visible tumors, and graphs showing death rates that appear to begin early in the rats’ lifespan. The media of course picked up on it, and one site in particular has spawned some reports that sound like mass hysteria. It was the first study showing that genetically modified foods could produce tumors at all, let alone the incredibly drastic ones shown in the paper.
Sophie Bushwick (Twitter, Tumblr) is a science journalist and podcaster, and is currently an intern at DISCOVERmagazine.com. She has written for Scientific American, io9, and DISCOVER, and has produced podcasts for 60-Second Science and Physics Central.
Human chromosomes (grey) capped by telomeres (white)
U.S. Department of Energy Human Genome Program
Renowned biologist Elizabeth Blackburn has said that when she was a young post-doc, “Telomeres just grabbed me and kept leading me on.” And lead her on they did—all the way to the Nobel Prize in Medicine in 2009. Telomeres are DNA sequences that continue to fascinate researchers and the public, partially because people with longer telomeres tend to live longer. So the recent finding that older men father offspring with unusually lengthy telomeres sounds like great news. Men of advanced age will give their children the gift of longer lives—right? But as is so often the case in biology, things aren’t that simple, and having an old father may not be an easy route to a long and healthy life.
Every time a piece of DNA gets copied, it can end up with errors in its sequence, or mutations. One of the most frequent changes is losing scraps of information from each end of the strand. Luckily, these strands are capped with telomeres, repeating sequences that do not code for any proteins and serve only to protect the rest of the DNA. Each time the DNA makes a copy, its telomeres get shorter, until these protective ends wear away to nothing. Without telomeres, the DNA cannot make any more copies, and the cell containing it will die.
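The countdown described above can be sketched as a toy model: a cell divides until its telomere cap is used up. This is only an illustration of the logic, not a biological simulation; the base-pair numbers below are invented for the example, not measured values.

```python
# Toy model of telomere shortening: each cell division trims the
# telomere cap, and once the cap is exhausted the cell can no longer
# divide. All numbers are illustrative assumptions.

def divisions_until_senescence(telomere_bp, loss_per_division_bp):
    """Count how many times a cell can divide before its telomere
    is too short to absorb another round of copying losses."""
    divisions = 0
    while telomere_bp >= loss_per_division_bp:
        telomere_bp -= loss_per_division_bp
        divisions += 1
    return divisions

# A cell starting with a 10,000-base-pair telomere that loses
# 100 base pairs per division stops after 100 divisions.
print(divisions_until_senescence(10_000, 100))  # → 100
```

The point of the sketch is simply that the limit is mechanical: a longer starting telomere buys more divisions, which is why telomere length and lifespan are linked at all.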
But sperm are not subject to this telomere-shortening effect. In fact, the telomeres in sperm-producing stem cells not only resist degrading, they actually grow. This may be thanks to a high concentration of the telomere-repairing enzyme telomerase in the testicles; researchers are still uncertain. All they know is that the older the man, the longer the telomeres in his sperm will be.
Delegates to Indiana’s constitutional convention worked under this tree in 1816.
It later succumbed to Dutch elm disease.
Unless you have a weakened immune system or a stubborn case of athlete’s foot, it’s unlikely you spend much time worrying about fungi. And you shouldn’t—fungal diseases are not generally a big problem for a healthy person; common ones like athlete’s foot are annoying but not serious. In terms of infections, it’s bacteria, parasites, and viruses that kill us.
But the rest of nature tells a different story. According to a recent review of fungal diseases in Nature, fungi are responsible for 72% of the local extinctions of animals and 64% among plants. White nose syndrome in bats and Dutch elm disease are two high-profile examples of extremely deadly fungal diseases gaining wider ranges through global trade. While each fungus itself is unique, many fungal pathogens share several special abilities that make them especially lethal.
Unlike viruses and most bacteria, fungi can survive—and survive for years—in dry or frigid environments outside of hosts. All they need to do is make spores: small, hardy reproductive structures containing all the necessary DNA to grow a new fungus. As spores, fungi can tough out adverse conditions and drift thousands of miles in the wind to find more livable settings. Aspergillus sydowii, for example, hitches a ride in dust storms from Africa to the Caribbean, where it infects coral reefs. Spores are also ubiquitous in the air; there are one to ten in every breath you take. Wheat stem rust, a common fungus that causes $60 billion of crop damage a year, produces up to 10^11 (100 billion) spores per hectare, and they can travel 10,000 kilometers through the atmosphere to find new hosts. That's only taking into account one of its five spore forms, which are produced at different times in its life cycle. For plants in general, fungi are the number one infectious threat, far above bacteria or viruses.
Many fungi are also generalists that use a scorched-earth strategy to parasitize a wide range of hosts. To invade host cells, viruses need to sneak their way in by fitting into specific proteins like a key in a lock. Because viruses need this precision, it's hard for them to jump from one species to another with a different set of proteins, and it's a big deal when it does happen. Fungi, on the other hand, don't need to enter cells; like the mold that eats your bread, they squirt their digestive juices and rot everything in sight. While viruses nimbly pick your locks, fungi are like a bomb that will blow up your door—or anyone else's.
Razib Khan’s degrees are in biochemistry and biology. He has blogged about genetics since 2002 (see his Discover Blog, Gene Expression), previously worked in software development, is an Unz Foundation Junior Fellow and lives in the western US. He loves habaneros.
…At some future period, not very distant as measured by centuries, the civilized races of man will almost certainly exterminate and replace throughout the world the savage races. At the same time the anthropomorphous apes, as Prof. Schaaffhausen has remarked, will no doubt be exterminated. The break will then be rendered wider, for it will intervene between man in a more civilized state, as we may hope, than the Caucasian and some ape as low as a baboon, instead of as at present between the negro or Australian and the gorilla.
The above quote is not to vilify Charles Darwin. On the contrary, I believe Darwin was a scientific hero whose work is the foundation of modern biology. Nevertheless, he was a man of his age. Despite the fact that Darwin was a political liberal from a family of liberals, with pristine credentials in progressive social movements of his day, such as the anti-slavery campaigns, it is clear that he had Victorian biases nonetheless; some of the passages in The Descent of Man clearly come from a fortunately bygone era, when white scholars and adventurers cataloged and surveyed the unexplored corners of our world, and created taxonomies of the “lower races” as if they were just part of the local fauna. The reality is that Charles Darwin’s age was fundamentally one of white supremacy. In the year 1900, one out of three human beings alive was of European extraction. In the four centuries since Christopher Columbus, Europe and its Diaspora had entered into massive demographic expansion—which many Victorians saw as survival of the fittest. Progressives of the late 19th and early 20th century, such as H. G. Wells, foresaw a future where the “higher races” would naturally marginalize those peoples who were lesser participants in civilization. Such was taken as the judgment of nature.
How 100 years do change things. And yet just as Darwin could not help but reflect the presuppositions of his era, so we in our day cannot help but channel the zeitgeist. Like Charles Darwin, today’s scholars have concluded that humans are fundamentally an African species. But unlike Darwin they conclude from this that there is a biological, essential unity of humankind, such that talk of “civilized” and “savage” is rendered moot and irrelevant. We do look through the mirror of our ages darkly, seeing startlingly different insights from the same shadows of reality. Whereas racist assumptions and beliefs were supported by interpretations of science of the 19th century, today we attempt to harness science in the opposing direction.
The topic of human variation, and more plainly, race, is fraught. The past century has seen a wild swing from the widespread acceptance of the idea that human races are real, with big, important differences, to the opposite position: that race is fundamentally an illusion, a social construction of the human mind. But both of these arguments are mistaken. The established modern consensus about the equality of people, irrespective of race, is morally and ethically justified. But these beliefs we hold to be true do not derive from natural science, which doesn’t present a clear moral lesson.
By Luke Jostins, a postgraduate student working on the genetic basis of complex autoimmune diseases. Jostins has a strong background in informatics and statistical genetics, and writes about genetic epidemiology and sequencing technology on his blog Genetic Inference. A different version of this post appeared on the group blog Genomes Unzipped.
One of the great hopes for genetic medicine is that we will be able to predict which people will develop certain diseases, and then focus preventative measures on those at risk. Scientists have long known that one of the wrinkles in this plan is that we will only rarely be able to say with certainty whether someone will develop a given disease based on their genetics—more often, we can only give an estimate of their disease risk.
This realization came mostly from twin studies, which look at the disease histories of identical and non-identical twins. Twin studies use established models of genetic risk among families and populations, along with the different levels of similarity of identical and non-identical twins, to estimate how much of disease risk comes from genetic factors and how much comes from environmental risk factors. (See this post for more details.) There are some complexities here, and the exact model used can change the results you get, but in general the overall message is the same: genetic risk prediction contains a lot of information, but not enough to give guaranteed predictions of who will and who won’t get certain diseases. This is not only true of genetics either: parallel studies of environmental risk factors usually reveal tendencies and probabilities, not guarantees.
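The twin-study logic above can be sketched with one classic model, Falconer's formula: identical twins share essentially all their DNA while fraternal twins share about half, so comparing how similar each pair type is lets you split trait variance into genetic, shared-environment, and unique-environment parts. This is only the simplest textbook version (real analyses use structural-equation models), and the correlations below are invented for illustration.

```python
# Falconer's classic partition of trait variance from twin data.
# MZ = identical twins (share ~100% of DNA), DZ = fraternal (~50%).
# The input correlations here are made up for the example.

def falconer_estimates(r_mz, r_dz):
    """Split trait variance into additive genetic (A), shared
    environment (C), and unique environment/noise (E) components."""
    heritability = 2 * (r_mz - r_dz)   # A: doubled MZ-DZ similarity gap
    shared_env = 2 * r_dz - r_mz       # C: similarity not explained by genes
    unique_env = 1 - r_mz              # E: whatever even MZ twins don't share
    return heritability, shared_env, unique_env

# If identical twins correlate at 0.8 and fraternal twins at 0.5:
a, c, e = falconer_estimates(0.8, 0.5)
print(round(a, 2), round(c, 2), round(e, 2))  # → 0.6 0.2 0.2
```

Even in this tidy example, 40% of the variance sits outside the genes, which is exactly why such models yield probabilities rather than guarantees.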
This means that two people with exactly the same weight, height, sex, race, diet, childhood infection exposures, vaccination history, family history, and environmental toxin levels will usually not get the same disease, but they are far more likely to than two individuals who differ in all those respects. To take an extreme example, identical twins, despite sharing the same DNA, socioeconomic background, childhood environment, and (generally) placenta, usually do not die from the same thing—but they are far more likely to than two random individuals. This is a perfect analogy for how well (and badly) risk prediction can work: you will never have a better prediction than knowing the health outcomes of a genetic copy of you. The health outcomes of another version of you will be invaluable, and will help guide you, your doctor, and the health-care establishment, if they use this information properly. But it won’t let them know exactly what will happen to you, because identical twins usually do not die from the same thing.
There is no health destiny: There is always a strong random component in anything that happens to your body. This does not mean that none of these things are important; being aware of your disease risks is one of the most important things you can do for your own future health. But risk is not destiny. And this central fact has been well known to scientists for a while now.
This was the context in which a recent paper in Science Translational Medicine by Bert Vogelstein and colleagues was published, which also used twin study data to ask how well genetics could predict disease. The take-home message from the study (or at least the message that many media outlets have taken home) is that DNA does not perfectly determine which disease or diseases you may get in the future. The paper was generally pretty flawed: many geneticists expressed annoyance at the paper, and Erika Check Hayden carried out a thorough investigation into the paper for the Nature News blog. In short, the study used a non-standard and arbitrary model of genetic risk, and failed to properly model the twin data, handling neither the many environmental confounders nor the large degree of uncertainty associated with studies of twins.
Many geneticists were annoyed that the authors seemed to be unaware of the existing literature on the subject, and that they presented their approach and their results as if they were novel and controversial at a well-attended press conference at the American Association for Cancer Research annual meeting. However, what came as more of a shock was how surprised the media as a whole seemed to be at the results, with headlines such as “DNA Testing Not So Potent for Prevention“ and “Your DNA blueprint may disappoint.” No reporter (other than Erika) even mentioned the information that we already had about the limits of genetic risk prediction. As Joe Pickrell pointed out on Twitter, we can’t really know whether this was genuine surprise or merely newspapers hyping the message to make it seem more like news, but having talked to a few journalists and members of the public, the surprise appears to be at least in part genuine. The gap between the public perception and the established consensus on genetic risk prediction seemed to us to be unexpected and worrying.
Charles Q. Choi is a science journalist who has also written for Scientific American, The New York Times, Wired, Science, and Nature. In his spare time, he has ventured to all seven continents.
The Fertile Crescent in the Near East was long known as “the cradle of civilization,” and at its heart lies Mesopotamia, home to the earliest known cities, such as Ur. Now satellite images are helping uncover the history of human settlements in this storied area between the Tigris and Euphrates rivers, the latest example of how two very modern technologies—sophisticated computing and images of Earth taken from space—are helping shed light on long-extinct species and the earliest complex human societies.
In a study published this week in PNAS, the fortuitously named Harvard archaeologist Jason Ur worked with Bjoern Menze at MIT to develop a computer algorithm that could detect types of soil known as anthrosols from satellite images. Anthrosols are created by long-term human activity, and are finer, lighter-colored and richer in organic material than surrounding soil. The algorithm was trained on what anthrosols from known sites look like based on the patterns of light they reflect, giving the software the chance to spot anthrosols in as-yet unknown sites.
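The idea of training on known sites and then scanning for look-alikes can be illustrated with a bare-bones classifier. To be clear, this is a stand-in sketch, not Menze and Ur's actual algorithm (their method was far more sophisticated), and the reflectance numbers are invented: it just shows the supervised-learning pattern of learning a spectral signature from labeled pixels and flagging new pixels that resemble it.

```python
# Minimal nearest-centroid sketch of the study's approach: learn the
# average spectral signature of anthrosols from known sites, then
# label new pixels by which signature they sit closer to.
# Not the published algorithm; reflectance values are invented.

def centroid(samples):
    """Mean reflectance vector across a set of training pixels."""
    n = len(samples)
    return [sum(band) / n for band in zip(*samples)]

def sq_dist(a, b):
    """Squared Euclidean distance between two spectral vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(pixel, anthrosol_sig, background_sig):
    """Assign a pixel to the nearer of the two learned signatures."""
    if sq_dist(pixel, anthrosol_sig) < sq_dist(pixel, background_sig):
        return "anthrosol"
    return "background"

# Invented training pixels: reflectance in three spectral bands.
known_anthrosol = [[0.42, 0.35, 0.30], [0.45, 0.33, 0.28]]
known_background = [[0.20, 0.25, 0.40], [0.18, 0.27, 0.42]]

a_sig = centroid(known_anthrosol)
b_sig = centroid(known_background)
print(classify([0.43, 0.34, 0.29], a_sig, b_sig))  # → anthrosol
```

Scaled up to millions of satellite pixels, the same learn-then-scan pattern is what lets software survey in days an area that would take a lifetime on foot.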
This map shows Ur and Menze’s analysis of anthrosol probability for part of Mesopotamia.
Armed with this method to detect ancient human habitation from space, researchers analyzed a 23,000-square-kilometer area of northeastern Syria and mapped more than 14,000 sites spanning 8,000 years. To find out more about how the sites were used, Ur and Menze compared the satellite images with data on the elevation and volume of these sites previously gathered by the Space Shuttle. The ancient settlements the scientists analyzed were built atop the remains of their mostly mud-brick predecessors, so measuring the height and volume of sites could give an idea of the long-term attractiveness of each locale. Ur and Menze identified more than 9,500 elevated sites that cover 157 square kilometers and contain 700 million cubic meters of collapsed architecture and other settlement debris, more than 250 times the volume of concrete making up Hoover Dam.
“I could do this on the ground, but it would probably take me the rest of my life to survey an area this size,” Ur said. Indeed, field scientists who normally prospect for sites in an educated-guess, trial-and-error manner are increasingly leveraging satellite imagery to their advantage.
The phylogeny of Prozac yogurt.
Christina Agapakis is a synthetic biologist and postdoctoral research fellow at UCLA who blogs about biology, engineering, biological engineering, and biologically inspired engineering at Oscillator.
A few weeks ago, I saw a retweet that claimed “biohacking is easier than you think” with a link to a post on a blog accompanying a book called Massively Networked. The post included video of Tuur van Balen’s presentation at the NextNature power show a few months earlier. Van Balen is a designer whose work I’ve followed for a couple years now, and his most recent project imagines how synthetic biology might produce and deliver medicines in the future. He demonstrates—using homemade tools, equipment purchased on eBay, and online resources for finding and synthesizing DNA sequences—how someone could engineer a strain of bacteria to produce Prozac-laced yogurt. While he’s not actually making Prozac, his demonstration does show pretty accurately how someone could get DNA into a bacterium (without, of course, the frustrating months of troubleshooting that almost any experiment inevitably requires). I posted my own version of the story, writing that art projects like this can ask important questions about biological design.
The next day, my post was syndicated on the Huffington Post with a modified title that emphasized Prozac. Then a version appeared on Gizmodo, and it went on from there, spreading across the Internet. By the time its spread was complete, Van Balen, an artist interested in the implications of emerging biotechnologies, had mutated into a bioengineer at the forefront of synthetic biology research, creating Prozac yogurt in five days with just 860 base pairs of DNA. (If you were to actually make Prozac biologically, it would certainly take the action of many enzymes, each encoded by its own sequence of hundreds or thousands of base pairs.)
How did an art piece, a design fiction that asks us to think critically about the possibilities opened up by synthetic biology, provoke an unskeptical acceptance of what bioengineering has made possible? Perhaps I should have been clearer in my post, or perhaps it’s the fault of sensationalized click-bait headlines. But I think it may be that we’ve become so accustomed to the hype surrounding the science of genes and DNA, so used to hearing about groundbreaking genetics, from the “gene for dry ear wax” to the “gene for Alzheimer’s” to the “gene for [common human behavior]” that we don’t think twice when we hear about mixing bacteria with the “gene for Prozac” to create antidepressant yogurt.
Erika Check Hayden is a journalist at Nature and educator in San Francisco. Her work has taken her to wild and beautiful places, but focuses most of the time on the inner terrain of the human body. You can find her online at erikacheck.com and twitter.com/Erika_Check.
This piece was originally published at The Last Word on Nothing.
A few years ago, Eric Klavins found himself staring at the ceiling of his room in the Athenaeum, a private lodging on the grounds of the California Institute of Technology, in the middle of the night. Unable to sleep, Klavins was instead pondering a question that had been posed to him earlier that day at a meeting.
Klavins, a robotics researcher, was funded by grants from the U.S. Air Force and the Defense Advanced Research Projects Agency (DARPA) on robot self-organization: making many simple robots work together to assemble themselves into a shape or structure. While working on the grants, Klavins would routinely be called into meetings to discuss his work with various defense officials, and it was at one of these meetings that a Defense Department researcher had posed his question. “He said, ‘Do you think you could figure out how something that has been broken up into lots of little pieces could be reassembled so we could figure out what it was?’” he recalls.
Klavins spent hours thinking about how one could actually do it. Then, he realized, he had no idea why one would even want to—and hadn’t asked that question at all during all the years he worked with Defense Department funding. He suddenly felt uncomfortable about that. “It bothered me that someone would spend their time studying how things get blown up and working to make things get blown up better,” Klavins says. Not long after, he decided to steer away from defense funding and towards applications in biology and medical research that are part of the realm of synthetic biology, the field of science that tries to turn biology into more of an engineering discipline.
But if Klavins thought that the change would help him escape the moral dilemmas that used to keep him up in the middle of the night, he was wrong. The U.S. Department of Defense has emerged as one of the major funders of synthetic biology; last fall, for instance, DARPA accepted proposals for a highly coveted set of grants in a new program, Living Foundries, that aims to “enable the rapid development of previously unattainable technologies and products, leveraging biology to solve challenges associated with production of new materials, novel capabilities, fuel and medicines.”
Earlier this week, food columnist Ari LeVaux set off a storm of media reaction with a piece built on this premise: tiny plant RNAs, recently discovered to survive digestion and alter host gene expression, are a major reason why genetically modified foods should be considered dangerous. For anyone familiar with the paper he referred to, or with molecular biology in general, the article was full of conflation and sloppy logic, and even as it became the most-emailed story on TheAtlantic.com, where it was published, biology bloggers and science writers were pointing out its significant flaws. To his credit, LeVaux revised the article to fix many (though not all) of the errors concerning genetics; the new version appeared yesterday at AlterNet and today replaced his original piece at The Atlantic.
So what did LeVaux get so wrong, and, once all of the wheat was sorted from the chaff, was there anything to what he was trying to say?
At the heart of the fracas is LeVaux’s claim that a class of molecules called miRNA is a reason to fear GMOs specifically, more than any other food plant or animal. miRNAs, short for microRNAs, perform various tasks in plants and animals. They were first discovered about twenty years ago, in nematode worms, and they regulate gene expression by binding the messenger RNA involved in translating a gene into a protein. The messenger RNA carries the “message” of the DNA’s sequence to a group of enzymes that translate it into the amino acid sequence of a protein. But if a miRNA binds to a messenger RNA, the message is destroyed, and the protein is never made. Thus, miRNA can be a powerful tool for preventing the expression of genes. In fact, that is what has made it such an important lab tool in recent years: it allows researchers to knock down the expression of genes without physically removing them from an organism’s genome.
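The binding step described above comes down to base pairing: a miRNA silences a messenger RNA by matching a complementary stretch of its sequence. The toy sketch below just finds such a site; the sequences are invented, and real targeting is messier (plant miRNAs pair near-perfectly over ~21 bases, while animal miRNAs often match only a short "seed" region).

```python
# Toy illustration of miRNA silencing: the miRNA base-pairs with a
# complementary stretch of the mRNA, marking the message for
# destruction so the protein is never made. Sequences are invented.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def binding_site(mirna, mrna):
    """Return the position in the mRNA where the miRNA could pair
    (antiparallel, fully complementary), or -1 if no site exists."""
    # RNA strands pair antiparallel, so the miRNA's target in the
    # mRNA is the reverse complement of the miRNA sequence.
    target = "".join(COMPLEMENT[base] for base in reversed(mirna))
    return mrna.find(target)

mirna = "UGACGAAG"                  # invented 8-base miRNA
mrna = "AUGGCACUUCGUCAGGCUAA"       # invented mRNA with a matching site
print(binding_site(mirna, mrna))    # index of the complementary site
```

Because a match only requires complementary sequence, a miRNA from one organism can in principle find targets in another's messenger RNAs, which is what made the rice-miRNA finding discussed below so striking.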
In the paper that LeVaux pegged his article on, Nanjing University researchers found that miRNAs usually seen in rice were circulating in the blood of humans, and that mice fed rice had the miRNA in their blood as well. That particular miRNA, in its native context, regulates plant development. When the researchers added it to human cells, it appeared to bind to the messenger RNA of a gene involved in removing cholesterol from the blood. Previous papers had found that plants have plenty of miRNA floating around in them [pdf] (as does just about everything we eat, since plants and animals make them by the thousands), but having them show up whole and unmolested in blood, apparently after digestion, was a new and very intriguing discovery.
Malcolm MacIver is a bioengineer at Northwestern University who studies the neural and biomechanical basis of animal intelligence. He also consults for sci-fi films (e.g., Tron Legacy), and was the science advisor for the TV show Caprica.
A few years ago, the world was aflame with fears about the virulent H5N1 avian flu, which infected several hundred people around the world and killed about 300 of them. The virus never acquired the ability to move between people, so it never became the pandemic we feared it might be. But recently virologists have discovered a way to mutate the bird flu virus that makes it more easily transmitted. The results were about to be published in Science and Nature when the U.S. government requested that the scientists and the journals withhold details of the method to make the virus. The journals have agreed to this request. Because the information being withheld is useful to many other scientists, access to the redacted paragraphs will be provided to researchers who pass a vetting process currently being established.
As a scientist, I find that the idea of having any scientific work withheld does not sit well. But then, I work mostly on “basic science,” which is science-speak for “unlikely to matter to anyone in the foreseeable future.” But in one area of work, my lab is developing new propulsion techniques for high-agility underwater robots and sensors that use weak electric fields to “see” in complete darkness or muddy water. This work, like a lot of engineering research, has the potential to be used in machines that harm people. I reassure myself of the morality of my efforts by the length of the chain of causation from my lab to such a device, which doesn’t seem much shorter than the chain for colleagues making better steels or more powerful engines. But having ruminated about my possible involvement with an Empire of Dark Knowledge, here’s my two cents about how to balance the right of free speech and academic freedom with dangerous consequences.
Consider the following thought experiment: suppose there really is a Big Red Button to launch the nukes, one in the U.S., and one in Russia, each currently restricted to their respective heads of government. Launching the nukes will surely result in the devastation of humanity. I’m running for president, and as part of my techno-libertarian ideology, I believe that “technology wants to be free” and I decide to put my money where my slogan is by providing every household in the U.S. with their very own Big Red Button (any resemblance to a real presidential candidate is purely accidental).
If you think this is a good idea, the rest of this post is unlikely to be of interest. But, if you agree that this is an extraordinarily bad idea, then let’s continue.