By Kenneth Miller
The border guards, wary of advancing rebels, fired their guns in the air as three motorized skiffs approached along the Sangha River in the late March night. But the boats’ occupants were unarmed foreigners, fleeing a bloody insurrection that had gripped the Central African Republic (CAR). Among the refugees was elephant researcher Andrea Turkalo, carrying $25,000 in cash and six hard drives—packed with more than 20 years of data—which she’d grabbed before fleeing her jungle compound.
Turkalo, 60, is a field biologist for the Wildlife Conservation Society, and one of the world’s foremost experts on African forest elephants (Loxodonta cyclotis). Since 1990, she’s been observing the elusive pachyderms—thought to be a different species from their larger, curvier-tusked, savannah-dwelling cousins (Loxodonta africana)—at a clearing known as Dzanga Bai, in the CAR’s southwestern rainforest. But her life’s work now hangs in the balance, as does the fate of the elephants themselves.
By Virginia Gewin
Chuck Bonham, director of the California Department of Fish and Wildlife, is no stranger to potentially controversial species restoration plans: His agency will soon poison non-native fish in an effort to re-establish the Paiute cutthroat trout in its historic range.
Still, the practicalities of efforts to revive extinct species raised mixed emotions among Bonham and participants at the “De-Extinction: Ethics, Law and Politics” conference held at the Stanford Law Auditorium last Friday.
Addressing the scientists in the audience, Bonham (who made clear he wasn’t speaking for the agency) said he and colleagues were “scared, worried, thrilled, excited, and angry at you guys for exploring this idea.” Yet, with the planet facing massive biodiversity loss, he said, de-extinction may be one of the few options for protecting species in perpetuity.
By Jon Tennant
Fossils, as we typically think of them, tell us about the death of an animal. The teeth, bones, shells, fragmented pseudopods and other weird and wonderful bits of carcass all only ever reflect one thing: a permanent geological limbo. These types of fossil are known as body fossils.
The other major group of fossils, generally less common, less studied, and less well known, but arguably more important for guiding our understanding of the history of life on Earth, is trace fossils. The study of trace fossils is called ichnology, and the fossils don’t represent death; they represent life, behavior, activity. Often trace fossils are actually found in or on body fossils—anything from holes bored by bivalves into other bivalves, to bite marks on bones detailing a poor creature’s last painful seconds as a living animal. Or, in this case, an ancient injury preserved on a dinosaur’s skin.
Dinosaur skin is one of the rarest treasures the paleontological record can reveal to us. We now have a pretty good idea about the texture and nature of dinosaur skin, thanks to a couple of exceptional “dinosaur mummies” and the occasional fragment of skin preserved not as a mold impressed on the surrounding sediment but as the actual fleshy flesh.
By Eric Wagner
The first sign that we’re close is a sickly sweet odor among the otherwise clean scents of fir and spruce. Scott Fitkin, a biologist with the Washington Department of Fish and Wildlife, pushes through a thatch of branches. “Here it is,” he says, coming to a large brush pile.
We are near Castle Mountain in Washington’s Pasayten Wilderness, about ten miles south of the US-Canada border. The pile sits within a perimeter of barbed wire called a snare corral. The brush is meant to mimic a grizzly bear food cache and so pique the bears’ interest. To entice them still more, Fitkin has poured a slurry of fermented salmon guts and cows’ blood over it. Now, in late September, the pile is a shambles—a good sign. If the bears have taken the bait, then they’ve left behind something even more valuable.
Fitkin begins a careful circuit, examining each barb. On four he finds fur, varying from a few strands to a generous tuft. He tweezes each barb’s worth into an envelope. DNA analysis will tell him later what species the hair belongs to, but he isn’t getting his hopes up: all of them are black, and probably from black bears.
Fitkin wants grizzlies. He has searched for them in these mountains for more than 20 years and has yet to see one. But in this, the final year of a three-year survey, he and his colleagues have gone all out. They’ve set up dozens of snare corrals throughout remote parts of the Cascade Mountains, miles from anywhere. If there are grizzlies to be found, then they will find them. Or so they hope.
Eric Michael Johnson has a master’s degree in evolutionary anthropology focusing on great ape behavioral ecology. He is currently a doctoral student in the history of science at the University of British Columbia, looking at the interplay between evolutionary biology and politics. He blogs at The Primate Diaries at Scientific American, where this post originally appeared.
“Rand” by Nathaniel Gold
“Every political philosophy has to begin with a theory of human nature,” wrote Harvard evolutionary biologist Richard Lewontin in his book Biology as Ideology. Thomas Hobbes, for example, believed that humans in a “state of nature,” or what today we would call hunter-gatherer societies, lived a life that was “solitary, poor, nasty, brutish and short” in which there existed a “warre of all against all.” This led him to conclude, as many apologists for dictatorship have since, that a stable society required a single leader in order to control the rapacious violence that was inherent to human nature. In contrast, advocates of state communism, such as Vladimir Lenin or Josef Stalin, believed that each of us was born tabula rasa, with a blank slate, and that human nature could be molded in the interests of those in power.
A new toxicology study claims that rats fed genetically modified food and the weedkiller Roundup develop huge tumors and die. But many scientists beg to differ, and a close look at the study shows why.
Genetically modified organisms (GMOs) have always been a controversial topic. On the one hand are the many benefits: the higher crop yields from pesticide- and insect-resistant crops, and the nutritional modifications that can make such a difference in malnourished populations. On the other hand is the question that concerns many people: we are modifying the genes of our food, and what does that mean for our health? These are important questions, but the new study claiming to answer them misses the mark. It has many horrifying pictures of rats with tumors, but without knowledge about the control rats, what do those tumors mean? Possibly, nothing at all.
The recent study, published in the journal Food and Chemical Toxicology, has fueled the worst fears of the GMO debate. The study, by Italian and French groups, evaluated groups of rats fed different concentrations of Roundup-tolerant maize (corn), or given Roundup alone, over a two-year period, the longest type of toxicology study. (For an example of one performed in the U.S., see here.) The group looked at the mortality rates in the aging rats, as well as the causes of death, and took multiple samples to assess kidney, liver, and hormonal function.
The presented results look like a toxicologist’s nightmare. The authors reported high rates of tumor development in the rats fed Roundup and the Roundup-tolerant maize. There are figures of rats with visible tumors, and graphs showing death rates that appear to begin early in the rats’ lifespan. The media of course picked up on it, and one site in particular has spawned some reports that sound like mass hysteria. It was the first study showing that genetically modified foods could produce tumors at all, let alone the incredibly drastic ones shown in the paper.
Sophie Bushwick (Twitter, Tumblr) is a science journalist and podcaster, and is currently an intern at DISCOVERmagazine.com. She has written for Scientific American, io9, and DISCOVER, and has produced podcasts for 60-Second Science and Physics Central.
Human chromosomes (grey) capped by telomeres (white)
U.S. Department of Energy Human Genome Program
Renowned biologist Elizabeth Blackburn has said that when she was a young post-doc, “Telomeres just grabbed me and kept leading me on.” And lead her on they did—all the way to the Nobel Prize in Physiology or Medicine in 2009. Telomeres are DNA sequences that continue to fascinate researchers and the public, partially because people with longer telomeres tend to live longer. So the recent finding that older men father offspring with unusually lengthy telomeres sounds like great news. Men of advanced age will give their children the gift of longer lives—right? But as is so often the case in biology, things aren’t that simple, and having an old father may not be an easy route to a long and healthy life.
Every time a piece of DNA gets copied, it can end up with errors in its sequence, or mutations. One of the most frequent changes is losing scraps of information from each end of the strand. Luckily, these strands are capped with telomeres, repeating sequences that do not code for any proteins and serve only to protect the rest of the DNA. Each time the DNA makes a copy, its telomeres get shorter, until these protective ends wear away to nothing. Without telomeres, the DNA cannot make any more copies, and the cell containing it will die.
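To make the arithmetic of that wearing-away concrete, here is a toy sketch, not from the article, of a cell lineage that loses a fixed stretch of telomere with each division and stops dividing once the cap is gone. The starting length and per-division loss are rough illustrative figures, not measured constants.

```python
# Toy model of telomere shortening: each cell division trims a fixed number
# of base pairs from the telomere cap. When the cap is gone, the cell can no
# longer divide. The numbers below are illustrative assumptions only.

TELOMERE_START_BP = 10_000   # assumed starting telomere length, in base pairs
LOSS_PER_DIVISION_BP = 100   # assumed base pairs lost at each division


def divisions_until_exhaustion(start_bp: int, loss_bp: int) -> int:
    """Count how many divisions occur before the telomere is used up."""
    telomere = start_bp
    divisions = 0
    while telomere > 0:
        telomere -= loss_bp
        divisions += 1
    return divisions


if __name__ == "__main__":
    n = divisions_until_exhaustion(TELOMERE_START_BP, LOSS_PER_DIVISION_BP)
    print(f"This cell line runs out of telomere after about {n} divisions")
```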
But sperm are not subject to this telomere-shortening effect. In fact, the telomeres in sperm-producing stem cells not only resist degrading, they actually grow. This may be thanks to a high concentration of the telomere-repairing enzyme telomerase in the testicles; researchers are still uncertain. All they know is that the older the man, the longer the telomeres in his sperm will be.
Delegates to Indiana’s constitutional convention worked under this tree in 1816.
It later succumbed to Dutch elm disease.
Unless you have a weakened immune system or a stubborn case of athlete’s foot, it’s unlikely you spend much time worrying about fungi. And you shouldn’t—fungal diseases are not generally a big problem for a healthy person; common ones like athlete’s foot are annoying but not serious. In terms of infections, it’s bacteria, parasites, and viruses that kill us.
But the rest of nature tells a different story. According to a recent review of fungal diseases in Nature, fungi are responsible for 72% of the local extinctions of animals and 64% among plants. White nose syndrome in bats and Dutch elm disease are two high-profile examples of extremely deadly fungal diseases gaining wider ranges through global trade. While each fungus itself is unique, many fungal pathogens share several special abilities that make them especially lethal.
Unlike viruses and most bacteria, fungi can survive—and survive for years—in dry or frigid environments outside of hosts. All they need to do is make spores: small, hardy reproductive structures containing all the necessary DNA to grow a new fungus. As spores, fungi can tough out adverse conditions and drift thousands of miles in the wind to find more livable settings. Aspergillus sydowii, for example, hitches a ride in dust storms from Africa to the Caribbean, where it infects coral reefs. Spores are also ubiquitous in the air; there are one to ten in every breath you take. Wheat stem rust, a common fungus that causes $60 billion of crop damage a year, produces up to 10¹¹ spores per hectare, and they can travel 10,000 kilometers through the atmosphere to find new hosts. That’s only taking into account one of its five spore forms, which are produced at different times in its life cycle. For plants in general, fungi are the number one infectious threat, far above bacteria or viruses.
Many fungi are also generalists that use a scorched-earth strategy to parasitize a wide range of hosts. To invade host cells, viruses need to sneak their way in by fitting into specific proteins like a key in a lock. Because viruses need this precision, it’s hard for them to jump from one species to another with a different set of proteins, and it’s a big deal when it does happen. Fungi, on the other hand, don’t need to enter cells; like the mold that eats your bread, they squirt their digestive juices and rot everything in sight. While viruses nimbly pick your locks, fungi are like a bomb that will blow up your door—or anyone else’s.
Razib Khan’s degrees are in biochemistry and biology. He has blogged about genetics since 2002 (see his Discover Blog, Gene Expression), previously worked in software development, is an Unz Foundation Junior Fellow and lives in the western US. He loves habaneros.
…At some future period, not very distant as measured by centuries, the civilized races of man will almost certainly exterminate and replace throughout the world the savage races. At the same time the anthropomorphous apes, as Prof. Schaaffhausen has remarked, will no doubt be exterminated. The break will then be rendered wider, for it will intervene between man in a more civilized state, as we may hope, than the Caucasian and some ape as low as a baboon, instead of as at present between the negro or Australian and the gorilla.
The above quote is not to vilify Charles Darwin. On the contrary, I believe Darwin was a scientific hero whose work is the foundation of modern biology. Nevertheless, he was a man of his age. Despite the fact that Darwin was a political liberal from a family of liberals, with pristine credentials in progressive social movements of his day, such as the anti-slavery campaigns, it is clear that he had Victorian biases nonetheless; some of the passages in The Descent of Man clearly come from a fortunately bygone era, when white scholars and adventurers cataloged and surveyed the unexplored corners of our world, and created taxonomies of the “lower races” as if they were just part of the local fauna. The reality is that Charles Darwin’s age was fundamentally one of white supremacy. In the year 1900, one out of three human beings alive was of European extraction. In the four centuries since Christopher Columbus, Europe and its diaspora had entered into massive demographic expansion—which many Victorians saw as survival of the fittest. Progressives of the late 19th and early 20th century, such as H. G. Wells, foresaw a future where the “higher races” would naturally marginalize those peoples who were lesser participants in civilization. Such was taken as the judgment of nature.
How 100 years do change things. And yet just as Darwin could not help but reflect the presuppositions of his era, so we in our day cannot help but channel the zeitgeist. Like Charles Darwin, today’s scholars have concluded that humans are fundamentally an African species. But unlike Darwin they conclude from this that there is a biological, essential unity of humankind, such that talk of “civilized” and “savage” is rendered moot and irrelevant. We do look through the mirror of our ages darkly, seeing startlingly different insights from the same shadows of reality. Whereas racist assumptions and beliefs were supported by interpretations of the science of the 19th century, today we attempt to harness science in the opposing direction.
The topic of human variation, and more plainly, race, is fraught. The past century has seen a wild swing from the widespread acceptance of the idea that human races are real, with big, important differences, to the opposite position: that race is fundamentally an illusion, a social construction of the human mind. But both of these arguments are mistaken. The established modern consensus about the equality of people, irrespective of race, is morally and ethically justified. But these beliefs we hold to be true do not derive from natural science, which doesn’t present a clear moral lesson.
By Luke Jostins, a postgraduate student working on the genetic basis of complex autoimmune diseases. Jostins has a strong background in informatics and statistical genetics, and writes about genetic epidemiology and sequencing technology on his blog Genetic Inference. A different version of this post appeared on the group blog Genomes Unzipped.
One of the great hopes for genetic medicine is that we will be able to predict which people will develop certain diseases, and then focus preventative measures on those at risk. Scientists have long known that one of the wrinkles in this plan is that we will only rarely be able to say with certainty whether someone will develop a given disease based on their genetics—more often, we can only give an estimate of their disease risk.
This realization came mostly from twin studies, which look at the disease histories of identical and non-identical twins. Twin studies use established models of genetic risk among families and populations, along with the different levels of similarity of identical and non-identical twins, to estimate how much of disease risk comes from genetic factors and how much comes from environmental risk factors. (See this post for more details.) There are some complexities here, and the exact model used can change the results you get, but in general the overall message is the same: genetic risk prediction contains a lot of information, but not enough to give guaranteed predictions of who will and who won’t get certain diseases. This is not only true of genetics either: parallel studies of environmental risk factors usually reveal tendencies and probabilities, not guarantees.
This means that two people with exactly the same weight, height, sex, race, diet, childhood infection exposures, vaccination history, family history, and environmental toxin levels will usually not get the same disease, but they are far more likely to than two individuals who differ in all those respects. To take an extreme example, identical twins, despite sharing the same DNA, socioeconomic background, childhood environment, and (generally) placenta, usually do not die from the same thing—but they are far more likely to than two random individuals. This is a perfect analogy for how well (and badly) risk prediction can work: you will never have a better prediction than knowing the health outcomes of a genetic copy of you. The health outcomes of another version of you will be invaluable, and will help guide you, your doctor, and the health-care establishment, if they use this information properly. But it won’t let them know exactly what will happen to you, because identical twins usually do not die from the same thing.
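The twin comparisons described above are what let researchers put rough numbers on how much of disease risk is genetic. As a minimal sketch, assuming the classic Falconer-style ACE decomposition rather than the full maximum-likelihood models actually used in twin studies, and using invented correlation values purely for illustration, the calculation looks like this:

```python
# Back-of-the-envelope ACE decomposition from twin-pair correlations.
# Identical (MZ) twins share essentially all their DNA; non-identical (DZ)
# twins share about half, which is what lets us split the variance.
# The correlation values in the example below are made up for illustration.

def ace_estimates(r_mz: float, r_dz: float) -> tuple[float, float, float]:
    """Estimate variance components from twin-pair trait correlations."""
    a2 = 2 * (r_mz - r_dz)   # additive genetic component ("heritability")
    c2 = 2 * r_dz - r_mz     # shared (family) environment
    e2 = 1 - r_mz            # unique environment plus measurement noise
    return a2, c2, e2


if __name__ == "__main__":
    # Hypothetical liability correlations for some disease
    a2, c2, e2 = ace_estimates(r_mz=0.60, r_dz=0.35)
    print(f"genetic: {a2:.2f}, shared env: {c2:.2f}, unique env: {e2:.2f}")
    # Even with substantial heritability, the unique-environment term means
    # genetics alone cannot give guaranteed predictions of who gets sick.
```

Even in this simplified form, the point stands: the numbers describe how risk is distributed across a population, not what will happen to any one person.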
There is no health destiny: There is always a strong random component in anything that happens to your body. This does not mean that none of these things are important; being aware of your disease risks is one of the most important things you can do for your own future health. But risk is not destiny. And this central fact has been well known to scientists for a while now.
This was the context in which a recent paper in Science Translational Medicine by Bert Vogelstein and colleagues was published, one that also used twin study data to ask how well genetics could predict disease. The take-home message from the study (or at least the message that many media outlets have taken home) is that DNA does not perfectly determine which disease or diseases you may get in the future. The paper was generally pretty flawed: many geneticists expressed annoyance at it, and Erika Check Hayden carried out a thorough investigation into the paper for the Nature News blog. In short, the study used a non-standard and arbitrary model of genetic risk, and failed to properly model the twin data, handling neither the many environmental confounders nor the large degree of uncertainty associated with studies of twins.
Many geneticists were annoyed that the authors seemed to be unaware of the existing literature on the subject, and that they presented their approach and their results as if they were novel and controversial at a well-attended press conference at the American Association for Cancer Research annual meeting. However, what came as more of a shock was how surprised the media as a whole seemed to be at the results, with headlines such as “DNA Testing Not So Potent for Prevention” and “Your DNA blueprint may disappoint.” No reporter (other than Erika) even mentioned the information that we already had about the limits of genetic risk prediction. As Joe Pickrell pointed out on Twitter, we can’t really know whether this was genuine surprise or merely newspapers hyping the message to make it seem more like news, but having talked to a few journalists and members of the public, the surprise appears to be at least in part genuine. The gap between the public perception and the established consensus on genetic risk prediction seemed to us to be unexpected and worrying.