Sophie Bushwick (Twitter, Tumblr) is a science journalist and podcaster, and is currently an intern at DISCOVERmagazine.com. She has written for Scientific American, io9, and DISCOVER, and has produced podcasts for 60-Second Science and Physics Central.
Human chromosomes (grey) capped by telomeres (white)
U.S. Department of Energy Human Genome Program
Renowned biologist Elizabeth Blackburn has said that when she was a young post-doc, “Telomeres just grabbed me and kept leading me on.” And lead her on they did—all the way to the Nobel Prize in Medicine in 2009. Telomeres are DNA sequences that continue to fascinate researchers and the public, partially because people with longer telomeres tend to live longer. So the recent finding that older men father offspring with unusually lengthy telomeres sounds like great news. Men of advanced age will give their children the gift of longer lives—right? But as is so often the case in biology, things aren’t that simple, and having an old father may not be an easy route to a long and healthy life.
Every time a piece of DNA gets copied, it can end up with errors in its sequence, or mutations. One of the most frequent changes is losing scraps of information from each end of the strand. Luckily, these strands are capped with telomeres, repeating sequences that do not code for any proteins and serve only to protect the rest of the DNA. Each time the DNA makes a copy, its telomeres get shorter, until these protective ends wear away to nothing. Without telomeres, the DNA cannot make any more copies, and the cell containing it will die.
But sperm are not subject to this telomere-shortening effect. In fact, the telomeres in sperm-producing stem cells not only resist degrading, they actually grow. This may be thanks to a high concentration of the telomere-repairing enzyme telomerase in the testicles; researchers are still uncertain. All they know is that the older the man, the longer the telomeres in his sperm will be.
Steve Silberman (@stevesilberman on Twitter) is a journalist whose articles and interviews have appeared in Wired, Nature, The New Yorker, and other national publications; have been featured on The Colbert Report; and have been nominated for National Magazine Awards and included in many anthologies. Steve is currently working on a book on autism and neurodiversity called NeuroTribes: Thinking Smarter About People Who Think Differently (Avery Books 2013). This post originally appeared on his blog, NeuroTribes.
Photo by Flickr user Noodles and Beef
Your doctor doesn’t like what’s going on with your blood pressure. You’ve been taking medication for it, but he wants to put you on a new drug, and you’re fine with that. Then he leans in close and says in his most reassuring, man-to-man voice, “I should tell you that a small number of my patients have experienced some minor sexual dysfunction on this drug. It’s nothing to be ashamed of, and the good news is that this side effect is totally reversible. If you have any ‘issues’ in the bedroom, don’t hesitate to call, and we’ll switch you to another type of drug called an ACE inhibitor.” OK, you say, you’ll keep that in mind.
Three months later, your spouse is on edge. She wants to know if there’s anything she can “do” (wink, wink) to reignite the spark in your marriage. She’s been checking out websites advertising romantic getaways. No, no, you reassure her, it’s not you! It’s that new drug the doctor put me on, and I hate it. When you finally make the call, your doctor switches you over to a widely prescribed ACE inhibitor called Ramipril.
“Now, Ramipril is just a great drug,” he tells you, “but a very few patients who react badly to it find they develop a persistent cough…” Your throat starts to itch even before you fetch the new prescription. Later in the week, you’re telling your buddy at the office that you “must have swallowed wrong” — for the second day in a row. When you type the words ACE inhibitor cough into Google, the text string auto-completes, because so many other people have run the same search, desperately sucking on herbal lozenges between breathless sips of water.
In other words, you’re doomed. Cough, cough!
Emily Elert is a science journalist and writer. Her work has appeared in DISCOVER, Popular Science, Scientific American, and On Earth Magazine.
Last month, CBS Boston aired a story about a man in Massachusetts who caught fire while operating a grill in his backyard. He wasn’t going crazy with lighter fluid, nor was he being careless with propane. No, the culprit was Banana Boat Sport Performance spray-on sunscreen.
But don’t be too quick to blame the orange bottle. After all, this kind of thing does occasionally happen when people spray flammable substances from aerosol cans in close proximity to burning coals. There are, however, other reasons to be suspicious of the summertime mainstay: several recent reports have raised questions about both the effectiveness and safety of sunscreens.
In fact, the National Cancer Institute, a branch of the NIH, declares on its website that studies on sunscreen use and cancer rates in the general population have provided “inadequate evidence” that sunscreens help prevent skin cancer. What’s more, research suggests that some sunscreens might even promote it.
Those are heavy charges for a product that people have long felt so good about using.
Emily Willingham (Twitter, Google+, blog) is a science writer and compulsive biologist whose work has appeared at Slate, Grist, Scientific American Guest Blog, and Double X Science, among others. She is science editor at the Thinking Person’s Guide to Autism and author of The Complete Idiot’s Guide to College Biology.
In March the US Centers for Disease Control and Prevention (CDC) released the newly measured autism prevalences for 8-year-olds in the United States, and headlines roared about a “1 in 88 autism epidemic.” The fear-mongering has led some enterprising folk to latch onto our nation’s growing chemophobia and link the rise in autism to “toxins” or other alleged insults, and some to sell their research, books, and “cures.” On the other hand, some researchers say that what we’re really seeing is likely the upshot of more awareness about autism and ever-shifting diagnostic categories and criteria.
Even though autism is now widely discussed in the media and society at large, the public and some experts alike are still stymied by a couple of the big, basic questions about the disorder: What is autism, and how do we identify—and count—it? A close look shows that the unknowns involved in both of these questions suffice to explain the reported autism boom. The disorder hasn’t actually become much more common—we’ve just developed better and more accurate ways of looking for it.
Leo Kanner first described autism almost 70 years ago, in 1943. Before that, autism didn’t exist as far as clinicians were concerned, and its official prevalence was, therefore, zero. There were, obviously, people with autism, but they were simply considered insane. Kanner himself noted in a 1965 paper that after he identified this entity, “almost overnight, the country seemed to be populated by a multitude of autistic children,” a trend that became noticeable in other countries, too, he said.
In 1951, Kanner wrote, the “great question” became whether to continue rolling autism into schizophrenia diagnoses, where it had previously been tucked away, or to consider it a separate entity. But by 1953, one autism expert was warning about the “abuse of the diagnosis of autism” because it “threatens to become a fashion.” Sixty years later, plenty of people are still asserting that autism is just a popular diagnosis du jour (along with ADHD), one that parents and doctors use to explain plain old bad behavior.
Asperger’s syndrome, a form of autism sometimes known as “little professor syndrome,” is in the same we-didn’t-see-it-before-and-now-we-do situation. In 1981, noted autism researcher Lorna Wing translated and revivified Hans Asperger’s 1944 paper describing this syndrome as separate from Kanner’s autistic disorder, although Wing herself argued that the two were part of a borderless continuum. Thus, prior to 1981, Asperger’s wasn’t a diagnosis, in spite of having been identified almost 40 years earlier. Again, the official prevalence was zero before its adoption by the medical community.
And so, here we are today, with two diagnoses that didn’t exist 70 years ago (plus a third, even newer one: PDD-NOS) even though the people with the conditions did. The CDC’s new data say that in the United States, 1 in 88 eight-year-olds fits the criteria for one of these three, up from 1 in 110 for its 2006 estimate. Is that change the result of an increase in some dastardly environmental “toxin,” as some argue? Or is it because of diagnostic changes and reassignments, as happened when autism left the schizophrenia umbrella?
Howard Brody, MD, PhD, is the John P. McGovern Centennial Chair in Family Medicine and Director of the Institute for the Medical Humanities at the University of Texas Medical Branch, Galveston.
For years, doctors thought that placebos like sugar pills were totally inert, just something to be given out to mollify a demanding patient without any expected health benefits. Gradually, both physicians and medical researchers came to realize that such treatments can sometimes cause substantial improvement of symptoms, even when there’s no chemical or other biomedical explanation for what occurs—a phenomenon called the placebo effect. In a recent commentary in the Journal of Medical Ethics, Cory Harris and Amir Raz of McGill summarize the data from recent surveys of physician use of placebos in clinical practice in several nations.
They find that prescribing drugs like antibiotics or supplements like vitamins as placebos is now a widespread practice. This is happening without any public guidelines or regulations for placebos’ use, which raises an important question: How, exactly, should physicians be using the placebo effect to help patients?
This discussion is necessary because the understanding of the placebo effect is changing, and fast. In the past decade, scientists have used brain-scanning to see just which parts of the brain, and in what order, become active when a patient takes a placebo pill for various conditions. Other investigators have looked more closely at the treatment environment and sorted out what parts of that environment rev up a placebo response. For example, seeing a nurse inject a painkiller into your IV line gives you roughly twice as much pain relief as having the same dose of medicine administered by a hidden pump. Getting acupuncture treatment from a warm and friendly practitioner works better than the same treatment from a cold, distant one. There’s even some preliminary evidence to suggest that patients experience positive placebo effects even when told frankly that the pills they are taking are placebos, with no active chemical ingredients.
This research—and perhaps personal experience—has changed the way doctors view the importance of their patients’ mental states. Surveys from 20–30 years ago found a general belief among physicians that placebos were completely inert and powerless, and that if any good effect occurred, it was only in the patient’s imagination. The newer surveys, one of which I participated in, show a small revolution in physician thinking about mind-body relations. Physicians today generally agree that placebos can actually have a positive effect on the patient’s body, and that mind-body medicine “works.” That’s important, and has not been sufficiently noted.
The American Psychiatric Association have just published the latest update of the draft DSM-5 psychiatric diagnosis manual, which is due to be completed in 2013. The changes have provoked much comment, criticism, and heated debate, and many have used the opportunity to attack psychiatric diagnosis and the perceived failure to find “biological tests” to replace descriptions of mental phenomena. But to understand the strengths and weaknesses of psychiatric diagnosis, it’s important to know where the challenges lie.
Think of classifying mental illness like classifying literature. For the purposes of research, and for helping people with their reading, I want to be able to say whether a book falls within a certain genre—perhaps supernatural horror, romantic fiction, or historical biography. The problem is similar because both mental disorder and literature are largely defined at the level of meaning, which inevitably involves our subjective perceptions. For example, there is no objective way of defining whether a book is a love story or whether a person has a low mood. This fact is used by some to suggest that the diagnosis of mental illness is just “made up” or “purely subjective,” but this is clearly rubbish. Although the experience is partly subjective, we can often agree on classifications.
Speaking the same language
How well people can agree on a classification is known as inter-rater reliability, and to have a diagnosis accepted, you should ideally demonstrate that different people can use the same definition to classify the same cases in the same way. In other words, we want to be sure that we’re all speaking the same language—when one doctor says a patient has “depression,” another should agree. To do this, it’s important to have definitions that are easy to interpret and apply, and that rely on widely recognised features.
To return to our literature example, it’s possible to define romantic fiction in different ways, but if I want to make sure that other people can use my definition it’s important to choose criteria that are clear, concise, and easily applicable. It’s easier to decide whether the book has “a romantic relationship between two of the main characters” than whether the book involves “an exploration of love, loss and the yearning of the heart.” Similarly, “low mood” is easier to detect than a “melancholic temperament.”
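The post doesn’t name a particular statistic, but inter-rater reliability is commonly quantified with measures such as Cohen’s kappa, which corrects raw agreement for the agreement two raters would reach by chance. Here is a minimal sketch; the two hypothetical clinicians and their ratings are made up for illustration, not drawn from the post.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    expected = sum(counts_a[label] * counts_b[label] for label in labels) / n ** 2
    return (observed - expected) / (1 - expected)

# Two hypothetical clinicians classifying the same ten cases (made-up data):
doctor_1 = ["depression", "depression", "none", "anxiety", "none",
            "depression", "anxiety", "none", "depression", "none"]
doctor_2 = ["depression", "none", "none", "anxiety", "none",
            "depression", "anxiety", "anxiety", "depression", "none"]

print(f"Cohen's kappa: {cohens_kappa(doctor_1, doctor_2):.2f}")  # ~0.70
```

A kappa near 1 means the raters apply the definition almost identically; a kappa near 0 means they agree no more often than chance would predict.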
Eric Michael Johnson has a master’s degree in evolutionary anthropology focusing on great ape behavioral ecology. He is currently a doctoral student in the history of science at University of British Columbia looking at the interplay between evolutionary biology and politics. He blogs at The Primate Diaries at Scientific American, where this post originally appeared.
“Attachment (with respect to Martin Schoeller),” by Nathaniel Gold
My son will be 3 years old next month and is still breastfeeding. In other words, he is a typical primate. However, when I tell most people about this, the reactions I receive run the gamut from mild confusion to serious discomfort. Their concerns are usually that extended breastfeeding could be stunting his independence and emotional development–the “Linus Blanket Syndrome” in the words of Michael Zollicoffer, a pediatrician at the Herman & Walter Samuelson Children’s Hospital at Sinai Hospital in Baltimore. Worse yet, they hint that it might even cause “destructive” psychosexual problems that he will be burdened with throughout his adult life. Could they be right? Was our choice “a prescription for psychological disaster,” as Fox News psychiatrist Keith Ablow wrote in response to TIME magazine’s provocative cover article on attachment parenting? Just when is the natural age to stop breastfeeding?
One thing I’ve learned in my research on human evolution is that people are quick to assume that what they do is “natural” simply because they don’t know of other examples where things are done differently. The primate brain is a pattern recognition machine and is adapted to quickly identify regularities in our environment. But when we are presented with the same pattern over and over again, it is easy to fall victim to what is known as confirmation bias, or coming to false conclusions because the evidence we use does not come from a broad enough sample. To avoid falling for this bias on the question of extended breastfeeding, the best way forward is to draw from the largest sample possible: the entire primate lineage.
In their classic paper “Life History Variation in Primates,” published in the premier scientific journal Evolution, the British zoologists Paul H. Harvey at Oxford and Tim Clutton-Brock at Cambridge compiled the most comprehensive data then available on the world’s primates. The variables they measured included everything from litter size and age at weaning to adult female body weight and length of the estrous cycle among 135 primate species (including humans). By analyzing the relationships between these variables, using a statistical approach known as regression analysis, they identified striking patterns that held across primate taxa.
One especially strong correlation was that adult female body weight was closely tied to their offspring’s weaning age, so much so that knowing the first would allow you to predict the second with a 91% success rate. As a result, as anthropologist Katherine A. Dettwyler has shown in her book Breastfeeding: Biocultural Perspectives (co-edited with Patricia Stuart-Macadam), it can be calculated that a young primate’s weaning age in days is equal to 2.71 times the mother’s body weight in grams raised to the 0.56 power. This calculation predicts, given the range of female body sizes around the world from the !Kung San of southern Africa to the Arctic Inuit, that humans should have an average weaning age of between 2.8 and 3.7 years old.
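To make the arithmetic concrete, here is a minimal sketch of that calculation, using the relationship exactly as stated above (weaning age in days = 2.71 × maternal body weight in grams, raised to the 0.56 power). The example maternal weights are assumptions chosen only to roughly bracket the range of adult female body sizes the post alludes to.

```python
def predicted_weaning_age_days(maternal_weight_g: float) -> float:
    """Predicted weaning age in days from maternal body weight in grams,
    per the allometric relationship described above."""
    return 2.71 * maternal_weight_g ** 0.56

# Assumed example weights, roughly spanning a plausible range of adult female sizes:
for weight_kg in (40, 65):
    days = predicted_weaning_age_days(weight_kg * 1000)
    print(f"{weight_kg} kg mother -> ~{days:.0f} days, or ~{days / 365:.1f} years")
```

Run as written, the sketch predicts weaning at roughly 2.8 and 3.7 years for the two assumed weights, matching the range quoted above.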
Delegates to Indiana’s constitutional convention worked under this tree in 1816.
It later succumbed to Dutch elm disease.
Unless you have a weakened immune system or a stubborn case of athlete’s foot, it’s unlikely you spend much time worrying about fungi. And you shouldn’t—fungal diseases are not generally a big problem for a healthy person; common ones like athlete’s foot are annoying but not serious. In terms of infections, it’s bacteria, parasites, and viruses that kill us.
But the rest of nature tells a different story. According to a recent review of fungal diseases in Nature, fungi are responsible for 72% of the local extinctions of animals and 64% among plants. White nose syndrome in bats and Dutch elm disease are two high-profile examples of extremely deadly fungal diseases gaining wider ranges through global trade. While each fungus itself is unique, many fungal pathogens share several special abilities that make them especially lethal.
Unlike viruses and most bacteria, fungi can survive—and survive for years—in dry or frigid environments outside of hosts. All they need to do is make spores: small, hardy reproductive structures containing all the necessary DNA to grow a new fungus. As spores, fungi can tough out adverse conditions and drift thousands of miles in the wind to find more livable settings. Aspergillus sydowii, for example, hitches a ride in dust storms from Africa to the Caribbean, where it infects coral reefs. Spores are also ubiquitous in the air; there are one to ten in every breath you take. Wheat stem rust, a common fungus that causes $60 billion of crop damage a year, produces up to 10¹¹ spores per hectare, and they can travel 10,000 kilometers through the atmosphere to find new hosts. That’s only taking into account one of its five spore forms, which are produced at different times in its life cycle. For plants in general, fungi are the number one infectious threat, far above bacteria or viruses.
Many fungi are also generalists that use a scorched-earth strategy to parasitize a wide range of hosts. To invade host cells, viruses need to sneak their way in by fitting into specific proteins like a key in a lock. Because viruses need this precision, it’s hard for them to jump from one species to another with a different set of proteins, and it’s a big deal when it does happen. Fungi, on the other hand, don’t need to enter cells; like the mold that eats your bread, they squirt their digestive juices and rot everything in sight. While viruses nimbly pick your locks, fungi are like a bomb that will blow up your door—or anyone else’s.
As these plots of bacterial diversity in two subjects over a period of 16 weeks show, microbiomes vary widely among women and change radically over time.
When John Mayer sang “Your Body is a Wonderland,” he probably wasn’t talking about the trillions of microbes that live all over your skin and inside every orifice you have to offer—but it does pretty much describe things. In the last decade or so, scientists have confirmed that we’re just as much an ecosystem as a rainforest is: full of ecological niches inhabited by countless bacteria, many of which have been evolving with us for millions of years. Our tiny passengers aren’t passive, either. Studies in mice and some in humans have linked these microbial populations, or microbiomes, to the host’s digestion, gut health, behavior, and even mood. A healthy microbiome keeps the host’s systems in good working order and prevents invasion by microbes that mean us harm.
But what is a healthy microbiome, exactly? That’s an important question, since diagnosing and treating illnesses related to microbiome imbalance requires some definition of normal. In the first few studies to try to address this question, scientists have found that there are some patterns: one study suggested that there could be three gut microbiome “types,” similar to blood types. Since proposed treatments for microbiome problems include “transfusions” of bacteria from a healthy microbiome (including “fecal transplants”), this is an attractive analogy. But a new study on the human vaginal microbiome suggests that the real story might be much more complicated.
Studies exploring a healthy microbiome often look at a single sample from each person. But it turns out that if you sample someone regularly, at least in the case of the vagina, you can watch the entire microbiome change radically—to the point of becoming unrecognizable—in a matter of days.
Christina Agapakis is a synthetic biologist and postdoctoral research fellow at UCLA who blogs about biology, engineering, biological engineering, and biologically inspired engineering at Oscillator.
When you factor in the fertilizer needed to grow animal feed and the sheer volume of methane expelled by cows (mostly, though not entirely, from their mouths), a carnivore driving a Prius can contribute more to global warming than a vegan in a Hummer. Given the environmental toll of factory farming, it’s easy to see why people get excited about the idea of meat grown in a lab, without fertilizer, feed corn, or burps.
In this vision of the future, our steaks are grown in vats rather than in cows, with layers of cow cells nurtured on complex machinery to create a cruelty-free, sustainable meat alternative. The technology involved is today used mainly to grow cells for pharmaceutical development, but that hasn’t stopped several groups from experimenting with “in vitro meat,” as it’s called, over the last decade. In fact, a team of tissue engineers led by professor Mark Post at Maastricht University in the Netherlands recently announced their goal to make the world’s first in vitro hamburger by October 2012. The price tag is expected to be €250,000 (over $330,000), but we’re assured that as the technology scales up to industrial levels over the next ten years, the cost will scale down to mass-market prices.
Whenever I hear about industrial scaling as a cure-all, my skeptic alarms start going off, because scaling is the deus ex machina of so many scientific proposals, often minimized by scientists (myself included) as simply an “engineering problem.” But when we’re talking about food and sustainability, that scaling is exactly what feeds a large and growing population. Scaling isn’t just an afterthought, it’s often the key factor that determines if a laboratory-proven technology becomes an environmentally and economically sustainable reality. Looking beyond the hype of “sustainable” and “cruelty-free” meat to the details of how cell culture works exposes just how difficult this scaling would be.