Mixed breed. Mongrel. Roadside setter. A something-something. Dogs of uncertain provenance get called a lot of things. When the animal arrives at a shelter, staff usually can make only an educated guess about the dog’s parentage.
Most of the dogs at my local animal control are assessed as “pit mixes” upon arrival — including the three I’ve adopted over the past two years. But a pit bull isn’t a breed: it’s just a type of dog characterized by a short coat, a muscular frame, and a broad, oversized head.
All three of my dogs clearly — at least to my eyes — showed signs of specific breeds somewhere in their heritage. Tall and snow-white Pullo looks like the breed standard for an American Bulldog. Tyche’s body is svelte like a boxer’s and inky black like some Labs’. And lanky, long-limbed Waldo sometimes bays like a hound, especially when treeing squirrels.
Guessing my dogs’ breeds was a fun parlor game, but I wanted more definitive answers. So I turned to science. And, well, let’s just say it’s a good thing I didn’t place any bets on what was in my dogs’ family trees.
A version of this article originally appeared at The Conversation.
There could be a way of predicting which children will go on to have low intelligence – and of preventing it – according to the findings of a study researchers at Cardiff University presented on Monday. They discovered that children with two copies of a common gene variant (Thr92Ala), together with low levels of thyroid hormone, are four times more likely to have a low IQ. This combination occurs in about 4% of the UK population.
Importantly, children with just one of these factors, but not both, did not appear to be at increased risk of low intelligence. These are early results, but they suggest that it might be possible to treat children early with thyroid hormone supplementation to enhance their intelligence. This raises many ethical issues.
A common objection is that being smarter does not make your life better. In this study, researchers were concerned with those with an IQ between 70 and 85. An IQ below 70 is classified as intellectual disability, but an IQ of 70 to 75 is functionally similar to mild intellectual disability.
Even for individuals with an IQ between 75 and 90 there are still significant disadvantages. Job opportunities tend to be the least desirable and least financially rewarding, requiring significant oversight. More than half the people with this IQ level fail to reach the minimum recruitment standards for the US military. Individuals with this lower level of intelligence are at significant risk of living in poverty (16%), being a chronic welfare dependent (17%) and dropping out of school (35%) compared to individuals with average intelligence. Studies show that they also face an increased risk of incarceration and being murdered.
Linda Gottfredson, who’s undertaken much of this research, concludes that at the very least, “an IQ of 75 is perhaps the most important threshold in modern life”. So it is clear that those of low-normal intelligence, although not classified as disabled, are significantly disadvantaged.
If we could enhance their intelligence, say with thyroid hormone supplementation, we should.
By Eliza Strickland
What can you learn from getting your genome sequenced? If you’re a relatively healthy person like me, the answer is, not much… at least not yet.
I embarked on a mission to get myself sequenced for my recent article “The Gene Machine and Me.” The article focused on the sequencing technology that will soon enable a full scan of a human genome for $1000, and to make the story come alive, I decided to go through the process myself. I got my DNA run through the hottest new sequencing machine, the Ion Proton, and had it analyzed by some of the top experts on genome sequencing, a team at Houston’s Baylor College of Medicine.
The Baylor team has been intimately involved in many of the most important advances of genome sequencing over the last decade. And their accomplishments reveal both the astoundingly rapid progress of the technology, and how far we have yet to go. Here’s a synopsis: the story of five genomes.
Carrie Arnold is a freelance science writer in Virginia. She blogs about the science of eating disorders at www.edbites.com, and frequently covers microbiology topics for national magazines.
Conservationists like to think large. Whether it’s identifying hundreds of square miles of Himalayan highlands as a tiger corridor or creating massive marine preserves, these scientists are definitely thinking on the macro scale.
However, a small but growing group of scientists is beginning to think smaller when it comes to conservation—much smaller. They have begun to study the microbes living in the soil, and their results are showing just how important microscopic life is in the macroscopic world. A healthy, diverse population of soil microbes results in a healthy, diverse ecosystem. Changing an ecosystem also changes its microbes, scientists have found, and this may permanently scar the environment.
“Soil is not sterile,” said Noah Fierer, a microbiologist at the University of Colorado at Boulder. “These microbes are crucial to maintaining soil fertility.”
A new toxicology study states that rats eating genetically modified food and the weedkiller Roundup develop huge tumors and die. But many scientists beg to differ, and a close look at the study shows why.
Genetically modified organisms (GMOs) have always been a controversial topic. On one hand are the many benefits: the higher crop yields from herbicide-tolerant and insect-resistant crops, and the nutritional modifications that can make such a difference in malnourished populations. On the other hand is the question that concerns many people: we are modifying the genes of our food, and what does that mean for our health? These are important questions, but the new study claiming to answer them misses the mark. It has many horrifying pictures of rats with tumors, but without knowledge about the control rats, what do those tumors mean? Possibly, nothing at all.
The recent study, from the journal Food and Chemical Toxicology, has fueled the worst fears of the GMO debate. The study, by Italian and French groups, evaluated groups of rats fed different concentrations of maize (corn) tolerant to Roundup, or Roundup alone, over a two-year period, the longest type of toxicology study. (For an example of one performed in the U.S., see here.) The group looked at the mortality rates in the aging rats, as well as the causes of death, and took multiple samples to assess kidney, liver, and hormonal function.
The presented results look like a toxicologist’s nightmare. The authors reported high rates of tumor development in the rats fed Roundup and the Roundup-tolerant maize. There are figures of rats with visible tumors, and graphs showing death rates that appear to begin early in the rats’ lifespan. The media, of course, picked up on it, and one site in particular has spawned reports that sound like mass hysteria. It was the first study to show genetically modified foods producing tumors at all, let alone ones as drastic as those shown in the paper.
Sophie Bushwick (Twitter, Tumblr) is a science journalist and podcaster, and is currently an intern at DISCOVERmagazine.com. She has written for Scientific American, io9, and DISCOVER, and has produced podcasts for 60-Second Science and Physics Central.
Human chromosomes (grey) capped by telomeres (white)
U.S. Department of Energy Human Genome Program
Renowned biologist Elizabeth Blackburn has said that when she was a young post-doc, “Telomeres just grabbed me and kept leading me on.” And lead her on they did—all the way to the Nobel Prize in Medicine in 2009. Telomeres are DNA sequences that continue to fascinate researchers and the public, partially because people with longer telomeres tend to live longer. So the recent finding that older men father offspring with unusually lengthy telomeres sounds like great news. Men of advanced age will give their children the gift of longer lives—right? But as is so often the case in biology, things aren’t that simple, and having an old father may not be an easy route to a long and healthy life.
Every time a piece of DNA gets copied, it can end up with errors in its sequence, or mutations. One of the most frequent changes is losing scraps of information from each end of the strand. Luckily, these strands are capped with telomeres, repeating sequences that do not code for any proteins and serve only to protect the rest of the DNA. Each time the DNA makes a copy, its telomeres get shorter, until these protective ends wear away to nothing. Without telomeres, the DNA cannot make any more copies, and the cell containing it will die.
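The copying arithmetic described above can be put into a toy simulation. This is only an illustration of the mechanism, not a biological model: the starting length and per-division loss below are invented round numbers, not measured values.

```python
# Toy model of telomere shortening: each cell division trims the telomere cap,
# and the cell can no longer divide once the cap is exhausted.

def divisions_until_senescence(telomere_length: int, loss_per_division: int) -> int:
    """Count how many times a cell can copy its DNA before its telomeres run out."""
    divisions = 0
    while telomere_length >= loss_per_division:
        telomere_length -= loss_per_division  # each copy shortens the cap
        divisions += 1
    return divisions

# Illustrative numbers only: a 10,000-unit cap losing 100 units per division
print(divisions_until_senescence(telomere_length=10_000, loss_per_division=100))  # 100
```

The point of the sketch is simply that the divisions are finite: once the protective cap is gone, copying stops, which is why telomerase activity (next paragraph) matters.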
But sperm are not subject to this telomere-shortening effect. In fact, the telomeres in sperm-producing stem cells not only resist degrading, they actually grow. This may be thanks to a high concentration of the telomere-repairing enzyme telomerase in the testicles; researchers are still uncertain. All they know is that the older the man, the longer the telomeres in his sperm will be.
By Luke Jostins, a postgraduate student working on the genetic basis of complex autoimmune diseases. Jostins has a strong background in informatics and statistical genetics, and writes about genetic epidemiology and sequencing technology on his blog Genetic Inference. A different version of this post appeared on the group blog Genomes Unzipped.
One of the great hopes for genetic medicine is that we will be able to predict which people will develop certain diseases, and then focus preventative measures on those at risk. Scientists have long known that one of the wrinkles in this plan is that we will only rarely be able to say with certainty whether someone will develop a given disease based on their genetics—more often, we can only give an estimate of their disease risk.
This realization came mostly from twin studies, which look at the disease histories of identical and non-identical twins. Twin studies use established models of genetic risk among families and populations, along with the different levels of similarity of identical and non-identical twins, to estimate how much of disease risk comes from genetic factors and how much comes from environmental risk factors. (See this post for more details.) There are some complexities here, and the exact model used can change the results you get, but in general the overall message is the same: genetic risk prediction contains a lot of information, but not enough to give guaranteed predictions of who will and who won’t get certain diseases. Nor is this true only of genetics: parallel studies of environmental risk factors usually reveal tendencies and probabilities, not guarantees.
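The twin-study logic above can be sketched with Falconer's classic formula, a deliberate simplification of the fuller models the post alludes to: identical (MZ) twins share essentially all their DNA while fraternal (DZ) twins share about half, so the gap between their trait correlations gives a rough heritability estimate. The formula is standard; the correlation values below are invented purely for illustration.

```python
# Falconer's formula: estimate heritability from twin correlations.
# MZ twins share ~100% of their DNA, DZ twins ~50%, so any excess
# similarity of MZ pairs over DZ pairs points to genetic factors.

def falconer_heritability(r_mz: float, r_dz: float) -> float:
    """Broad heritability estimate: h^2 = 2 * (r_MZ - r_DZ)."""
    return 2 * (r_mz - r_dz)

# Made-up correlations for some hypothetical disease:
r_mz = 0.60  # similarity of identical twin pairs
r_dz = 0.35  # similarity of fraternal twin pairs

h2 = falconer_heritability(r_mz, r_dz)
print(f"Estimated heritability: {h2:.2f}")
```

With these invented numbers the estimate is 0.50: about half the variation in risk would be attributed to genetics, and the rest to environment and chance — which is exactly why, as the post argues, even a perfect genetic readout yields probabilities rather than guarantees.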
This means that two people with exactly the same weight, height, sex, race, diet, childhood infection exposures, vaccination history, family history, and environmental toxin levels will usually not get the same disease, but they are far more likely to than two individuals who differ in all those respects. To take an extreme example, identical twins, despite sharing the same DNA, socioeconomic background, childhood environment, and (generally) placenta, usually do not die from the same thing—but they are far more likely to than two random individuals. This is a perfect analogy for how well (and badly) risk prediction can work: you will never have a better prediction than knowing the health outcomes of a genetic copy of you. The health outcomes of another version of you will be invaluable, and will help guide you, your doctor, and the health-care establishment, if they use this information properly. But it won’t let them know exactly what will happen to you, because identical twins usually do not die from the same thing.
There is no health destiny: There is always a strong random component in anything that happens to your body. This does not mean that none of these things are important; being aware of your disease risks is one of the most important things you can do for your own future health. But risk is not destiny. And this central fact has been well known to scientists for a while now.
This was the context in which a recent paper in Science Translational Medicine by Bert Vogelstein and colleagues was published; it also used twin study data to ask how well genetics could predict disease. The take-home message from the study (or at least the message that many media outlets have taken home) is that DNA does not perfectly determine which disease or diseases you may get in the future. The paper was generally pretty flawed: many geneticists expressed annoyance at it, and Erika Check Hayden carried out a thorough investigation of the paper for the Nature News blog. In short, the study used a non-standard and arbitrary model of genetic risk, and failed to properly model the twin data, handling neither the many environmental confounders nor the large degree of uncertainty associated with studies of twins.
Many geneticists were annoyed that the authors seemed to be unaware of the existing literature on the subject, and that they presented their approach and their results as if they were novel and controversial at a well-attended press conference at the American Association for Cancer Research annual meeting. However, what came as more of a shock was how surprised the media as a whole seemed to be at the results, with headlines such as “DNA Testing Not So Potent for Prevention” and “Your DNA blueprint may disappoint.” No reporter (other than Erika) even mentioned the information we already had about the limits of genetic risk prediction. As Joe Pickrell pointed out on Twitter, we can’t really know whether this was genuine surprise or merely newspapers hyping the message to make it seem more like news, but having talked to a few journalists and members of the public, the surprise appears to be at least in part genuine. The gap between the public perception and the established consensus on genetic risk prediction seemed to us unexpected and worrying.
By now you may have heard about Oxford Nanopore’s new whole-genome sequencing technology, which promises to take the enterprise of sequencing an individual’s genome out of the basic science laboratory and into the consumer mass market. From what I gather, the hype is not just vaporware; it’s a foretaste of what’s to come. But at the end of the day, this particular device is not the important point in any case. Do you know which firm popularized television? Probably not. When technology goes mainstream, it ceases to be buzzworthy. Rather, it becomes seamlessly integrated into our lives and disappears into the fabric of our daily background humdrum. The banality of what was innovation is a testament to its success. We’re on the cusp of the age when genomics becomes banal, and cutting-edge science becomes everyday utility.
Granted, the short-term impact of mass personal genomics is still going to be exceedingly technical. Scientific genealogy nuts will purchase the latest software, and argue over the esoteric aspects of “coverage” (the redundancy of the sequence data, which correlates with accuracy) and the necessity of supplementing the genome with the epigenome. Physicians and other health professionals will add genomic information to the arsenal of their diagnostic toolkit, and an alphabet soup of new genome-related terms will wash over you as you visit a doctor’s office. Your genome is not you, but it certainly informs who you are. Your individual genome will become ever more important to your health care.
The phylogeny of Prozac yogurt.
Christina Agapakis is a synthetic biologist and postdoctoral research fellow at UCLA who blogs about biology, engineering, biological engineering, and biologically inspired engineering at Oscillator.
A few weeks ago, I saw a retweet that claimed “biohacking is easier than you think,” with a link to a post on a blog accompanying a book called Massively Networked. The post included video of Tuur van Balen’s presentation at the NextNature power show a few months earlier. Van Balen is a designer whose work I’ve followed for a couple of years now, and his most recent project imagines how synthetic biology might produce and deliver medicines in the future. He demonstrates—using homemade tools, equipment purchased on eBay, and online resources for finding and synthesizing DNA sequences—how someone could engineer a strain of bacteria to produce Prozac-laced yogurt. While he’s not actually making Prozac, his demonstration does show pretty accurately how someone could get DNA into a bacterium (without, of course, the frustrating months of troubleshooting that almost any experiment inevitably requires). I posted my own version of the story, writing that art projects like this can ask important questions about biological design.
The next day, my post was syndicated on the Huffington Post with a modified title that emphasized Prozac. Then a version appeared on Gizmodo, and it went on from there, spreading across the Internet. By the time its spread was complete, Van Balen, an artist interested in the implications of emerging biotechnologies, had mutated into a bioengineer at the forefront of synthetic biology research, creating Prozac yogurt in five days with just 860 base pairs of DNA. (If you were to actually make Prozac biologically, it would certainly take the action of many enzymes, each encoded by its own sequence of hundreds or thousands of base pairs.)
How did an art piece, a design fiction that asks us to think critically about the possibilities opened up by synthetic biology, provoke an unskeptical acceptance of what bioengineering has made possible? Perhaps I should have been clearer in my post, or perhaps it’s the fault of sensationalized click-bait headlines. But I think it may be that we’ve become so accustomed to the hype surrounding the science of genes and DNA, so used to hearing about groundbreaking genetics, from the “gene for dry ear wax” to the “gene for Alzheimer’s” to the “gene for [common human behavior]” that we don’t think twice when we hear about mixing bacteria with the “gene for Prozac” to create antidepressant yogurt.
Every few years, it seems, the British biologist Steve Jones declares the death of evolution by natural selection in the human species. The logic here is simple even to a schoolboy: evolution requires variation in fitness, and with declining risk of death during our reproductive years, humans have abolished the power of selection. But this mistakes the symptom for the disease. Death is simply one way that natural selection can occur. Michelle Duggar has 19 children. The average American woman has around two by the end of her reproductive years. It doesn’t take a math whiz to figure out that Michelle Duggar is more “fit” in the evolutionary sense than the average bear. Even without high rates of death, some people have more children than others, and if those who have more children differ in inherited traits from those who have fewer, evolution must occur. Q.E.D.
But you probably shouldn’t be convinced by logic alone. Science requires theory, experiment, and observation. (If you’re talking humans, you can remove the second from the list of possibilities: there are certain unavoidable ethical obstacles to experimenting on human evolution—plus we take far too long to reproduce.) But humans sometimes have something that bacteria cannot boast: pedigrees! Not all humans, of course. Like most of the world’s population, I don’t have much of a pedigree beyond my great-grandparents’ generation. But luckily for biologists, the Catholic Church has long taken a great interest in life events such as baptism, marriage, and death, and recorded this information parish by parish. With these basic variables, demographers can infer the rough life histories of many local populations over the centuries. In many European nations, these databases go back more than 10 generations. And some aspects of human evolution are revealed by these records.
What aspects am I talking about? Reproduction itself. Not only is variation in fitness one of the primary ways by which evolution occurs, it is also a trait upon which evolution operates! How else are there rabbits that breed like…rabbits, and pandas…that don’t? There is often variation within species in the odds of multiple births, age at first reproduction, and lifespan, depending upon the differences in selection pressures across a population. And that seems to be exactly what occurs in human beings. There is interesting evidence for the evolution of reproductive patterns from populations as diverse as African pygmies and Finns, but more recently some researchers have been plumbing the depths of the records of the Roman Catholic Church in Quebec, and they’ve come back with gold.