By Luke Jostins, a postgraduate student working on the genetic basis of complex autoimmune diseases. Jostins has a strong background in informatics and statistical genetics, and writes about genetic epidemiology and sequencing technology on his blog Genetic Inference. A different version of this post appeared on the group blog Genomes Unzipped.
One of the great hopes for genetic medicine is that we will be able to predict which people will develop certain diseases, and then focus preventative measures on those at risk. Scientists have long known that one of the wrinkles in this plan is that we will only rarely be able to say with certainty whether someone will develop a given disease based on their genetics—more often, we can only give an estimate of their disease risk.
This realization came mostly from twin studies, which look at the disease histories of identical and non-identical twins. Twin studies use established models of genetic risk among families and populations, along with the different levels of similarity of identical and non-identical twins, to estimate how much of disease risk comes from genetic factors and how much comes from environmental risk factors. (See this post for more details.) There are some complexities here, and the exact model used can change the results you get, but in general the overall message is the same: genetic risk prediction contains a lot of information, but not enough to give guaranteed predictions of who will and who won’t get certain diseases. Nor is this true only of genetics: parallel studies of environmental risk factors usually reveal tendencies and probabilities, not guarantees.
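To make the twin-study logic above concrete, here is a minimal sketch of the classic Falconer-style estimate, which compares how similar identical (monozygotic) twins are to how similar fraternal (dizygotic) twins are. The correlation numbers below are invented for illustration, not taken from any real study, and real disease analyses use more elaborate liability-threshold models with the confounders and uncertainty the text mentions.

```python
def falconer_heritability(r_mz, r_dz):
    """Simple broad-sense heritability estimate: H^2 = 2 * (r_MZ - r_DZ).

    r_mz: trait correlation between identical (monozygotic) twins
    r_dz: trait correlation between fraternal (dizygotic) twins
    Identical twins share ~100% of their DNA, fraternal twins ~50%,
    so the excess similarity of identical twins is attributed to genetics.
    """
    return 2 * (r_mz - r_dz)


def shared_environment(r_mz, r_dz):
    """Shared-environment estimate under the same simple logic:
    c^2 = 2 * r_DZ - r_MZ (whatever similarity genetics can't explain
    but both kinds of twin pair share)."""
    return 2 * r_dz - r_mz


# Hypothetical numbers: identical twins correlate 0.70 on a trait,
# fraternal twins 0.45.
h2 = falconer_heritability(0.70, 0.45)  # 0.50: half the variance genetic
c2 = shared_environment(0.70, 0.45)    # 0.20: shared environment
```

Note that even a heritability of 0.50 means a genetic copy of you is far from a guaranteed predictor, which is exactly the "risk, not destiny" point the surrounding text makes.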
This means that two people with exactly the same weight, height, sex, race, diet, childhood infection exposures, vaccination history, family history, and environmental toxin levels will usually not get the same disease, but they are far more likely to than two individuals who differ in all those respects. To take an extreme example, identical twins, despite sharing the same DNA, socioeconomic background, childhood environment, and (generally) placenta, usually do not die from the same thing—but they are far more likely to than two random individuals. This is a perfect analogy for how well (and badly) risk prediction can work: you will never have a better prediction than knowing the health outcomes of a genetic copy of you. The health outcomes of another version of you will be invaluable, and will help guide you, your doctor, and the health-care establishment, if they use this information properly. But it won’t let them know exactly what will happen to you, because identical twins usually do not die from the same thing.
There is no health destiny: There is always a strong random component in anything that happens to your body. This does not mean that none of these things are important; being aware of your disease risks is one of the most important things you can do for your own future health. But risk is not destiny. And this central fact has been well known to scientists for a while now.
This was the context in which a recent paper in Science Translational Medicine by Bert Vogelstein and colleagues was published, which also used twin study data to ask how well genetics could predict disease. The take-home message from the study (or at least the message that many media outlets have taken home) is that DNA does not perfectly determine which disease or diseases you may get in the future. The paper was generally pretty flawed: many geneticists expressed annoyance at it, and Erika Check Hayden carried out a thorough investigation into the paper for the Nature News blog. In short, the study used a non-standard and arbitrary model of genetic risk, and failed to properly model the twin data, handling neither the many environmental confounders nor the large degree of uncertainty associated with studies of twins.
Many geneticists were annoyed that the authors seemed to be unaware of the existing literature on the subject, and that they presented their approach and their results as if they were novel and controversial at a well-attended press conference at the American Association for Cancer Research annual meeting. However, what came as more of a shock was how surprised the media as a whole seemed to be at the results, with headlines such as “DNA Testing Not So Potent for Prevention” and “Your DNA blueprint may disappoint.” No reporter (other than Erika) even mentioned the information that we already had about the limits of genetic risk prediction. As Joe Pickrell pointed out on Twitter, we can’t really know whether this was genuine surprise or merely newspapers hyping the message to make it seem more like news, but having talked to a few journalists and members of the public, the surprise appears to be at least in part genuine. The gap between the public perception and the established consensus on genetic risk prediction seemed to us to be unexpected and worrying.
By now you may have heard about Oxford Nanopore’s new whole-genome sequencing technology, which has the promise of taking the enterprise of sequencing an individual’s genome out of the basic science laboratory, and out to the consumer mass market. From what I gather the hype is not just vaporware; it’s a foretaste of what’s to come. But at the end of the day, this particular device is not the important point in any case. Do you know which firm popularized television? Probably not. When technology goes mainstream, it ceases to be buzzworthy. Rather, it becomes seamlessly integrated into our lives and disappears into the fabric of our daily background humdrum. The banality of what was innovation is a testament to its success. We’re on the cusp of the age when genomics becomes banal, and cutting-edge science becomes everyday utility.
Granted, the short-term impact of mass personal genomics is still going to be exceedingly technical. Scientific genealogy nuts will purchase the latest software, and argue over esoteric aspects of “coverage” (the redundancy of the sequence data, which correlates with accuracy) and the necessity of supplementing the genome with the epigenome. Physicians and other health professionals will add genomic information to the arsenal of their diagnostic toolkit, and an alphabet soup of new genome-related terms will wash over you as you visit a doctor’s office. Your genome is not you, but it certainly informs who you are. Your individual genome will become ever more important to your health care.
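For readers curious what “coverage” actually means, here is a small sketch using the standard Lander–Waterman idealization: mean coverage is total sequenced bases divided by genome size, and if reads land uniformly at random, the fraction of the genome touched by at least one read is 1 − e^(−c). The run parameters below are hypothetical, chosen only to show the arithmetic.

```python
import math


def mean_coverage(num_reads, read_length, genome_size):
    """Expected (mean) coverage: c = N * L / G, i.e. total sequenced
    bases divided by the genome size."""
    return num_reads * read_length / genome_size


def fraction_covered(c):
    """Fraction of the genome covered by at least one read, under the
    idealized assumption that reads land uniformly at random
    (Poisson model): 1 - e^{-c}."""
    return 1 - math.exp(-c)


# Hypothetical run: one billion 100-bp reads over a 3.2-Gb human genome.
c = mean_coverage(1_000_000_000, 100, 3_200_000_000)  # 31.25x coverage
```

Higher coverage means each base is read more times, which is why coverage correlates with accuracy: errors in individual reads can be outvoted.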
Gholson Lyon is on a crusade. It started last November, when he found out that a woman in a research study that he was conducting was pregnant. Lyon’s study had revealed that the woman carried a gene that causes a fatal disease. Yet, because of the rules governing the study, he couldn’t tell the mother-to-be that she might be carrying a sick child. The mother did give birth to a boy with the disease; he died in the same week that Lyon published his paper on the study, as I reported recently in Nature. Lyon was so disturbed by the situation that he is now trying to find a way for researchers to work within the rules so that they don’t face these same ethical dilemmas. And he is speaking and writing about the issue everywhere he can.
The issue of what to tell patients about their DNA is difficult enough for doctors who are treating patients rather than studying them. But it has become urgent for researchers as well, because genetic sequencing technologies are now cheap and fast enough that scientists are planning to sequence five thousand patients’ genomes this year, and as many as 30,000 next year. The US National Human Genome Research Institute will soon begin a program that will spend tens of millions of dollars to sequence the genomes of patients, like Lyon’s study subjects, who have rare genetic diseases. And researchers are also sequencing thousands of otherwise healthy people across the lifespan, from newborns to old folks.
Inevitably, researchers will find stuff in these thousands of genomes. Most of it will be difficult to understand. Some of it will clearly be linked to disease. Some of it will be newly linked to disease through these studies. The whole point of these studies is to link genes and disease. So it would seem like a good idea to tell the gracious volunteers who have donated their time and blood for these studies that they have certain genetic disease risks, right?
Today it is fashionable to contend that ethnic identity is a social construction. That fashion obviously has some genuine basis in reality. Univision host Jorge Ramos, a blue-eyed Mexican American, is considered a “person of color.” If his name were “George Romans” he would be coded as a white American simply on account of his physical appearance. This is due to the social construction of a Hispanic American identity, which has roots in decisions about ethnic classification made by the United States government in the 1960s. But this model of social construction allowing for plasticity is not universal. As outlined in The Cleanest Race, the North Korean national identity is strongly essentialist, to the point where even genetically close populations such as the Japanese could never be part of the nation. Similarly, in Japan itself the native-born ethnic Koreans are still viewed as fundamentally guests in the Japanese nation. Both cases illustrate how social construction can impede rather than enable fluidity. Yet social construction as a total explanatory model has limits. Canada has the term “visible minority” to denote those populations which are distinct in origin from Anglophone and Francophone whites by virtue of their appearance. This is in contrast to groups like Ukrainian Canadians, which are minorities due to their chosen cultural distinctiveness.
When it comes to ethnic difference and conflict we can ascribe the divisions to both social and biological distinctions to varying degrees. In 1994 there was a genocide in Rwanda. That genocide had an ethnic dimension, with conflict between the Tutsis and the Hutus being one cause. The Hutu regime which implemented the genocide against the Tutsis co-opted theories of biological difference and foreign origin pioneered by European scholars in the 19th century. Whereas these distinctions once justified Tutsi domination of the Hutu, now they served to mark off the Tutsi as an alien infestation. After the takeover of Rwanda by a Tutsi-dominated rebel movement in the wake of the genocide there was an attempt to elide these deadly distinctions. The rationale is clear: remove the ostensible basis for genocide, and you remove the risk of genocide. The argument that the Tutsi-Hutu distinction is a purely socially constructed European invention has now crept into the mainstream discourse, such as in the film Hotel Rwanda.