A United States Navy Destroyer is sent to the Arctic and ordered to radio silence for four months. During that time, a mysterious virus – 100 percent fatal and 100 percent contagious – spreads from isolated pockets in Africa and Asia into a pandemic. When radio silence ends and the captain and his 217 crew finally learn what’s going on, 80 percent of the human population is either dead or dying, and all government control has collapsed.
Unrealistic? Perhaps. But this is the setting of the TNT hit series The Last Ship. While that fictional virus may indeed be too lethal and spread too rapidly to be realistic, one thing this nail-biting, apocalyptic story should scare us into doing is responding faster to viral outbreaks than we have in the past. The real-life models for this are two coronaviruses: Middle East respiratory syndrome coronavirus (MERS-CoV) and severe acute respiratory syndrome coronavirus (SARS-CoV).
First identified in humans in 2012, MERS-CoV has since caused 572 laboratory-confirmed infections, 173 of which have been fatal, and yet clinicians have no drug that targets the virus specifically. The same is true of SARS. Despite some initial, anecdotal reports suggesting that the drug ribavirin might work against this virus, and some modest success with interferon (which has a general inhibitory effect against many viruses), there is no specific anti-SARS agent.
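Taken at face value, those figures imply a strikingly high case fatality rate. A quick back-of-the-envelope calculation, using only the numbers quoted above:

```python
# Case fatality rate implied by the MERS-CoV figures above:
# 173 deaths among 572 laboratory-confirmed infections.
deaths = 173
confirmed = 572
case_fatality_rate = deaths / confirmed
print(f"{case_fatality_rate:.1%}")  # prints "30.2%"
```

Roughly three in ten confirmed infections proved fatal, which is why a virus with no targeted treatment is so alarming even when its spread is limited.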
So whether we’re talking about a virus in real life that’s killed hundreds, or the unnamed, fictional virus from The Last Ship that’s killed billions, global and national health organizations can respond via several strategies.
We tend to think of medicine as being all about pills and potions recommended to us by another person—a doctor. But science is starting to reveal that for many conditions another ingredient could be critical to the success of these drugs, or perhaps even replace them. That ingredient is nothing more than your own mind.
Here are six ways to raid your built-in medicine cabinet.
“I talk to my pills,” says Dan Moerman, an anthropologist at the University of Michigan-Dearborn. “I say, ‘Hey guys, I know you’re going to do a terrific job.’”
That might sound eccentric, but based on what we’ve learned about the placebo effect, there is good reason to think that talking to your pills really can make them do a terrific job. The way we think and feel about medical treatments can dramatically influence how our bodies respond.
Simply believing that a treatment will work may trigger the desired effect even if the treatment is inert—a sugar pill, say, or a saline injection. For a wide range of conditions, from depression to Parkinson’s, osteoarthritis and multiple sclerosis, it is clear that the placebo response is far from imaginary. Trials have shown measurable changes such as the release of natural painkillers, altered neuronal firing patterns, lowered blood pressure or heart rate and boosted immune response, all depending on the beliefs of the patient.
It has always been assumed that the placebo effect only works if people are conned into believing that they are getting an actual active drug. But now it seems this may not be true. Belief in the placebo effect itself—rather than a particular drug—might be enough to encourage our bodies to heal.
When Linda May went in to see her obstetrician during her first pregnancy, he told her she probably shouldn’t jump, run, or even walk. But May, an exercise physiologist who studies pregnant women and their babies, knew a thing or two about the positive ways that being active can help a mom-to-be’s health. Women who exercise with baby on board have been known to have, among other things, lower risks of gestational diabetes and pregnancy-induced high blood pressure than those who don’t.
Since then, May and other researchers have discovered even more ways that prenatal exercise benefits not only an expectant mother, but her growing baby, too—sometimes for years into the future—as attendees learned at last week’s Experimental Biology 2014 meeting in San Diego.
Decades ago, many more doctors gave advice like that of May’s obstetrician. Pregnancy was thought to be almost like an illness, a time when women needed to rest to protect themselves and their babies. In 1985, the American Congress of Obstetricians and Gynecologists came out with its first set of guidelines for exercise during pregnancy—guidelines, now considered conservative, that included suggestions like keeping strenuous activities to 15 minutes or less.
Since then, research has turned that idea on its head. Exercise is now thought to be—for most women with healthy pregnancies—a boon for the mother’s health, and for the baby she carries as well. Researchers are now starting to look even more closely at how exercise can influence a baby’s health in the womb and how these effects might translate into protection from future health problems.
It’s long been known that blind people are able to compensate for their loss of sight by using other senses, relying on sound and touch to help them “see” the world. Neuroimaging studies have backed this up, showing that in blind people brain regions devoted to sight become rewired to process touch and sound as visual information.
Now, in the age of Google Glass, smartphones and self-driving cars, new technology offers ever more advanced ways of substituting one sensory experience for another. These exciting new devices can give the blind access to visual information in ways never before thought possible.
One approach is to use sound as a stand-in for vision. In a study published in Current Biology, neuroscientists at the Hebrew University of Jerusalem used a “sensory substitution device” dubbed “the vOICe” (Oh, I See!) to enable congenitally blind patients to see using sound. The device translates visual images into brief bursts of music, which the participants then learn to decode.
Over a series of training sessions they learn, for example, that a short, loud synthesizer sound signifies a vertical line, while a longer burst equates to a horizontal one. Ascending and descending tones reflect the corresponding directions, and pitch and volume relay details about elevation and brightness. Layering these sound qualities and playing several in sequence (each burst lasts about one second) thus gradually builds an image as simple as a basic shape or as complex as a landscape.
The concept has tried and true analogs in the animal world, says Dr. Amir Amedi, the lead researcher on the study. “The idea is to replace information from a missing sense by using input from a different sense. It’s just like bats and dolphins use sounds and echolocation to ‘see’ using their ears.”
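The general encoding scheme described above (scanning an image column by column over about a second, with vertical position mapped to pitch and brightness mapped to loudness) can be sketched in a few lines of Python. This is an illustrative toy under assumed parameters, not the actual vOICe software: the frequency range, sample rate and grid size here are all made up for the example.

```python
import numpy as np

def image_to_sound(image, duration=1.0, sample_rate=8000,
                   f_min=200.0, f_max=2000.0):
    """Encode a 2D brightness grid (rows x cols, values 0..1) as audio.

    Sketch of the scheme the article describes: columns are scanned
    left to right over ~1 second, row position maps to pitch
    (top = high), and pixel brightness maps to loudness.
    """
    rows, cols = image.shape
    samples_per_col = int(duration * sample_rate / cols)
    # One fixed frequency per row, spaced logarithmically, top row highest.
    freqs = f_min * (f_max / f_min) ** np.linspace(1, 0, rows)
    t = np.arange(samples_per_col) / sample_rate
    pieces = []
    for c in range(cols):
        chord = np.zeros(samples_per_col)
        for r in range(rows):
            chord += image[r, c] * np.sin(2 * np.pi * freqs[r] * t)
        pieces.append(chord)
    audio = np.concatenate(pieces)
    # Normalize to [-1, 1] for playback.
    peak = np.max(np.abs(audio))
    return audio / peak if peak > 0 else audio

# A vertical line (one bright column) becomes a single brief, loud
# chord containing all pitches; a horizontal line (one bright row)
# would instead be a steady single tone lasting the full second.
vertical = np.zeros((8, 8))
vertical[:, 3] = 1.0
audio = image_to_sound(vertical)
```

Feeding the resulting `audio` array to any playback library yields a one-second burst, matching the article’s description of a vertical line as a short, loud synthesizer sound.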
Some people call left-handers southpaws. Others call them mollydookers or corky dobbers. Scientists still often call lefties sinister, which in Latin originally just meant “left” but later came to be associated with evil.
Wondering about the medical implications of being born a corky dobber? It may surprise you that left-handed women have been found to be at least twice as likely as right-handers to develop premenopausal breast cancer. A few researchers believe this effect may be linked to exposure to certain chemicals in utero, which could affect your genes and set the stage for both left-handedness and cancer susceptibility, opening up another possibility of nurture changing nature.
When it comes to our hands, feet, and even our eyes, most human beings are right-side dominant. Now, you might think that footedness and handedness are always aligned, but it turns out that’s not always the case for right-handed people, and the two line up even less often for left-handed people. Lots of people aren’t congruent.
In board sports, being left-foot dominant is termed goofy – a goofy-footed surfer stands with her left foot on the back of the board instead of her right. There are an amazing number of theories as to why some of us are goofy-footed. But the term itself is often said to have originated with an eight-minute-long Walt Disney animated short, called Hawaiian Holiday, that was first released to theaters in 1937. The color cartoon stars the usual suspects: Mickey and Minnie, Pluto and Donald, and, of course, Goofy. During the gang’s vacation in Hawaii, Goofy attempts to surf, and when he finally catches a wave and heads back to shore atop its short-lived crest, he’s standing with his right foot forward and his left foot back.
If you’re wondering if you might be goofy and would like to find out before hitting the beach, then imagine yourself at the bottom of a staircase that you’re about to ascend. Which foot moves first? If you’re taking that first imaginary step with your left foot, then it’s likely that you’re a member of the goofy-footed club. And if you find out that you aren’t goofy, then you’re in the majority.
I tried not to panic. I was floating effortlessly in a pitch-black tank filled with salty, skin-temperature water, wearing earplugs and nothing else. Within minutes I could no longer feel the sponge in my ears or smell the musty scent of water. There was no light, no smell, no touch and – save for the gasping of my breath and drumming of my heart – no sound.
I was trying out North America’s avant-garde drug: sensory deprivation. Across the continent, “float houses” are increasing in popularity, offering eager psychonauts a chance to explore this unique state of mind. Those running these businesses are quick to list the health benefits of frequent “floats”, which range from the believable – relaxation, heightened senses, pain management – to the seemingly nonsensical (“deautomatization”, whatever that means). Are these proclaimed benefits backed up by science, or are they simply new-age hogwash?
Why would anyone willingly subject him- or herself to sensory deprivation? You’ve probably heard the horror stories: the Chinese using restricted stimulation to “brainwash” prisoners of war during the Korean War; prisons employing solitary confinement as psychological torture. Initial research studies into the psychophysical effects of sensory deprivation, carried out in the 1950s at McGill University, further damaged its reputation, reporting slower cognitive processing, hallucinations, mood swings and anxiety attacks among the participants. Some researchers even considered sensory deprivation an experimental model of psychosis.
However, despite popular belief, sensory deprivation is not inherently unpleasant. According to Dr. Peter Suedfeld, a pioneering psychologist in the field, these stories are rubbish. “(The prisoners) were bombarded with overstimulation – loud group harangues, beatings and other physical tortures,” he explained. Similarly, the original studies at McGill University used constant noise and white light – that is, sensory overload – rather than deprivation.
In fact, an analysis in 1997 of well over 1,000 descriptions of sensory deprivation indicated that more than 90% of subjects found it deeply relaxing. To escape the provocative name of “sensory deprivation” and its negative connotations, in the late 1970s Suedfeld’s protégé, Dr. Roderick Borrie, redubbed the experience with a friendlier name: REST, or Restricted Environmental Stimulation Therapy.
Today, the two most frequently used REST methods are chamber REST, which involves the participant lying on a bed in a dark, soundproof room, and flotation REST, which involves floating in buoyant liquid in a light- and sound-proof tank. The latter, first developed by John Lilly in the 1970s and now widely commercialized, is what I decided to experience myself.
A version of this article originally appeared at The Conversation.
There could be a way of predicting which children will go on to have low intelligence – and perhaps preventing it – according to the findings of a study researchers at Cardiff University presented on Monday. They discovered that children with two copies of a common gene variant (Thr92Ala), together with low levels of thyroid hormone, are four times more likely to have a low IQ. This combination occurs in about 4% of the UK population.
Importantly, if you had just one of these factors, but not both, there did not appear to be an increased risk of low intelligence. These are early results, but suggest that it might be possible to treat children early with thyroid hormone supplementation to enhance their intelligence. This raises many ethical issues.
A common objection is that being smarter does not make your life better. In this study, researchers were concerned with those with an IQ between 70 and 85. An IQ below 70 is classified as intellectual disability, but an IQ of 70 to 75 is similar to mild intellectual disability.
Even for individuals with an IQ between 75 and 90 there are still significant disadvantages. Job opportunities tend to be the least desirable and least financially rewarding, requiring significant oversight. More than half the people with this IQ level fail to reach the minimum recruitment standards for the US military. Individuals with this lower level of intelligence are at significant risk of living in poverty (16%), being a chronic welfare dependent (17%) and dropping out of school (35%) compared to individuals with average intelligence. Studies show that they also face an increased risk of incarceration and being murdered.
Linda Gottfredson, who’s undertaken much of this research, concludes that at the very least, “an IQ of 75 is perhaps the most important threshold in modern life”. So it is clear that those of low-normal intelligence, although not classified as disabled, are significantly disadvantaged.
If we could enhance their intelligence, say with thyroid hormone supplementation, we should.
Blood samples are an invaluable tool, but often they’re just the tip of the diagnostic iceberg, something that determines whether additional, more sensitive tests and scans might be necessary. But new technology may make it possible to use individual cells in a patient’s blood sample to get far more specific and actionable information. A technique being developed by San Diego–based Epic Sciences can determine whether a cancer patient is an appropriate candidate for a drug, and even whether the drug is losing its efficacy.
In research presented last month at the Personalized Medicine World Conference in Palo Alto, CA, Epic described how their technology can be used to reliably pick out rare cells from a blood sample. In the case of cancer, these rare, circulating tumor cells could one day tell an oncologist not only whether a patient’s cancer has returned, but also whether it’s growing resistant to the current treatment regimen—something only expensive scans and invasive biopsies can do with any accuracy today.
For years, medical researchers have been talking about the day when babies will have their whole genomes sequenced at birth, the day when genomic analysis will allow every patient to be treated not just based on her condition but on which treatment is the best match for her genetic quirks. There will be a day, they say, when we will all carry our genomes around on a thumb drive. But the hurdles, fiscal and otherwise, have proven difficult to overcome.
The DNA of one set of human chromosomes contains 3 billion base pairs—most cells are diploid and have two sets of chromosomes, one from each parent. Sequencing these six billion base pairs, one pair at a time, is unquestionably faster and cheaper than it once was: Since its less-than-humble beginnings almost 15 years ago, the cost of human genome sequencing has dropped from $100 million to around $1,000. Instead of years, sequencing can now be completed in a day or two.
Yet while that’s incredible progress, it’s not quite enough. Not only is it still too pricey for everyday use, but once that genome has been sequenced it also has to be mapped and analyzed—the process in which the sequenced base pairs are assigned to the correct chromosome and assessed for mutations, something that can take a couple of days or more. What to do with the resulting data is another problem: The genome and its resulting analysis typically occupy about 400GB. (For reference, the 2013 laptop I’m using to write this post has a storage capacity of 250GB—my genome wouldn’t come close to fitting on it.) Securely storing data from 500 or 5,000 patients—at about $1 per gigabyte—typically costs hundreds of thousands to millions of dollars per year.
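The arithmetic behind that storage figure is simple to check. The 400GB-per-genome and $1-per-gigabyte numbers are the rough estimates quoted above, not exact prices:

```python
# Back-of-the-envelope yearly storage cost for sequenced genomes,
# using the article's rough figures: ~400 GB per genome and
# ~$1 per gigabyte per year of secure storage.
GB_PER_GENOME = 400
COST_PER_GB_YEAR = 1.00

def yearly_storage_cost(patients):
    """Estimated annual storage cost in dollars for a patient cohort."""
    return patients * GB_PER_GENOME * COST_PER_GB_YEAR

print(yearly_storage_cost(500))   # prints 200000.0
print(yearly_storage_cost(5000))  # prints 2000000.0
```

So a modest 500-patient practice is already looking at roughly $200,000 a year, and a 5,000-patient cohort at about $2 million, before any analysis is done.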
Joan Bennett didn’t believe in sick building syndrome. As a specialist in mold toxins, she had even testified in trials in support of insurance companies denying claims to homeowners who claimed that they had been sickened by toxins from their moldy houses.
Then Hurricane Katrina struck, Bennett’s home was flooded, and she evacuated. “A month later, as a form of psychological sublimation, I decided to travel back and sample my home for mold,” she said. Her house smelled horrendous, worse than any mold she’d ever smelled. She donned a mask and gloves and protective gear, but even so, she felt awful – dizziness, headache, malaise. She walked outside and felt better. Then it struck her: “I think there’s something in this terrible mold I’m smelling.”
But she still believed in her old arguments against the theory. She knew how much mold toxin we ordinarily get exposed to from mold in food, and she knew that it was far greater than any dose we could breathe in from spores in the air.
But the smell of mold was another matter. Most things we can smell are volatile organic compounds (VOCs), and some VOCs are known to make people sick. “I knew that a minor theory was that sick building syndrome might be caused by the VOCs that make fungi smell moldy,” Bennett says. And then she thought, “Ta da! Maybe there is such a thing as sick building syndrome, and maybe it has nothing to do with the fungus toxins I’ve been studying all my life!”
That moment transformed her research career. Along with her house, she’d lost her entire frozen genetic stock of fungi in the storm, because the power had gone out and everything had defrosted. She had to mostly start over anyway, and now she wanted to prove her new theory.