Howard Brody, MD, PhD, is the John P. McGovern Centennial Chair in Family Medicine and Director of the Institute for the Medical Humanities at the University of Texas Medical Branch, Galveston.
For years, doctors thought that placebos like sugar pills were totally inert, just something to be given out to mollify a demanding patient without any expected health benefits. Gradually, both physicians and medical researchers came to realize that such treatments can sometimes cause substantial improvement of symptoms, even when there’s no chemical or other biomedical explanation for what occurs—a phenomenon called the placebo effect. In a recent commentary in the Journal of Medical Ethics, Cory Harris and Amir Raz of McGill summarize the data from recent surveys of physician use of placebos in clinical practice in several nations.
They find that prescribing drugs like antibiotics or supplements like vitamins as placebos is now a widespread practice. This is happening without any public guidelines or regulations for placebos’ use, which raises an important question: How, exactly, should physicians be using the placebo effect to help patients?
This discussion is necessary because the understanding of the placebo effect is changing, and fast. In the past decade, scientists have used brain-scanning to see just which parts of the brain, and in what order, become active when a patient takes a placebo pill for various conditions. Other investigators have looked more closely at the treatment environment and sorted out what parts of that environment rev up a placebo response. For example, seeing a nurse inject a painkiller into your IV line gives you roughly twice as much pain relief as having the same dose of medicine administered by a hidden pump. Getting acupuncture treatment from a warm and friendly practitioner works better than the same treatment from a cold, distant one. There’s even some preliminary evidence to suggest that patients experience positive placebo effects even when told frankly that the pills they are taking are placebos, with no active chemical ingredients.
This research—and perhaps personal experience—has changed the way doctors view the importance of their patients’ mental states. Surveys from 20–30 years ago found a general belief among physicians that placebos were completely inert and powerless, and that if any good effect occurred, it was only in the patient’s imagination. The newer surveys, one of which I participated in, show a small revolution in physician thinking about mind-body relations. Physicians today generally agree that placebos can actually have a positive effect on the patient’s body, and that mind-body medicine “works.” That’s important, and has not been sufficiently noted.
Emily Willingham (Twitter, Google+, blog) is a science writer and compulsive biologist whose work has appeared at Slate, Grist, Scientific American Guest Blog, and Double X Science, among others. She is science editor at the Thinking Person’s Guide to Autism and author of The Complete Idiot’s Guide to College Biology.
In May, the New York Times Magazine published a piece by Jennifer Kahn entitled, “Can you call a 9-year-old a psychopath?” The online version generated a great deal of discussion, including 631 comments and a column from Amanda Marcotte at Slate comparing psychopathy and autism. Marcotte’s point seemed to be that if we accept autism as another variant of human neurology rather than as a moral failing, should we not also apply that perspective to the neurobiological condition we call “psychopathy”? Some autistic people took umbrage at the association with psychopathy, a touchy comparison in the autism community in particular. Who would want to be compared to a psychopath, especially if you’ve been the target of one?
In her Times piece, Kahn noted that although no tests exist to diagnose psychopathy in children, many in the mental health professions “believe that psychopathy, like autism, is a distinct neurological condition (that) can be identified in children as young as 5.” Marcotte likely saw this juxtaposition with autism and based her Slate commentary on the comparison. But a better way to make this point (and to avoid a minefield), I’d argue, is to stop mentioning autism at all and to say that any person’s neurological make-up isn’t a matter of morality but of biology. If we argue for acceptance of you and your brain, regardless of how it works, we should argue for acceptance of people who are psychopaths. They are no more to blame for how they developed than people with other disabilities.
If being compared with a psychopath elicits a whiplash-inducing mental recoil, then you probably have a good understanding of why the autism community responded to Marcotte’s piece (and accompanying tweets) so defensively, even though her point was a good one. At its core, the argument is a logical, even humanistic one. When it comes to psychopathy, our cultural tendencies are to graft moral judgment onto people who exhibit symptoms of psychopathy, a condition once designated as “moral insanity.” We tend collectively to view the psychopath as a cold-hearted, amoral entity walking around in a human’s body, a literal embodiment of evil.
But those grown people whom we think of as being psychopaths were once children. What were our most infamous psychopaths like when they were very young? Was there ever a time when human intervention could have deflected the trajectory they took, turned the path away from the horror, devastation, and tragedy they caused, one that not all psychopaths ultimately follow? Can we look to childhood as a place to identify the traits of psychopathy and, once known, apply early intervention?
Dan Hurley is writing a book about new research into how people can increase their intelligence. His latest article for DISCOVER, published in April, was about how the brain forms memories.
Can you consciously increase your intelligence? That question was the title of an article I wrote in April for the New York Times Magazine, examining studies showing that people who train their working memory with specially designed games show increases in their fluid intelligence, the ability to solve novel problems and identify patterns. In particular, the article focused on a game called the N-back task, in which a participant is challenged to keep track of spoken words or locations on a grid as they continuously pile up.
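The matching rule behind the N-back task is simple to state: at each step, the player must decide whether the current item matches the one presented N steps earlier. Here is a minimal sketch in Python of that scoring rule (the function name and example stream are illustrative, not taken from any published training software):

```python
def n_back_matches(stimuli, n):
    """For each item in the stream, report whether it matches the item
    presented n steps earlier (the core judgment in the N-back task)."""
    results = []
    for i, item in enumerate(stimuli):
        if i < n:
            results.append(False)  # too early in the stream to have an n-back match
        else:
            results.append(item == stimuli[i - n])
    return results

# Example: a 2-back stream of grid positions or letters
stream = ["A", "B", "A", "C", "A", "A"]
print(n_back_matches(stream, 2))  # → [False, False, True, False, True, False]
```

In the dual N-back variant studied in the training literature, the player tracks two such streams at once (for example, spoken letters and grid locations), which is what makes the task taxing for working memory as N increases.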
While some skeptics doubt that anything as profound as intelligence can be increased in as little as a month by playing a silly game, far stranger methods are also being tested. And the results keep getting published in respectable journals, showing significant effects.
Perhaps the most seemingly absurd approach is the use of “first-person shooter” video games, like Call of Duty. Studies by Daphne Bavelier at the University of Rochester have found that practicing the games improved performance on an array of untrained sensory, perceptual, and attentional tasks. Notably, the transfer is broad enough to improve trainees’ ability to distinguish an auditory signal from white noise, even though the games included no auditory training and auditory and visual processing occur in distinct brain areas.
“This is not the first kind of activity you’d think is good for the mind,” Bavelier told me. “But there is a whole field of research showing that executive control and the ability to decide whether to attend to something or not is a main determinant of intelligence. In that sense the games are making you smarter. Whether they will make you do better on an exam, I cannot say.”
Derek Lowe is a medicinal chemist who has worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer’s, diabetes, osteoporosis, and other diseases. He has been writing about drug discovery at In the Pipeline, where this post originally appeared, for more than ten years.
Slate recently published one of those assume-the-conclusions articles on science and technology education in the U.S. It’s right there in the title: “America Needs More Scientists and Engineers.”
Now, I can generally agree that America (and the world) needs more science and engineering. I’d personally like to have researchers who could realize room-temperature superconductors, a commercially feasible way to turn carbon dioxide from the atmosphere into industrial products, and both economically viable fusion power and high-efficiency solar power beamed down from orbit—for starters. We most definitely need better technology and more scientific understanding to develop these things, since none of them (as far as we know) are at all impossible, and we sure don’t have any of them yet.
But to automatically assume that we need lots more scientists and engineers to do that is a tempting, but illogical, conclusion. And it’s one that my currently unemployed readers who are scientists and engineers probably don’t enjoy hearing about very much. I think that the initial fallacies are (1) lumping together all science education into a common substance, and (2) assuming that if you just put more of that into the hopper, more good stuff will come out the other end.
If I had to pick one line from the article that I disagree with the most, it would be this one:
America needs Thomas Edisons and Craig Venters, but it really needs a lot more good scientists, more competent scientists, even more mediocre scientists.
No. I hate to be the one to say it, but mediocre scientists are, in fact, in long supply. Access to them is not a rate-limiting step. (That’s the chemist’s way of saying it’s not the main bottleneck.) Not all the unemployed science and technology folks out there are mediocre—not by a long shot (I’ve seen the CVs that come in)—but a lot of the mediocre ones are finding themselves unemployed, and they’re searching an awful long time for new positions when that happens. Who, exactly, would be clamoring to hire a fresh horde of I-guess-they’ll-do science graduates? Is that what we really need to put things over the top, technologically—more foot soldiers?
Asteroid mining brings up some tricky legal questions.
By Frans von der Dunk, as told to Veronique Greenwood.
Frans von der Dunk is the Harvey and Susan Perlman Alumni and Othmer Professor of Space Law at the University of Nebraska College of Law. In addition, he is the director of a space law and policy consultancy, Black Holes, based in the Netherlands.
Within weeks of the launch of Sputnik I in 1957, after the U.S. made no protest against the satellite flying over its territory, space effectively became recognized as a global commons, free for all. The UN Committee on the Peaceful Uses of Outer Space, charged with codifying existing law and developing it further to apply to space, was brought into being, with all major nations involved. The fundamental rule of space law they adopted is that no single nation can exercise territorial sovereignty over any part of outer space. American astronauts planting the flag on the moon did not, and never could, thereby turn the moon into U.S. territory.
Now that private companies are making forays into space, though—with SpaceX’s Dragon capsule mission last week only the first of many, and plans to mine asteroids for private profit seeming more and more plausible—we’re facing a sudden need to update the applicable laws. How will we deal with property ownership in space? Who is responsible for safety when private companies begin to ferry public employees, like NASA astronauts, to the International Space Station?
Mark Anderson has an M.S. in astrophysics, is a contributor to Discover, and has written about science and history for many other publications. His new book The Day the World Discovered the Sun: An Extraordinary Story of Scientific Adventure and the Race to Track the Transit of Venus has just been published by Da Capo.
Also see Paul Raeburn’s explanation of what investigating Venus can teach us about our own planet.
The 2004 Venus transit at sunrise
On Tuesday afternoon—for those in North, Central and parts of South America—the planet Venus will pass directly in front of the sun for seven hours. This rare spectacle, called the Venus transit, occurs twice within a decade, then not again for more than a century. But as fleeting as they are, transits of the past provided invaluable information about our place in the solar system—and, astronomers hope, this transit could help us glean more information on planets elsewhere in the galaxy.
In the 1760s, some of the age’s top explorers and scientists collaborated on dozens of expeditions across the planet to observe the Venus transit. These voyages launched the legendary careers of Captain Cook and the surveyors Mason and Dixon. The expeditions also represented the world’s first big science project—forefather to today’s Large Hadron Collider and Human Genome Project, in which an international community of hundreds or thousands collaborates on a single fundamental scientific problem at the frontier of human knowledge.
In the balance hung two of the greatest scientific and technological puzzles of the 18th century: discovering the Sun’s distance from the Earth and finding one’s longitude at sea.