Scott Firestone works as a researcher in evidence-based surgery, and recently started blogging about public health and environmental issues at His Science Is Too Tight, where this post originally appeared. You can find him on Twitter at @scottfirestone.
Kevin Drum from Mother Jones has a fascinating new article detailing the hypothesis that exposure to lead, particularly tetraethyl lead (TEL), explains the rise and fall of violent crime rates from the 1960s through the 1990s—at which point the compound was phased out of gasoline worldwide. It’s a good bit of public health journalism compared to much of what you see, but I’d like to provide a little bit of epidemiology background to the article. There are so many studies listed that it’s a really good intro to the types of study designs you’ll see in public health. It also illustrates the concept of confirmation bias, and why regulatory agencies seem to drag their feet even in the face of such compelling stories as this one.
Drum correctly notes that the correlation is insufficient to draw any conclusions regarding causality. The research (pdf) published by economist Rick Nevin was simply looking at associations, and Nevin saw that the curves were heavily correlated, as you can quite clearly see. When you look at data involving large populations, such as violent crime rates, and compare them with an indirect measure of exposure to some environmental risk factor, such as levels of TEL in gasoline during that same time, the best you can say is that your alternative hypothesis of there being an association (the null hypothesis always being no association) deserves more investigation. This type of design is called a cross-sectional study, and it’s been well documented that associations observed at the population level do not always hold for the individuals within it, a pitfall known as the ecological fallacy.
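To see why population-level associations can mislead, here is a minimal simulation of that pitfall. All of the numbers are invented for illustration: two hypothetical populations are constructed so that the one with the higher average exposure also has the higher average outcome (so the population means line up positively), even though within each population the individual-level relationship runs the other way.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical populations (all numbers invented): the population
# with the higher mean "exposure" also has the higher mean "outcome,"
# so comparing the two population means suggests a positive association ...
group_means = [(2.0, 10.0), (6.0, 20.0)]

within_corrs = []
for mean_x, mean_y in group_means:
    x = mean_x + rng.normal(0.0, 1.0, 500)
    # ... even though, within each population, the individual-level
    # relationship is deliberately built to be negative.
    y = mean_y - 2.0 * (x - mean_x) + rng.normal(0.0, 1.0, 500)
    within_corrs.append(np.corrcoef(x, y)[0, 1])

print([round(r, 2) for r in within_corrs])  # both strongly negative
```

The population means trend upward together while every individual-level correlation is strongly negative, which is exactly why a cross-sectional association between, say, national TEL levels and national crime rates can only motivate further study, not establish a causal link.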
Keith Kloor is a freelance journalist whose stories have appeared in a range of publications, from Science to Smithsonian. Since 2004, he’s been an adjunct professor of journalism at New York University. You can find him on Twitter @KeithKloor.
Last month, a group of Massachusetts residents filed an official complaint claiming that the wind turbine in their town is making them sick. According to the article in the Patriot Ledger, the residents “said they’ve lost sleep and suffered headaches, dizziness and nausea as a result of the turbine’s noise and shadow flicker [flashing caused by shadows from moving turbine blades].” A few weeks later, a story from Wisconsin highlighted similar complaints of health problems associated with wind turbines there.
Anecdotal claims like these are on the rise and not just in the United States. A recent story in the UK’s Daily Mail catalogs a litany of health ailments supposedly caused by wind turbines—everything from memory loss and dizziness to tinnitus and depression.
I expect so. For one thing, the alleged health problem has been adopted by demagogues and parroted on popular climate-skeptic websites. But the bigger problem is that “wind turbine syndrome” is what is known as a “communicated” disease, says Simon Chapman, a professor of public health at the University of Sydney. The disease, which has reached epidemic proportions in Australia, “spreads via the nocebo effect by being talked about, and is thereby a strong candidate for being defined as a psychogenic condition,” Chapman wrote several months ago in The Conversation.
What Chapman is describing is a phenomenon akin to mass hysteria—an outbreak of apparent health problems that has a psychological rather than physical basis. Such episodes have occurred throughout human history; earlier this year, a cluster of teenagers at an upstate New York high school were suddenly afflicted with Tourette syndrome-like symptoms. Some observers speculated that the mystery outbreak was caused by environmental contaminants.
But a doctor treating many of the students instead diagnosed them with a psychological condition called “conversion disorder,” as described by psychologist Vaughan Bell on The Crux:
A new toxicology study states that rats eating genetically modified food and the weedkiller Roundup develop huge tumors and die. But many scientists beg to differ, and a close look at the study shows why.
Genetically modified organisms (GMOs) have always been a controversial topic. On the one hand are the many benefits: the higher crop yields from pesticide- and insect-resistant crops, and the nutritional modifications that can make such a difference in malnourished populations. On the other side is the question that concerns many people: We are modifying the genes of our food, and what does that mean for our health? These are important questions, but the new study claiming to answer them misses the mark. It has many horrifying pictures of rats with tumors, but without knowledge about the control rats, what do those tumors mean? Possibly, nothing at all.
The recent study, published in the journal Food and Chemical Toxicology, has fueled the worst fears of the GMO debate. The study, by Italian and French groups, evaluated groups of rats fed different concentrations of Roundup-tolerant maize (corn), or Roundup alone, over a two-year period, the longest type of toxicology study. (For an example of one performed in the U.S., see here.) The group looked at the mortality rates in the aging rats, as well as the causes of death, and took multiple samples to assess kidney, liver, and hormonal function.
The presented results look like a toxicologist’s nightmare. The authors reported high rates of tumor development in the rats fed Roundup and the Roundup-tolerant maize. There are figures of rats with visible tumors, and graphs showing death rates that appear to begin early in the rats’ lifespan. The media, of course, picked up on it, and one site in particular has spawned some reports that sound like mass hysteria. It was the first study to show that genetically modified foods could produce tumors at all, let alone the incredibly drastic ones shown in the paper.
Neuroskeptic is a neuroscientist who takes a skeptical look at his own field and beyond at the Neuroskeptic blog.
Life is dominated by the Earth’s cycles. Day and night, spring and autumn, change the environment in so many ways that almost all organisms regulate their activity to keep up with time and the seasons. Animals sleep, and many hibernate, moult, and breed only at certain times of the year. Plants time the growth of seeds, leaves, fruit and shoots to make the most of the weather.
But what about humans? We sleep, and women menstruate, but do other biological cycles affect our behavior? The Internet has offered researchers a unique resource for answering this question.
For example, according to research published recently in the Archives of Sexual Behavior by American researchers Patrick and Charlotte Markey, Americans are most likely to search for sex online during the early summer and the winter.
The authors looked at Google Trends data for a selection of naughty words and phrases, which revealed a pretty marked six-month cycle for searches originating from the U.S., with two yearly peaks in search volume. The words fell into three categories: pornography, sex services (e.g., massage parlors), and dating websites.
Google Trends searches for pornography-related words over time
This image shows the graph for pornography searches (the grey line), with an idealized six-month cycle shown for comparison (the black line). The data show a strong twice-yearly peak. The picture was similar for the two other categories of sexual words: prostitution and dating websites.
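The idea of fitting an idealized six-month cycle to a weekly search-volume series can be sketched in a few lines. This is not the Markeys’ actual analysis; the series below is synthetic, with every number invented for illustration, and the cycle is recovered by ordinary least squares against sine and cosine terms with a 26-week period.

```python
import numpy as np

# Hypothetical weekly search-volume series standing in for Google Trends
# data: two years of weekly values with two peaks per year, plus noise.
# All numbers are invented for illustration.
rng = np.random.default_rng(1)
weeks = np.arange(104, dtype=float)  # 104 weeks = two years
volume = 50.0 + 10.0 * np.cos(2 * np.pi * weeks / 26) + rng.normal(0.0, 2.0, 104)

# Fit an idealized six-month (26-week) cycle by least squares:
# volume ≈ a*cos(2πt/26) + b*sin(2πt/26) + c
X = np.column_stack([
    np.cos(2 * np.pi * weeks / 26),
    np.sin(2 * np.pi * weeks / 26),
    np.ones_like(weeks),
])
coef, *_ = np.linalg.lstsq(X, volume, rcond=None)

# Amplitude of the fitted cycle; a large value relative to the noise
# indicates a real twice-yearly rhythm in the series.
amplitude = np.hypot(coef[0], coef[1])
print(f"fitted 26-week amplitude: {amplitude:.1f}")
```

A fitted amplitude well above the noise level is what a graph like the one above conveys visually: the grey data line rises and falls in step with the idealized black six-month curve.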
Jesse Bering, PhD, is a regular contributor to Scientific American, Slate, and other publications. He is the author of the recently released book Why Is the Penis Shaped Like That? And Other Reflections on Being Human, as well as The Belief Instinct, which the American Library Association named one of the “25 Best Books of 2011.” You can find him here.
For the past seven years, I’ve been in an “interpenile relationship”—I, the lesser of the two you might say, am circumcised; my partner is not. This contrast between our members is not exactly at the top of our list of concerns. But it is nonetheless interesting how my prepuce came to disappear into a medical waste bin in a bustling New Jersey hospital on some springtime day in 1975, whereas his, by contrast, has remained a fellow traveler all the long way from that tiny Mexican village where he slipped from his young mother’s womb on a chilly December morning in 1981. That womb, incidentally, belonged to a Roman Catholic. The one that I bathed in, the place in which I had my “bones and sinews knitted together,” in the words of Job, was the property of a Jew. So despite neither of us being particularly patriotic nor, certainly, religious today, the organs dangling so differently between us are nevertheless the very incarnations of our parents’ vast cultural differences.
Whatever the reasons that previous generations may have had for choosing to remove their infant sons’ foreskins, they were almost always unconvincing. All else being equal—and let me reiterate that caveat because it’s likely to go unnoticed, with some readers eagerly pointing out to me those rare cases of congenital defects in which circumcision can legitimately improve the quality of life for some males, which is of course true—all else being equal, any dubious benefits derived from religious, social, hygienic, or aesthetic reasons are clearly outweighed by the costs of male circumcision. Because of some rabbi in Hackensack shaking his head over my intact genitalia, my parents went unblinkingly along with the amputation of a fully operational, perfectly healthy, and probably adaptive body part, all to sacrifice an ounce of their son’s tender flesh to a god that he would never believe in anyway.
Today, however, all is no longer equal, and the balance between the relative risks and benefits of male circumcision has clearly shifted in the other direction. That is, it has according to the American Academy of Pediatrics, which just earlier this week put out its revised position statement on infant male circumcision. Here’s the money quote:
Systematic evaluation of English-language peer-reviewed literature from 1995 through 2010 indicates that preventive health benefits of elective circumcision of male newborns outweigh the risks of the procedure. Benefits include significant reductions in the risk of urinary tract infection in the first year of life and, subsequently, in the risk of heterosexual acquisition of HIV and the transmission of other sexually transmitted infections.
Many of our parents, it seems, may have actually made the right decision for the wrong reasons. Although the task force behind the Academy’s reassessment stopped short of advising “routine” and “universal” removal of the foreskin for all newborn males, and stressed that it remains a personal decision to be made by informed parents, its language represents an increasingly unambiguous endorsement of male circumcision among the world’s leading health organizations (including the World Health Organization and UNAIDS). By contrast, many of the world’s leading parents remain skeptical of the findings reviewed by the Academy, questioning both the methodologies and the generalizability of studies conducted overwhelmingly with African populations, in which rates of infection are dramatically higher than those in the US. (For more information on this research, as well as a description of the physical factors responsible for the reduction of HIV acquisition in circumcised males, see my earlier discussion at Scientific American.) The more vocal “intactivists,” who’ve long been protesting what they regard as an antiquated, cruel, and unnecessary ritual act against little boys that is just as abhorrent as female clitoridectomy, have also responded bitterly to this newest AAP development, seeing fresh strands in an ongoing web of conspiracy between the major health organizations, third-party insurance companies implementing the policy views of these organizations, and greedy practitioners who mislead parents about the benefits of circumcision only to reap insurance payouts for “mutilating” children’s genitals.
Derek Lowe is a medicinal chemist who has worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer’s, diabetes, osteoporosis, and other diseases. He has been writing about drug discovery at In the Pipeline, where this post originally appeared, for more than ten years.
The British Medical Journal says that the “widely touted innovation crisis in pharmaceuticals is a myth.” The British Medical Journal is wrong.
There, that’s about as direct as I can make it. But allow me to go into more detail, because that’s not the only thing they’re wrong about. This is a new article entitled “Pharmaceutical research and development: what do we get for all that money?”, and it’s by Joel Lexchin (York University) and Donald Light of UMDNJ. And that last name should be enough to tell you where this is all coming from, because Prof. Light is the man who’s publicly attached his name to an estimate that developing a new drug costs about $43 million.
I’m generally careful, when I bring up that figure around people who actually develop drugs, not to do so when they’re in the middle of drinking coffee or working with anything fragile, because it always provokes startled expressions and sudden laughter. These posts go into some detail about how ludicrous that number is, but for now, I’ll just note that it’s hard to see how anyone who advances that estimate can be taken seriously. But here we are again.
Andres Barkil-Oteo is an assistant professor of psychiatry at Yale University School of Medicine, with research interests in systems thinking, global mental health, and experiential learning in medical education. Find him on Google+ here.
Last spring, the American Psychiatric Association (APA) sent out a press release [pdf] noting that the number of U.S. medical students choosing to go into psychiatry has been declining for the past six years, even as the nation faces a notable dearth of psychiatrists. The Lancet, a leading medical journal, wrote that the field had an “identity crisis” related to the fact that it doesn’t seem “scientific enough” to physicians who deal with more tangible problems that afflict the rest of the body. Psychiatry has recently attempted to cope with its identity problem mainly by assuming an evidence-based approach favored throughout medicine. Evidence-based, however, became largely synonymous with medication, with relative disregard for other evidence-based treatments, like some forms of psychotherapy. In the push to become more medically respected, psychiatrists may be forsaking some of the important parts of their unique role in maintaining people’s health.
Over the last 15 years, use of psychotropic medication has increased in all kinds of ways, including off-label use and prescription of multiple drugs in combination. While overall rates of psychotherapy use remained constant during the 1990s, the proportion of the U.S. population using a psychotropic drug increased from 3.4 percent in 1987 to 8.1 percent by 2001. Antidepressants are now the second-most prescribed class of medication in the U.S., preceded only by lipid regulators, a class of heart drugs that includes statins like Lipitor. Several factors have contributed to this increase: direct-to-consumer advertising; development of effective drugs with fewer side effects (e.g., SSRIs); expansion in health coverage for mental illness made possible through the Mental Health Parity Act; and an increase in prescriptions from non-psychiatric physicians.
Unfortunately, not all of these psychiatric drugs are going to good use. Antidepressant drugs are widely used to treat people with mild or even sub-clinical depression, even though the drugs tend to be less cost-effective for those patients. It may sound paradoxical, but to get more benefit from antidepressants, we need to use them less, reserving them for patients with moderate to severe clinical depression. Patients with milder forms should be encouraged to try time-limited, evidence-based psychotherapies; several APA-endorsed clinical guidelines center on psychotherapies (e.g., cognitive behavioral therapy or behavioral activation) as a first-line treatment for moderate depression, anxiety, and eating disorders, and as a secondary treatment to go with medication for schizophrenia and bipolar disorder.
Sophie Bushwick (Twitter, Tumblr) is a science journalist and podcaster, and is currently an intern at DISCOVERmagazine.com. She has written for Scientific American, io9, and DISCOVER, and has produced podcasts for 60-Second Science and Physics Central.
Human chromosomes (grey) capped by telomeres (white)
U.S. Department of Energy Human Genome Program
Renowned biologist Elizabeth Blackburn has said that when she was a young post-doc, “Telomeres just grabbed me and kept leading me on.” And lead her on they did—all the way to the Nobel Prize in Medicine in 2009. Telomeres are DNA sequences that continue to fascinate researchers and the public, partially because people with longer telomeres tend to live longer. So the recent finding that older men father offspring with unusually lengthy telomeres sounds like great news. Men of advanced age will give their children the gift of longer lives—right? But as is so often the case in biology, things aren’t that simple, and having an old father may not be an easy route to a long and healthy life.
Every time a piece of DNA gets copied, it can end up with errors in its sequence, or mutations. One of the most frequent changes is losing scraps of information from each end of the strand. Luckily, these strands are capped with telomeres, repeating sequences that do not code for any proteins and serve only to protect the rest of the DNA. Each time the DNA makes a copy, its telomeres get shorter, until these protective ends wear away to nothing. Without telomeres, the DNA cannot make any more copies, and the cell containing it will die.
But sperm are not subject to this telomere-shortening effect. In fact, the telomeres in sperm-producing stem cells not only resist degrading, they actually grow. This may be thanks to a high concentration of the telomere-repairing enzyme telomerase in the testicles; researchers are still uncertain. All they know is that the older the man, the longer the telomeres in his sperm will be.
Steve Silberman (@stevesilberman on Twitter) is a journalist whose articles and interviews have appeared in Wired, Nature, The New Yorker, and other national publications; have been featured on The Colbert Report; and have been nominated for National Magazine Awards and included in many anthologies. Steve is currently working on a book on autism and neurodiversity called NeuroTribes: Thinking Smarter About People Who Think Differently (Avery Books 2013). This post originally appeared on his blog, NeuroTribes.
Photo by Flickr user Noodles and Beef
Your doctor doesn’t like what’s going on with your blood pressure. You’ve been taking medication for it, but he wants to put you on a new drug, and you’re fine with that. Then he leans in close and says in his most reassuring, man-to-man voice, “I should tell you that a small number of my patients have experienced some minor sexual dysfunction on this drug. It’s nothing to be ashamed of, and the good news is that this side effect is totally reversible. If you have any ‘issues’ in the bedroom, don’t hesitate to call, and we’ll switch you to another type of drug called an ACE inhibitor.” OK, you say, you’ll keep that in mind.
Three months later, your spouse is on edge. She wants to know if there’s anything she can “do” (wink, wink) to reignite the spark in your marriage. She’s been checking out websites advertising romantic getaways. No, no, you reassure her, it’s not you! It’s that new drug the doctor put me on, and I hate it. When you finally make the call, your doctor switches you over to a widely prescribed ACE inhibitor called Ramipril.
“Now, Ramipril is just a great drug,” he tells you, “but a very few patients who react badly to it find they develop a persistent cough…” Your throat starts to itch even before you fetch the new prescription. Later in the week, you’re telling your buddy at the office that you “must have swallowed wrong” — for the second day in a row. When you type the words ACE inhibitor cough into Google, the text string auto-completes, because so many other people have run the same search, desperately sucking on herbal lozenges between breathless sips of water.
In other words, you’re doomed. Cough, cough!
Emily Elert is a science journalist and writer. Her work has appeared in DISCOVER, Popular Science, Scientific American, and On Earth Magazine.
Last month, CBS Boston aired a story about a man in Massachusetts who caught fire while operating a grill in his backyard. He wasn’t going crazy with lighter fluid, nor was he being careless with propane. No, the culprit was Banana Boat Sport Performance spray-on sunscreen.
But don’t be too quick to blame the orange bottle. After all, this kind of thing does occasionally happen when people spray flammable substances from aerosol cans in close proximity to burning coals. There are, however, other reasons to be suspicious of the summertime mainstay: several recent reports have raised questions about both the effectiveness and safety of sunscreens.
In fact, the National Cancer Institute, a branch of the NIH, declares on its website that studies on sunscreen use and cancer rates in the general population have provided “inadequate evidence” that sunscreens help prevent skin cancer. What’s more, research suggests that some sunscreens might even promote it.
Those are heavy charges for a product that people have long felt so good about using.