Category: Mind & Brain

Nocebo Doubt About It: "Wind Turbine Syndrome" Is Catching

By Keith Kloor | October 23, 2012 11:22 am

Keith Kloor is a freelance journalist whose stories have appeared in a range of publications, from Science to Smithsonian. Since 2004, he’s been an adjunct professor of journalism at New York University. You can find him on Twitter @KeithKloor.


Last month, a group of Massachusetts residents filed an official complaint claiming that the wind turbine in their town is making them sick. According to the article in the Patriot Ledger, the residents “said they’ve lost sleep and suffered headaches, dizziness and nausea as a result of the turbine’s noise and shadow flicker [flashing caused by shadows from moving turbine blades].” A few weeks later, a story from Wisconsin highlighted similar complaints of health problems associated with wind turbines there.

Anecdotal claims like these are on the rise and not just in the United States. A recent story in the UK’s Daily Mail catalogs a litany of health ailments supposedly caused by wind turbines—everything from memory loss and dizziness to tinnitus and depression.

Will such complaints keep spreading? I expect so. For one thing, the alleged health problem has been adopted by demagogues and parroted on popular climate-skeptic websites. But the bigger problem is that “wind turbine syndrome” is what is known as a “communicated” disease, says Simon Chapman, a professor of public health at the University of Sydney. The disease, which has reached epidemic proportions in Australia, “spreads via the nocebo effect by being talked about, and is thereby a strong candidate for being defined as a psychogenic condition,” Chapman wrote several months ago in The Conversation.

What Chapman is describing is a phenomenon akin to mass hysteria—an outbreak of apparent health problems that has a psychological rather than physical basis. Such episodes have occurred throughout human history; earlier this year, a cluster of teenagers at an upstate New York high school was suddenly afflicted with Tourette syndrome-like symptoms. Some speculation attributed the mystery outbreak to environmental contaminants.

But a doctor treating many of the students instead diagnosed them with a psychological condition called “conversion disorder,” as described by psychologist Vaughan Bell on The Crux:

Read More

How Much Detail Should a Presidential Candidate Spill? Your Answer Depends on Your Brain

By Julie Sedivy | October 19, 2012 11:11 am

Julie Sedivy is the lead author of Sold on Language: How Advertisers Talk to You And What This Says About You. She contributes regularly to Psychology Today and Language Log. She is an adjunct professor at the University of Calgary, and can be found at juliesedivy.com and on Twitter/soldonlanguage.

When we tune in to the presidential debates, we want each candidate to tell us what his plan is, and why it will work. But how much information do voters really want? Should Romney unpack his five-point plan and carefully explain the logic behind it? Or should he just reassure us that he knows what he’s doing?

Conventional wisdom has it that too much complexity can mark a candidate for premature political death. History offers up Adlai Stevenson as a prototype of the earnest intellectual who buried his presidential chances under mounds of policy detail—making him a great favorite of the intelligentsia, but too rarely connecting with the average voter. As the story has it, an enthusiastic supporter shouted out during one of his campaigns: “You have the vote of every thinking person.” To which Stevenson allegedly replied (presciently): “That’s not enough, madam. We need a majority.”

The great challenge for candidates during a debate is that they’re not addressing “the average voter.” They’re addressing a mass of citizens with conflicting priorities, beliefs, values, and even different cognitive styles that shape how they evaluate arguments, and just how much detail they want to hear from those who would persuade them.

In the longstanding argument over whether voters are won over by candidates’ style or substance, the answer is undoubtedly: both. All of us rely on fast, intuitive modes of thinking (often called System 1 processing by psychologists) as well as slower, more deliberative evaluation (System 2 thinking). Some situations tilt us more toward one than the other. Anything that limits the sheer computational power we can devote to a task—for instance, watching the debates while at the same time following comments on Twitter—makes us depend more on quick but shallow System 1 processing.

But put different people in the same situation, and some of them will be more likely to fall back on intuitive gut reactions while others will delve into deeper analysis. Some folks, it turns out, simply tend toward more mental activity than others, and psychologists have found a way to measure this difference using the “Need for Cognition” scale, a questionnaire with items such as “I really like a task that involves coming up with new solutions to problems” or “I feel relief rather than satisfaction after completing a task that required a lot of mental effort” (agreement with the latter signals a lower need for cognition).

A long line of research (much of it done by Richard Petty, John Cacioppo and their colleagues) shows that people who score high on Need for Cognition respond differently to persuasive messages than those who score lower. When superficial cues (like the attractiveness or apparent expertise of whoever’s making the pitch) are compared against the quality of an argument, these eager thinkers are more likely to ignore the shallow cues in favor of the stronger argument. People who fall lower on the Need for Cognition scale will often find a logically weak argument as persuasive as a strong one, especially if it comes from the lips of an attractive or knowledgeable person.

In the face of such cognitive diversity, a sound strategy for a political candidate might be to control his style, body language, and general demeanor while also having a good, strong argument ready, appealing to both System 1 and System 2 thinkers. But a recent study in the Journal of Consumer Research by Philip Fernbach and his colleagues suggests that sometimes a well-reasoned, complex, detailed argument can actually repel those inclined toward intuition.

Read More

CATEGORIZED UNDER: Mind & Brain, Top Posts

For Stealthy Electric Cars, Auditory Illusions Could Save Lives

By Mark Changizi | September 13, 2012 9:30 am

Mark Changizi is an evolutionary neurobiologist and director of human cognition at 2AI Labs. He is the author of The Brain from 25,000 Feet, The Vision Revolution, and his newest book, Harnessed: How Language and Music Mimicked Nature and Transformed Ape to Man.


The silent purr of an electric car is a selling point over the vroom of a gasoline engine, but it comes with an undesirable side effect: An electric car can pounce on unsuspecting passersby like a puma on prey. In fact, the NHTSA found that hybrid electric cars are disproportionately dangerous to pedestrians. To deal with this problem, it has been proposed that sound be added to hybrid and electric vehicles, whether bird songs or recordings of someone making “vroom vroom” sounds.

In this light, I wondered whether it might be possible to add “smart sound” to these dangerously quiet cars destined to rule the road in the near future. The solution, I realized, might come from faster-than-light-speed objects on the moon. I’ll get to this crazy-sounding part in a bit.

The Melody of Movement

In setting out to solve this problem, I reasoned that when electric cars are moving very fast, they make enough sound to be heard, thanks to the rumbling of their parts. It’s when they’re moving at lower speeds that they’re most perilous, because at those speeds they’re nearly silent. Therefore, if electric cars are to be fitted with some sound, it should be designed to work even at lower speeds—or, rather, especially at lower speeds.

The next question was: what sort of sound do we want on slowish, stealthy electric cars? To answer this, it helps to grasp the sorts of cues your auditory system uses to detect the movement of objects in your midst.

The most obvious auditory cue is that nearer objects are louder, and so when you hear a moving object rising in loudness, you know it’s getting closer.

But that’s not the most important auditory cue. To illustrate why, imagine walking along a curb with traffic approaching and passing you from behind. The important observation here is that when this happens you aren’t in the least worried. Even without seeing the car, you know it’s merely passing you despite the massive crescendo in its sound. Why?

The Doppler shift changes the observed pitch of the siren as the car moves.

You know the car isn’t going to hit you because of its pitch. Due to the Doppler shift, this car has a falling pitch, and this falling pitch contour tells your brain unambiguously that, although the car is going to get arm-reachably close, it is going to pass you rather than collide with you. If it were going to collide with you, its pitch would be high and constant—that’s the signature of a looming collision.

Read More

CATEGORIZED UNDER: Mind & Brain, Technology, Top Posts

The Internet Pulses With the Rhythms of Human Life

By Neuroskeptic | September 11, 2012 9:30 am

Neuroskeptic is a neuroscientist who takes a skeptical look at his own field and beyond at the Neuroskeptic blog.

Life is dominated by the Earth’s cycles. Day and night, spring and autumn, change the environment in so many ways that almost all organisms regulate their activity to keep up with time and the seasons. Animals sleep, and many hibernate, moult, and breed only at certain times of the year. Plants time the growth of seeds, leaves, fruit and shoots to make the most of the weather.

But what about humans? We sleep, and women menstruate, but do other biological cycles affect our behavior? The Internet has offered researchers a unique resource for answering this question.

For example, according to research published recently in the Archives of Sexual Behavior by American researchers Patrick and Charlotte Markey, Americans are most likely to search for sex online during the early summer and the winter.

The authors looked at Google Trends data for a selection of naughty words and phrases, which revealed a pretty marked six-month cycle for searches originating from the USA, with two yearly peaks in search volume. The words were related to three categories: pornography, sex services (e.g., massage parlors), and dating websites.

Google Trends searches for pornography-related words over time

This image shows the graph for pornography searches (the grey line), with an idealized six-month cycle shown for comparison (the black line). The data show clear twice-yearly peaks. The picture was similar for the two other categories of sexual words: prostitution and dating websites.
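
As a rough illustration of the kind of analysis involved (this is not the Markeys' code or data), the sketch below fits a six-month sinusoid to a synthetic weekly search-volume series; the baseline, amplitude, noise level, and 26-week period are assumptions made purely for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Synthetic stand-in for a weekly Google Trends series: a 26-week
# (roughly six-month) cycle buried in noise.
weeks = np.arange(5 * 52)                                # five years of weekly data
true_cycle = 10 * np.sin(2 * np.pi * weeks / 26)
series = 50 + true_cycle + rng.normal(0, 5, weeks.size)  # baseline + cycle + noise


def six_month_wave(t, amplitude, phase, baseline):
    """Sinusoid with a fixed 26-week period."""
    return baseline + amplitude * np.sin(2 * np.pi * t / 26 + phase)


params, _ = curve_fit(six_month_wave, weeks, series, p0=[5, 0, series.mean()])
amplitude, phase, baseline = params
print(f"fitted amplitude ~ {abs(amplitude):.1f} points around a baseline of {baseline:.1f}")
# A fitted amplitude well above the noise level is the sort of signal that shows up
# as the two yearly peaks described above.
```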

Read More

Psychiatry’s Identity Crisis, and How to Start Fixing It

By Guest Blogger | August 6, 2012 9:30 am

Andres Barkil-Oteo is an assistant professor of psychiatry at Yale University School of Medicine, with research interests in systems thinking, global mental health, and experiential learning in medical education. Find him on Google+ here


Last spring, the American Psychiatric Association (APA) sent out a press release [pdf] noting that the number of U.S. medical students choosing to go into psychiatry has been declining for the past six years, even as the nation faces a notable dearth of psychiatrists. The Lancet, a leading medical journal, wrote that the field had an “identity crisis” related to the fact that it doesn’t seem “scientific enough” to physicians who deal with more tangible problems that afflict the rest of the body. Psychiatry has recently attempted to cope with its identity problem mainly by assuming an evidence-based approach favored throughout medicine. Evidence-based, however, became largely synonymous with medication, with relative disregard for other evidence-based treatments, like some forms of psychotherapy. In the push to become more medically respected, psychiatrists may be forsaking some of the important parts of their unique role in maintaining people’s health.

Over the last 15 years, use of psychotropic medication has increased in all kinds of ways, including off-label use and prescription of multiple drugs in combination. While overall rates of psychotherapy use remained constant during the 1990s, the proportion of the U.S. population using a psychotropic drug increased from 3.4 percent in 1987 to 8.1 percent by 2001. Antidepressants are now the second-most prescribed class of medication in the U.S., preceded only by lipid regulators, a class of heart drugs that includes statins like Lipitor. Several factors have contributed to this increase: direct-to-consumer advertising; development of effective drugs with fewer side effects (e.g., SSRIs); expansion in health coverage for mental illness made possible through the Mental Health Parity Act; and an increase in prescriptions from non-psychiatric physicians.

Unfortunately, not all of these psychiatric drugs are going to good use. Antidepressants are widely used to treat people with mild or even sub-clinical depression, even though the drugs tend to be less cost-effective for those patients. It may sound paradoxical, but to get more benefit from antidepressants, we need to use them less, reserving them for patients with moderate to severe clinical depression. Patients with milder forms should be encouraged to try time-limited, evidence-based psychotherapies; several APA-endorsed clinical guidelines center on psychotherapies (e.g., cognitive behavioral therapy or behavioral activation) as a first-line treatment for moderate depression, anxiety, and eating disorders, and as a secondary treatment to go with medication for schizophrenia and bipolar disorder.

Read More

Are Warnings About Drug Side Effects Actually Making Us Sick?

By Guest Blogger | July 24, 2012 12:53 pm

Steve Silberman (@stevesilberman on Twitter) is a journalist whose articles and interviews have appeared in Wired, Nature, The New Yorker, and other national publications; have been featured on The Colbert Report; and have been nominated for National Magazine Awards and included in many anthologies. Steve is currently working on a book on autism and neurodiversity called NeuroTribes: Thinking Smarter About People Who Think Differently (Avery Books 2013). This post originally appeared on his blog, NeuroTribes.

 

Patient receiving a vaccine. Photo by Flickr user Noodles and Beef

Your doctor doesn’t like what’s going on with your blood pressure. You’ve been taking medication for it, but he wants to put you on a new drug, and you’re fine with that. Then he leans in close and says in his most reassuring, man-to-man voice, “I should tell you that a small number of my patients have experienced some minor sexual dysfunction on this drug. It’s nothing to be ashamed of, and the good news is that this side effect is totally reversible. If you have any ‘issues’ in the bedroom, don’t hesitate to call, and we’ll switch you to another type of drug called an ACE inhibitor.” OK, you say, you’ll keep that in mind.

Three months later, your spouse is on edge. She wants to know if there’s anything she can “do” (wink, wink) to reignite the spark in your marriage. She’s been checking out websites advertising romantic getaways. No, no, you reassure her, it’s not you! It’s that new drug the doctor put me on, and I hate it. When you finally make the call, your doctor switches you over to a widely prescribed ACE inhibitor called Ramipril.

“Now, Ramipril is just a great drug,” he tells you, “but a very few patients who react badly to it find they develop a persistent cough…” Your throat starts to itch even before you fetch the new prescription. Later in the week, you’re telling your buddy at the office that you “must have swallowed wrong” — for the second day in a row. When you type the words ACE inhibitor cough into Google, the text string auto-completes, because so many other people have run the same search, desperately sucking on herbal lozenges between breathless sips of water.

In other words, you’re doomed. Cough, cough!

Read More

Is Autism an “Epidemic” or Are We Just Noticing More People Who Have It?

By Guest Blogger | July 11, 2012 4:37 pm

Emily Willingham (Twitter, Google+, blog) is a science writer and compulsive biologist whose work has appeared at Slate, Grist, Scientific American Guest Blog, and Double X Science, among others. She is science editor at the Thinking Person’s Guide to Autism and author of The Complete Idiot’s Guide to College Biology.


In March the US Centers for Disease Control and Prevention (CDC) released its newly measured autism prevalence figures for 8-year-olds in the United States, and headlines roared about a “1 in 88 autism epidemic.” The fear-mongering has led some enterprising folk to latch onto our nation’s growing chemophobia and link the rise in autism to “toxins” or other alleged insults, and some to sell their research, books, and “cures.” On the other hand, some researchers say that what we’re really seeing is likely the upshot of more awareness about autism and ever-shifting diagnostic categories and criteria.

Even though autism is now widely discussed in the media and society at large, the public and some experts alike are still stymied by a couple of the big, basic questions about the disorder: What is autism, and how do we identify—and count—it? A close look shows that the unknowns involved in both of these questions suffice to explain the reported autism boom. The disorder hasn’t actually become much more common—we’ve just developed better and more accurate ways of looking for it.

Leo Kanner first described autism almost 70 years ago, in 1943. Before that, autism didn’t exist as far as clinicians were concerned, and its official prevalence was, therefore, zero. There were, obviously, people with autism, but they were simply considered insane. Kanner himself noted in a 1965 paper that after he identified this entity, “almost overnight, the country seemed to be populated by a multitude of autistic children,” a trend that became noticeable in other countries, too, he said.

In 1951, Kanner wrote, the “great question” became whether to continue to roll autism into schizophrenia diagnoses, where it had been previously tucked away, or to consider it as a separate entity. But by 1953, one autism expert was warning about the “abuse of the diagnosis of autism” because it “threatens to become a fashion.” Sixty years later, plenty of people are still asserting that autism is just a popular diagnosis du jour (along with ADHD) that parents and doctors use to explain plain-old bad behavior.

Asperger’s syndrome, a form of autism sometimes known as “little professor syndrome,” is in the same we-didn’t-see-it-before-and-now-we-do situation. In 1981, noted autism researcher Lorna Wing translated and revivified Hans Asperger’s 1944 paper describing this syndrome as separate from Kanner’s autistic disorder, although Wing herself argued that the two were part of a borderless continuum. Thus, prior to 1981, Asperger’s wasn’t a diagnosis, in spite of having been identified almost 40 years earlier. Again, the official prevalence was zero before its adoption by the medical community.

And so, here we are today, with two diagnoses that didn’t exist 70 years ago (plus a third, even newer one: PDD-NOS) even though the people with the conditions did. The CDC’s new data say that in the United States, 1 in 88 eight-year-olds fits the criteria for one of these three, up from 1 in 110 for its 2006 estimate. Is that change the result of an increase in some dastardly environmental “toxin,” as some argue? Or is it because of diagnostic changes and reassignments, as happened when autism left the schizophrenia umbrella?
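
For a sense of scale, here is the simple arithmetic behind those two figures (my calculation, not the CDC's presentation of it): converting the ratios to percentages and computing the relative change between estimates.

```python
prev_new = 1 / 88    # CDC figure reported in 2012 for 8-year-olds
prev_old = 1 / 110   # the previous estimate, for 2006

relative_rise = (prev_new - prev_old) / prev_old
print(f"1 in 88  = {prev_new:.2%}")              # about 1.14%
print(f"1 in 110 = {prev_old:.2%}")              # about 0.91%
print(f"relative increase = {relative_rise:.0%}")  # a 25% jump between estimates
```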

Read More

War Has Deep Roots in Human Nature, But It’s Not Inevitable

By Razib Khan | July 5, 2012 1:35 pm


Thomas Malthus

In the June 2012 issue of Discover, E. O. Wilson authored a piece with the provocative title, “Is War Inevitable?” Derived from his recent book The Social Conquest of Earth, the narrative has a rather simple answer to the question implied in the title: war is inevitable, because it is part of human nature, and, perhaps more provocatively, it shaped human nature. John Horgan, who recently penned The End of War, rebuts Wilson’s argument point by point in a companion article, “No, War Is Not Inevitable.” I find myself in a curious position: I agree with John Horgan’s conclusion—that war is not inevitable—but not for the same reasons. Horgan is right that Wilson relies on a particular, controversial group of ethologists to assert that chimps have frequent inter-group conflicts and that humans have always had wars, but Horgan himself leans on his own preferred group of scholars to make the opposite points. Both of them, I think, miss the crucial part of the answer: the tricky interplay between nature and nurture.

With a strong background in ecology, Wilson assumes a Malthusian paradigm when it comes to human numbers and resources. In other words, we are subject to a carrying capacity. When there is a surplus of resources, population size increases and “catches up” to the resource base. After a time, an equilibrium develops between population and resources. How? The reality is that, for solid evolutionary reasons, individuals do not altruistically reduce their own reproductive output. Rather, the population “self-regulates.” In the jargon, there is “intra-species competition,” as individuals and groups scramble for finite resources. (There are also, of course, inter-species factors, such as predators, prey, and parasites.) The losers die, while the winners reproduce. Each generation is witness to conflicts that check the population and maintain the equilibrium.
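
The Malthusian picture Wilson assumes is, in essence, the logistic (“carrying capacity”) model familiar from ecology. The sketch below is a minimal illustration of that model with made-up parameter values; it is not drawn from Wilson’s book or the article.

```python
def logistic_growth(pop, growth_rate, carrying_capacity, years):
    """Yearly (Euler) update of dP/dt = r * P * (1 - P/K): growth runs nearly
    unchecked while resources are plentiful, then stalls as the population
    approaches the carrying capacity K."""
    history = [pop]
    for _ in range(years):
        pop += growth_rate * pop * (1 - pop / carrying_capacity)
        history.append(pop)
    return history


# Illustrative numbers only: 1,000 people, 3% annual growth, resources for 10,000.
trajectory = logistic_growth(pop=1_000, growth_rate=0.03, carrying_capacity=10_000, years=300)
for year in (0, 50, 100, 200, 300):
    print(f"year {year:3d}: population = {trajectory[year]:,.0f}")
# The population "catches up" to the resource base and then hovers near it: the
# equilibrium that, in Wilson's telling, intra-species competition enforces.
```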

Read More

CATEGORIZED UNDER: Mind & Brain, Top Posts

How Doctors Can Ethically Harness the Placebo Effect

By Guest Blogger | June 26, 2012 9:47 am


Howard Brody, MD, PhD, is the John P. McGovern Centennial Chair in Family Medicine and Director of the Institute for the Medical Humanities at the University of Texas Medical Branch, Galveston. 

For years, doctors thought that placebos like sugar pills were totally inert, just something to be given out to mollify a demanding patient without any expected health benefits. Gradually, both physicians and medical researchers came to realize that such treatments can sometimes cause substantial improvement of symptoms, even when there’s no chemical or other biomedical explanation for what occurs—a phenomenon called the placebo effect. In a recent commentary in the Journal of Medical Ethics, Cory Harris and Amir Raz of McGill summarize the data from recent surveys of physician use of placebos in clinical practice in several nations.

They find that prescribing drugs like antibiotics or supplements like vitamins as placebos is now a widespread practice. This is happening without any public guidelines or regulations for placebos’ use, which raises an important question: How, exactly, should physicians be using the placebo effect to help patients?

This discussion is necessary because the understanding of the placebo effect is changing, and fast. In the past decade, scientists have used brain-scanning to see just which parts of the brain, and in what order, become active when a patient takes a placebo pill for various conditions. Other investigators have looked more closely at the treatment environment and sorted out what parts of that environment rev up a placebo response. For example, seeing a nurse inject a painkiller into your IV line gives you roughly twice as much pain relief as having the same dose of medicine administered by a hidden pump. Getting acupuncture treatment from a warm and friendly practitioner works better than the same treatment from a cold, distant one. There’s even some preliminary evidence to suggest that patients experience positive placebo effects even when told frankly that the pills they are taking are placebos, with no active chemical ingredients.

This research—and perhaps personal experience—has changed the way doctors view the importance of their patients’ mental states. Surveys from 20–30 years ago found a general belief among physicians that placebos were completely inert and powerless, and that if any good effect occurred, it was only in the patient’s imagination. The newer surveys, one of which I participated in, show a small revolution in physician thinking about mind-body relations. Physicians today generally agree that placebos can actually have a positive effect on the patient’s body, and that mind-body medicine “works.” That’s important, and has not been sufficiently noted.

Read More

Don’t Call a 9-Year-Old a “Psychopath”

By Guest Blogger | June 20, 2012 10:51 am

Emily Willingham (Twitter, Google+, blog) is a science writer and compulsive biologist whose work has appeared at Slate, Grist, Scientific American Guest Blog, and Double X Science, among others. She is science editor at the Thinking Person’s Guide to Autism and author of The Complete Idiot’s Guide to College Biology.


In May, the New York Times Magazine published a piece by Jennifer Kahn entitled, “Can you call a 9-year-old a psychopath?” The online version generated a great deal of discussion, including 631 comments and a column from Amanda Marcotte at Slate comparing psychopathy and autism. Marcotte’s point seemed to be that if we accept autism as another variant of human neurology rather than as a moral failing, should we not also apply that perspective to the neurobiological condition we call “psychopathy”? Some autistic people took umbrage at the association with psychopathy, a touchy comparison in the autism community in particular. Who would want to be compared to a psychopath, especially if you’ve been the target of one?

In her Times piece, Kahn noted that although no tests exist to diagnose psychopathy in children, many in the mental health professions “believe that psychopathy, like autism, is a distinct neurological condition (that) can be identified in children as young as 5.” Marcotte likely saw this juxtaposition with autism and based her Slate commentary on the comparison. But a better way to make this point (and to avoid a minefield), I’d argue, is to stop mentioning autism at all and to say that any person’s neurological make-up isn’t a matter of morality but of biology. If we argue for acceptance of you and your brain, regardless of how it works, we should argue for acceptance of people who are psychopaths. They are no more to blame for how they developed than people with other disabilities.

If being compared with a psychopath elicits a whiplash-inducing mental recoil, then you probably have a good understanding of why the autism community responded to Marcotte’s piece (and accompanying tweets) so defensively, even though her point was a good one. At its core, the argument is a logical, even humanistic one. When it comes to psychopathy, our cultural tendencies are to graft moral judgment onto people who exhibit symptoms of psychopathy, a condition once designated as “moral insanity.” We tend collectively to view the psychopath as a cold-hearted, amoral entity walking around in a human’s body, a literal embodiment of evil.

But those grown people whom we think of as being psychopaths were once children. What were our most infamous psychopaths like when they were very young? Was there ever a time when human intervention could have deflected the trajectory they took, turned the path away from the horror, devastation, and tragedy they caused, one that not all psychopaths ultimately follow? Can we look to childhood as a place to identify the traits of psychopathy and, once known, apply early intervention?

Read More

CATEGORIZED UNDER: Mind & Brain, Top Posts