By Wind Goodfriend
This article originally appeared on Dr. Goodfriend’s blog “A Psychologist at the Movies.”
I’m completely obsessed with The Hunger Games. I’m not sure why. Maybe it’s because I have visited North Korea, a real country where millions of people really are dying of hunger. Maybe it’s the ironic meta-experience of watching the movie’s violence on a huge screen, when the movie’s point is that people shouldn’t watch violence on a huge screen. Regardless, The Hunger Games is chock-full of material for psychological analysis. Today I’m focusing on the fascinatingly weird emotions that spark between The Hunger Games’ two main protagonists, Peeta and Katniss.
At home, Katniss has an obvious love interest: a young man named Gale. He has rugged good looks, he’s brave, and they are perfectly matched in many ways. Both Katniss and Gale fight against the system in their own way (which becomes increasingly clear as the trilogy continues), and he reliably makes Katniss feel comforted in a world that offers few comforts.
So why does Katniss later fall for Peeta? Peeta certainly has lovable qualities – he’s smart, nurturing, and can frost a cake like nobody’s business – but he and Katniss are not exactly a natural pair. Their personalities clash, their goals in life are different, and Katniss really isn’t interested in any kind of frivolous romance. Sure, in the first movie she is ambivalent about her feelings for Peeta, the kind-hearted boy with a sexy baby-faced look. But psychology would have predicted their blossoming feelings for each other due to their experiences together in the Hunger Games. It’s all because of a phenomenon called misattribution of arousal.
By Matthew D. Lieberman
Comedian Jerry Seinfeld used to tell the following joke: “According to most studies, people’s number one fear is public speaking. Death is number two. Does this sound right? This means to the average person, if you go to a funeral, you’re better off in the casket than doing the eulogy.”
The joke riffs on a privately conducted 1973 survey of 2,500 people, in which 41 percent of respondents indicated that they feared public speaking and only 19 percent indicated that they feared death. While this improbable ordering has not been replicated in most other surveys, public speaking is typically high on the list of our deepest fears. “Top ten” lists of our fears usually fall into three categories: things associated with great physical harm or death, the death or loss of loved ones, and speaking in public.
What is curious is that the person speaking probably doesn’t know or care about most of the people there. So why does it matter so much what they think? The answer is that it hurts to be rejected.
Ask yourself what have been the one or two most painful experiences of your life. Did you think of the physical pain of a broken leg or a really bad fall? My guess is that at least one of your most painful experiences involved what we might call social pain—pain of a loved one’s dying, of being dumped by someone you loved, or of experiencing some kind of public humiliation in front of others.
Why do we associate such events with the word pain? When human beings experience threats or damage to their social bonds, the brain responds in much the same way it responds to physical pain.
By Gina Perry
It’s one of the most famous psychology experiments in history – the 1961 study in which social psychologist Stanley Milgram invited volunteers to take part in what was billed as research on memory and learning. Its actual aim, though, was to investigate obedience to authority – and Milgram reported that fully 65 percent of volunteers repeatedly administered increasing electric shocks to a man they believed to be in severe pain.
In the decades since, the results have been held up as proof of the depths of ordinary people’s depravity in service to an authority figure. At the time, this had deep and resonant connections to the Holocaust and Nazi Germany – so resonant, in fact, that they might have led Milgram to dramatically misrepresent his hallmark findings.
Stanley Milgram framed his research from the get-go as both inspired by and an explanation of Nazi behavior. He mentioned the gas chambers in the opening paragraph of his first published article; he strengthened the link and made it more explicit twelve years later in his book, Obedience to Authority.
At the time Milgram’s research was first published, the trial of the high-profile Nazi Adolf Eichmann was still fresh in the public mind. Eichmann had been captured in Buenos Aires and smuggled out of the country to stand trial in Israel. The trial was the first of its kind to be televised.
Andrew Grant is an associate editor at DISCOVER. His latest feature, “William Borucki: Planet Hunter,” appears in the December issue of the magazine.
Last night Major League Baseball announced the winners of the Cy Young Award, given to the year’s best pitchers in the American and National leagues. The National League victor was New York Mets pitcher R.A. Dickey. That he won the award is remarkable, and not just because he is a relatively ancient 38 years old or because he plays for the perennial punch line Mets. Dickey is the first Cy Young winner whose repertoire consists primarily of the knuckleball, a baffling pitch whose intricacies scientists are only now beginning to understand.
Most pitchers, including the other Cy Young finalists, try to overwhelm hitters with a combination of speed and movement. They throw the ball hard—the average major league fastball zooms in at around 91 miles per hour—and generate spin (up to 50 rotations a second) that makes the ball break, or deviate from a straight-line trajectory. Dickey does neither of those things. Rather than cock his arm back and fire, he pushes the ball like a dart so that it floats toward the plate between 55 and 80 mph. The ball barely spins at all—perhaps a quarter- or half-turn before reaching the hitter.
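A quick back-of-the-envelope calculation makes the contrast concrete. The sketch below is my own illustration, not from the article; it assumes the full 60.5-foot mound-to-plate distance, although pitchers actually release the ball several feet closer:

```python
# Back-of-the-envelope comparison (illustrative only) of how many times a
# fastball and a knuckleball rotate on the way to the plate.
# Assumes the full 60.5 ft mound-to-plate distance; real release points
# are a few feet closer, so these figures slightly overstate flight time.

MOUND_TO_PLATE_FT = 60.5
MPH_TO_FTPS = 5280 / 3600  # 1 mph is about 1.467 ft/s

def rotations_in_flight(speed_mph: float, spin_rps: float) -> float:
    """Total rotations the ball completes between release and the plate."""
    flight_time_s = MOUND_TO_PLATE_FT / (speed_mph * MPH_TO_FTPS)
    return spin_rps * flight_time_s

# A 91 mph fastball spinning 50 times per second:
print(f"fastball: {rotations_in_flight(91, 50):.1f} rotations")  # ~22.7

# A 65 mph knuckleball that makes only a half-turn in flight implies:
flight_time_s = MOUND_TO_PLATE_FT / (65 * MPH_TO_FTPS)
print(f"knuckleball: {0.5 / flight_time_s:.2f} rotations per second")  # ~0.8
```

Roughly two dozen rotations for the fastball versus a fraction of one for the knuckleball: it is that near-total absence of spin that makes the knuckleball’s path so erratic.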
Keith Kloor is a freelance journalist whose stories have appeared in a range of publications, from Science to Smithsonian. Since 2004, he’s been an adjunct professor of journalism at New York University. You can find him on Twitter @KeithKloor.
Last month, a group of Massachusetts residents filed an official complaint claiming that the wind turbine in their town is making them sick. According to the article in the Patriot Ledger, the residents “said they’ve lost sleep and suffered headaches, dizziness and nausea as a result of the turbine’s noise and shadow flicker [flashing caused by shadows from moving turbine blades].” A few weeks later, a story from Wisconsin highlighted similar complaints of health problems associated with wind turbines there.
Anecdotal claims like these are on the rise and not just in the United States. A recent story in the UK’s Daily Mail catalogs a litany of health ailments supposedly caused by wind turbines—everything from memory loss and dizziness to tinnitus and depression.
Will such claims keep spreading? I expect so. For one thing, the alleged health problem has been adopted by demagogues and parroted on popular climate-skeptic websites. But the bigger problem is that “wind turbine syndrome” is what is known as a “communicated” disease, says Simon Chapman, a professor of public health at the University of Sydney. The disease, which has reached epidemic proportions in Australia, “spreads via the nocebo effect by being talked about, and is thereby a strong candidate for being defined as a psychogenic condition,” Chapman wrote several months ago in The Conversation.
What Chapman is describing is a phenomenon akin to mass hysteria—an outbreak of apparent health problems that has a psychological rather than physical basis. Such episodes have occurred throughout human history; earlier this year, a cluster of teenagers at an upstate New York high school were suddenly afflicted with Tourette syndrome-like symptoms. Some speculated that the mystery outbreak was caused by environmental contaminants.
But a doctor treating many of the students instead diagnosed them with a psychological condition called “conversion disorder,” as described by psychologist Vaughan Bell on The Crux:
Julie Sedivy is the lead author of Sold on Language: How Advertisers Talk to You And What This Says About You. She contributes regularly to Psychology Today and Language Log. She is an adjunct professor at the University of Calgary, and can be found at juliesedivy.com and on Twitter/soldonlanguage.
When we tune in to the presidential debates, we want each candidate to tell us what his plan is, and why it will work. But how much information do voters really want? Should Romney unpack his five-point plan and carefully explain the logic behind it? Or should he just reassure us that he knows what he’s doing?
Conventional wisdom has it that too much complexity can mark a candidate for premature political death. History offers up Adlai Stevenson as a prototype of the earnest intellectual who buried his presidential chances under mounds of policy detail—making him a great favorite of the intelligentsia, but too rarely connecting with the average voter. As the story has it, an enthusiastic supporter shouted out during one of his campaigns: “You have the vote of every thinking person.” To which Stevenson allegedly replied (presciently): “That’s not enough, madam. We need a majority.”
The great challenge for candidates during a debate is that they’re not addressing “the average voter.” They’re addressing a mass of citizens with conflicting priorities, beliefs, values, and even different cognitive styles that shape how they evaluate arguments, and just how much detail they want to hear from those who would persuade them.
In the longstanding argument over whether voters are won over by candidates’ style or substance, the answer is undoubtedly: both. All of us rely on fast, intuitive modes of thinking (often called System 1 processing by psychologists) as well as slower, more deliberative evaluation (System 2 thinking). Some situations tilt us more toward one than the other. Anything that limits the sheer computational power we can devote to a task—for instance, watching the debates while at the same time following comments on Twitter—makes us depend more on quick but shallow System 1 processing.
But put different people in the same situation, and some of them will be more likely to fall back on intuitive gut reactions while others will delve into deeper analysis. Some folks, it turns out, simply tend toward more mental activity than others, and psychologists have found a way to measure this difference using the “Need for Cognition” scale, a questionnaire with items such as: “I really like a task that involves coming up with new solutions to problems” or “I feel relief rather than satisfaction after completing a task that required a lot of mental effort.”
A long line of research (much of it done by Richard Petty, John Cacioppo and their colleagues) shows that people who score high on Need for Cognition respond differently to persuasive messages than those who score lower. When superficial cues (like the attractiveness or apparent expertise of whoever’s making the pitch) are compared against the quality of an argument, these eager thinkers are more likely to ignore the shallow cues in favor of the stronger argument. People who fall lower on the Need for Cognition scale will often find a logically weak argument as persuasive as a strong one, especially if it comes from the lips of an attractive or knowledgeable person.
In the face of such cognitive diversity, a sound strategy for a political candidate might be to control his style, body language, and general demeanor, while also having a good, strong argument ready, so as to appeal to both System 1 and System 2 thinkers. But a recent study in the Journal of Consumer Research by Philip Fernbach and his colleagues suggests that sometimes a well-reasoned, complex, detailed argument can actually repel those inclined toward intuition.
Steve Silberman (@stevesilberman on Twitter) is a journalist whose articles and interviews have appeared in Wired, Nature, The New Yorker, and other national publications; have been featured on The Colbert Report; and have been nominated for National Magazine Awards and included in many anthologies. Steve is currently working on a book on autism and neurodiversity called NeuroTribes: Thinking Smarter About People Who Think Differently (Avery Books 2013). This post originally appeared on his blog, NeuroTribes.
Your doctor doesn’t like what’s going on with your blood pressure. You’ve been taking medication for it, but he wants to put you on a new drug, and you’re fine with that. Then he leans in close and says in his most reassuring, man-to-man voice, “I should tell you that a small number of my patients have experienced some minor sexual dysfunction on this drug. It’s nothing to be ashamed of, and the good news is that this side effect is totally reversible. If you have any ‘issues’ in the bedroom, don’t hesitate to call, and we’ll switch you to another type of drug called an ACE inhibitor.” OK, you say, you’ll keep that in mind.
Three months later, your spouse is on edge. She wants to know if there’s anything she can “do” (wink, wink) to reignite the spark in your marriage. She’s been checking out websites advertising romantic getaways. No, no, you reassure her, it’s not you! It’s that new drug the doctor put me on, and I hate it. When you finally make the call, your doctor switches you over to a widely prescribed ACE inhibitor called ramipril.
“Now, ramipril is just a great drug,” he tells you, “but a very few patients who react badly to it find they develop a persistent cough…” Your throat starts to itch even before you fetch the new prescription. Later in the week, you’re telling your buddy at the office that you “must have swallowed wrong” — for the second day in a row. When you type the words ACE inhibitor cough into Google, the text string auto-completes, because so many other people have run the same search, desperately sucking on herbal lozenges between breathless sips of water.
In other words, you’re doomed. Cough, cough!
Emily Willingham (Twitter, Google+, blog) is a science writer and compulsive biologist whose work has appeared at Slate, Grist, Scientific American Guest Blog, and Double X Science, among others. She is science editor at the Thinking Person’s Guide to Autism and author of The Complete Idiot’s Guide to College Biology.
In May, the New York Times Magazine published a piece by Jennifer Kahn entitled, “Can you call a 9-year-old a psychopath?” The online version generated a great deal of discussion, including 631 comments and a column from Amanda Marcotte at Slate comparing psychopathy and autism. Marcotte’s point seemed to be that if we accept autism as another variant of human neurology rather than as a moral failing, should we not also apply that perspective to the neurobiological condition we call “psychopathy”? Some autistic people took umbrage at the association with psychopathy, a touchy comparison in the autism community in particular. Who would want to be compared to a psychopath, especially if they’ve been the target of one?
In her Times piece, Kahn noted that although no tests exist to diagnose psychopathy in children, many in the mental health professions “believe that psychopathy, like autism, is a distinct neurological condition [that] can be identified in children as young as 5.” Marcotte likely saw this juxtaposition with autism and based her Slate commentary on the comparison. But a better way to make this point (and to avoid a minefield), I’d argue, is to stop mentioning autism at all and to say that any person’s neurological make-up isn’t a matter of morality but of biology. If we argue for acceptance of you and your brain, regardless of how it works, we should argue for acceptance of people who are psychopaths. They are no more to blame for how they developed than people with other disabilities.
If being compared with a psychopath elicits a whiplash-inducing mental recoil, then you probably have a good understanding of why the autism community responded to Marcotte’s piece (and accompanying tweets) so defensively, even though her point was a good one. At its core, the argument is a logical, even humanistic one. When it comes to psychopathy, our cultural tendencies are to graft moral judgment onto people who exhibit symptoms of psychopathy, a condition once designated as “moral insanity.” We tend collectively to view the psychopath as a cold-hearted, amoral entity walking around in a human’s body, a literal embodiment of evil.
But those grown people whom we think of as being psychopaths were once children. What were our most infamous psychopaths like when they were very young? Was there ever a time when human intervention could have deflected the trajectory they took, turned the path away from the horror, devastation, and tragedy they caused, one that not all psychopaths ultimately follow? Can we look to childhood as a place to identify the traits of psychopathy and, once known, apply early intervention?
The American Psychiatric Association has just published the latest update of the draft DSM-5 psychiatric diagnosis manual, which is due to be completed in 2013. The changes have provoked much comment, criticism, and heated debate, and many have used the opportunity to attack psychiatric diagnosis and the perceived failure to find “biological tests” to replace descriptions of mental phenomena. But to understand the strengths and weaknesses of psychiatric diagnosis, it’s important to know where the challenges lie.
Think of classifying mental illness like classifying literature. For the purposes of research, and to help people with their reading, I want to be able to say whether a book falls within a certain genre—perhaps supernatural horror, romantic fiction, or historical biography. The problem is similar because both mental disorder and literature are largely defined at the level of meaning, which inevitably involves our subjective perceptions. For example, there is no objective way of defining whether a book is a love story or whether a person has a low mood. This fact is used by some to suggest that the diagnosis of mental illness is just “made up” or “purely subjective,” but this is clearly rubbish. Although the experience is partly subjective, we can often agree on classifications.
Speaking the same language
How well people can agree on a classification is known as inter-rater reliability, and to have a diagnosis accepted, you should ideally demonstrate that different people can use the same definition to classify different cases in the same way. In other words, we want to be sure that we’re all speaking the same language—when one doctor says a patient has “depression,” another should agree. To do this, it’s important to have definitions that are easy to interpret and apply, and that rely on widely recognised features.
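In practice, inter-rater reliability is usually summarised with a chance-corrected agreement statistic such as Cohen’s kappa, which discounts the agreement two raters would reach just by guessing. Here is a minimal sketch in Python (my own illustration, not from this article; the diagnostic labels are entirely hypothetical):

```python
# Minimal sketch of Cohen's kappa, a chance-corrected measure of
# inter-rater reliability. Illustrative only; the labels are hypothetical.
from collections import Counter

def cohens_kappa(rater_a: list, rater_b: list) -> float:
    n = len(rater_a)
    # Raw agreement: fraction of cases both raters labelled identically.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: overlap expected from each rater's label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[label] * freq_b[label] for label in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Two clinicians classifying the same ten cases (made-up data):
doctor_1 = ["depression", "anxiety", "depression", "none", "anxiety",
            "depression", "none", "depression", "anxiety", "none"]
doctor_2 = ["depression", "anxiety", "anxiety", "none", "anxiety",
            "depression", "none", "depression", "depression", "none"]

print(f"kappa = {cohens_kappa(doctor_1, doctor_2):.2f}")  # 0.70
```

Raw agreement here is 80 percent, but kappa comes out around 0.70, because the two clinicians would sometimes agree by chance alone; a kappa near zero would mean the shared definition is doing no work at all.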
To return to our literature example, it’s possible to define romantic fiction in different ways, but if I want to make sure that other people can use my definition, it’s important to choose criteria that are clear, concise, and easily applicable. It’s easier to decide whether the book has “a romantic relationship between two of the main characters” than whether the book involves “an exploration of love, loss and the yearning of the heart.” Similarly, “low mood” is easier to detect than a “melancholic temperament.”
Science journalist Robin Marantz Henig is a contributing writer at The New York Times Magazine. Her next book, co-authored with her daughter Samantha Henig, is called Twentysomething: Why Do Young Adults Seem Stuck? and will be out in November.
Is regret something you accumulate in your life, piling it up as you remember an ever-increasing number of things that really could have gone better? If so, you’d think that young people would have fewer regrets than older ones, since they haven’t lived as long and haven’t missed as many chances—and if they have missed a chance at some adventure or relationship, they’re more likely to think that the chance will come around again.
But a recent study by Stefanie Brassen and her colleagues at University Medical Center Hamburg-Eppendorf in Germany suggests that young people feel more regret than old people, largely because the older people seem to be quashing those nasty feelings before the feelings overtake them. Indeed, they found that the only 60-somethings who experienced regret at the same level as 20-somethings were those who were depressed.
I think it’s worth considering, though, whether the German investigators really were tapping into regret at all, or a different aspect of youth psychology.
Brassen and her colleagues simulated regret by having their subjects play a Let’s Make a Deal-type computer game in which they opened a succession of boxes to earn cash. They could keep opening boxes and keep accumulating cash as long as they stopped before they opened the box containing a pop-out devil. If they got to the devil, the game was over and they had to give back everything they’d earned in that round.
The researchers were less interested in how many boxes the subjects opened than in how they felt about the chances they missed. After the round was over, the investigators revealed the contents of the unopened boxes. The more boxes the subjects could have opened before getting to the devil, the more regret they were expected to feel, since they could have earned even more money if they’d been just a little more daring.
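To make the paradigm concrete, here is a toy simulation of one round. This is my own sketch, not the researchers’ code; the number of boxes, the cash value per box, and the fixed stopping rule are all my assumptions:

```python
# Toy simulation of the devil-box game (illustrative only; the box count,
# payout, and stopping rule are assumptions, not the study's parameters).
import random

def play_round(stop_after: int, n_boxes: int = 8, cash_per_box: int = 10) -> dict:
    devil = random.randrange(n_boxes)  # the devil hides in one random box
    # Open boxes one at a time until we choose to stop or hit the devil.
    for opened in range(stop_after):
        if opened == devil:
            # Busted: the round ends and all winnings are forfeited.
            return {"winnings": 0, "missed_chances": 0, "busted": True}
    # Stopped safely. The regret measure: how many MORE boxes were safe,
    # i.e. money left on the table by stopping early.
    return {"winnings": stop_after * cash_per_box,
            "missed_chances": devil - stop_after,
            "busted": False}

random.seed(1)
for _ in range(3):
    print(play_round(stop_after=3))
```

The missed_chances count is the quantity of interest: the more safe boxes a subject left unopened, the more regret the situation was designed to induce.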