When considering extreme environments it is easy to make assumptions about personality that, on closer examination, do not stand up to scrutiny. Take, for example, one of the best-researched personality dimensions: introversion-extraversion. Extraversion as a trait appears in all established psychological models of personality, and there is considerable evidence that it has a biological basis. The concepts of introversion and extraversion long ago escaped the confines of academic psychology and are widely used in everyday conversation, albeit in ways that do not always reflect the psychological definitions.
Broadly speaking, individuals who score highly on measures of extraversion tend to seek stimulation, whereas those who score low tend to avoid it. When asked to describe a typical extravert, most people tend to think of the lively ‘party animal,’ equating extraversion with a preference for social interactions. However, individuals who score highly for extraversion seek more than just social stimulation: they also tend to gravitate toward other stimulating situations, including active leisure and work pursuits, travel, sex, and even celebrity. Introverts, on the other hand, have a generally lower affinity for stimulation.
They find too much stimulation, of whatever type, draining rather than energizing. Contrary to popular belief, introverts are not necessarily shy or fearful of social situations, unless they also score highly on measures of social anxiety and neuroticism.
On this basis, one might assume that extraverts would be drawn to extreme environments, where they could satisfy their desire for stimulating situations, whereas introverts would find them unattractive. And yet, extreme environments may also expose people to monotony and solitude — experiences that extraverts would find aversive, but which are tolerated or even enjoyed by well-balanced introverts. The point here is that simple assumptions about broad personality traits are unlikely to provide good explanations of why people engage in extreme activities.
This article was originally published on The Conversation.
International airports are busy places: nearly 140,000 passengers pass through New York’s JFK Airport every day. A country’s internal security depends on effective airport checks.
All departing passengers pass through a series of security procedures before boarding their plane. One such procedure is a short, scripted interview during which security personnel must assess passenger risk by looking for behavioral indicators of deception.
These are referred to as “suspicious signs”: signs of nervousness, aggression and an unusual interest in security procedures, for example. However, this approach has never been empirically validated and its continued use is criticized for being based on outdated, unreliable perceptions of how people behave when being deceptive.
Despite these concerns, the suspicious signs approach continues to dominate security screening: the US government spends $200 million yearly on behavior-detection officers, who are tasked with spotting suspicious signs. This is a waste of money.
We tend to think of medicine as being all about pills and potions recommended to us by another person—a doctor. But science is starting to reveal that for many conditions another ingredient could be critical to the success of these drugs, or perhaps even replace them. That ingredient is nothing more than your own mind.
Here are six ways to raid your built-in medicine cabinet.
“I talk to my pills,” says Dan Moerman, an anthropologist at the University of Michigan-Dearborn. “I say, ‘Hey guys, I know you’re going to do a terrific job.’”
That might sound eccentric, but based on what we’ve learned about the placebo effect, there is good reason to think that talking to your pills really can make them do a terrific job. The way we think and feel about medical treatments can dramatically influence how our bodies respond.
Simply believing that a treatment will work may trigger the desired effect even if the treatment is inert—a sugar pill, say, or a saline injection. For a wide range of conditions, from depression to Parkinson’s, osteoarthritis and multiple sclerosis, it is clear that the placebo response is far from imaginary. Trials have shown measurable changes such as the release of natural painkillers, altered neuronal firing patterns, lowered blood pressure or heart rate and boosted immune response, all depending on the beliefs of the patient.
It has always been assumed that the placebo effect only works if people are conned into believing that they are getting an actual active drug. But now it seems this may not be true. Belief in the placebo effect itself—rather than a particular drug—might be enough to encourage our bodies to heal.
In the South Pacific there is a place so remote that few people have ever heard of it, let alone seen it: the Trobriand Islands. The Trobriands are located off the east coast of Papua New Guinea, and no white man had set foot there until the late 1700s. During World War I, however, the islands were visited by a man who would one day become a legend in the field of anthropology, Bronislaw Malinowski. Malinowski was a stork of a man—thin, pale, and balding—often seen wearing a pith helmet and socks up to his knees. He had terrible eyesight, was a hypochondriac, an insomniac, and on top of it all had a strong fear of the tropics—in particular, an abhorrence of the heat and the sultriness; to cope, he gave himself injections of arsenic.
Malinowski was, nonetheless, a keen observer of humankind. And as he watched the Trobriand Islanders go about their lives, he noticed something odd. When the islanders went fishing their behavior changed, depending on where they fished. When they fished close to shore—where the waters were calm, the fishing was consistent, and the risk of disaster was low—superstitious behavior among them was nearly nonexistent.
But when the fishermen sailed for open seas—where they were far more vulnerable and their prospects far less certain—their behavior shifted. They became very superstitious, often engaging in elaborate rituals to ensure success. In other words, a low sense of control had produced a high need for superstition. One, in effect, substituted for the other.
This post was originally published at The American Scholar.
Gestures are simple enough. Right? A spontaneous but well-timed wave can emphasize an idea, brush aside a compliment, or point out a barely obscured bird’s nest to an obtuse friend. We use gestures to help our listeners follow along, and we make ourselves look warm and magnanimous in the process.
But on the other hand—and when you’re talking about hands, the puns come furiously—sometimes gestures seem to have nothing to do with edifying or impressing. We gesture on the phone, and in the dark, and when we talk to ourselves. Blind people gesture to other blind people. A growing body of research suggests that we gesture in part for the cognitive benefits that it affords us. Tell us not to use the letter r, or challenge us to adopt a more obscure vocabulary, and our gesture use jumps.
This article was originally published on The Conversation.
Most office workers send dozens of electronic communications to colleagues on any given working day, through email, instant messaging and intranet systems. So many, in fact, that you might not notice subtle changes in the language your fellow employees use.
Instead of ending their email with “See ya!”, they might suddenly offer you “Kind regards.” Instead of talking about “us,” they might refer to themselves more. Would you pick up on it if they did?
These changes are important and could hint at a disgruntled employee about to go rogue. Our findings demonstrate how language may provide an indirect way of identifying employees who are undertaking an insider attack.
My team has tested whether it’s possible to detect insider threats within a company just by looking at how employees communicate with each other. If a person is planning to act maliciously to damage their employer or sneak out commercially sensitive material, the way they interact with their co-workers changes.
By Samantha Joel, University of Toronto
People tend to see their own lifestyle as the ideal one. A single person may question why anyone would choose to shackle themselves to one partner rather than live it up with the single life. Then there is the smug married couple who push other couples to tie the knot as well, so they too can bask in wedded bliss.
This phenomenon is called “normative idealization”, which is the tendency to idealize one’s own lifestyle and believe others would benefit from it too.
Where does such insufferable behavior come from? It has been suggested that people might idealize their own relationship status not because they are actually confident that it is ideal, but rather because they are trying to feel better about their own lives.
Psychologists at Stanford University and the University of Waterloo tested whether people were more judgmental of others’ lifestyles when they felt threatened regarding their own. Their results are published in the journal Psychological Science.
By Wind Goodfriend
This article originally appeared on Dr. Goodfriend’s blog “A Psychologist at the Movies.”
I’m completely obsessed with The Hunger Games. I’m not sure why. Maybe it’s because I have visited North Korea, a real country where millions of people really are dying of hunger. Maybe it’s the ironic meta-experience of watching the movie’s violence on a huge screen, when the movie’s point is that people shouldn’t watch violence on a huge screen. Regardless, The Hunger Games is chock-full of possible psychological analysis. Today I’m focusing on the fascinatingly weird emotions that spark between The Hunger Games’ two main protagonists, Peeta and Katniss.
At home, Katniss has a boyfriend, a young man named Gale. He has rugged good looks, he’s brave, and they are perfectly matched in many ways. Both Katniss and Gale fight against the system in their own way (which is increasingly seen as the trilogy continues), and he is always successful at making Katniss feel comforted in a world with no comforts.
So why does Katniss later fall for Peeta? Peeta certainly has lovable qualities – he’s smart, nurturing, and can frost a cake like nobody’s business – but he and Katniss are not exactly a natural pair. Their personalities clash, their goals in life are different, and Katniss really isn’t interested in any kind of frivolous romance. Sure, in the first movie she is ambivalent about her feelings for Peeta, the kind-hearted boy with a sexy baby-faced look. But psychology would have predicted their blossoming feelings for each other due to their experiences together in the Hunger Games. It’s all because of a phenomenon called misattribution of arousal.
By Matthew D. Lieberman
Comedian Jerry Seinfeld used to tell the following joke: “According to most studies, people’s number one fear is public speaking. Death is number two. Does this sound right? This means to the average person, if you go to a funeral, you’re better off in the casket than doing the eulogy.”
The joke is a riff based on a privately conducted survey of 2,500 people in 1973 in which 41 percent of respondents indicated that they feared public speaking and only 19 percent indicated that they feared death. While this improbable ordering has not been replicated in most other surveys, public speaking is typically high on the list of our deepest fears. “Top ten” lists of our fears usually fall into three categories: things associated with great physical harm or death, the death or loss of loved ones, and speaking in public.
What is curious is that the person speaking probably doesn’t know or care about most of the people there. So why does it matter so much what they think? The answer is that it hurts to be rejected.
Ask yourself what have been the one or two most painful experiences of your life. Did you think of the physical pain of a broken leg or a really bad fall? My guess is that at least one of your most painful experiences involved what we might call social pain—pain of a loved one’s dying, of being dumped by someone you loved, or of experiencing some kind of public humiliation in front of others.
Why do we associate such events with the word pain? When human beings experience threats or damage to their social bonds, the brain responds in much the same way it responds to physical pain.
By Gina Perry
It’s one of the best-known psychology experiments in history – the 1961 tests in which social psychologist Stanley Milgram invited volunteers to take part in a study about memory and learning. Its actual aim, though, was to investigate obedience to authority – and Milgram reported that fully 65 percent of volunteers had repeatedly administered increasing electric shocks to a man they believed to be in severe pain.
In the decades since, the results have been held up as proof of the depths of ordinary people’s depravity in service to an authority figure. At the time, this had deep and resonant connections to the Holocaust and Nazi Germany – so resonant, in fact, that they might have led Milgram to dramatically misrepresent his landmark findings.
Stanley Milgram framed his research from the get-go as both inspired by and an explanation of Nazi behavior. He mentioned the gas chambers in the opening paragraph of his first published article; he strengthened the link and made it more explicit twelve years later in his book, Obedience to Authority.
At the time Milgram’s research was first published, the trial of the high-profile Nazi Adolf Eichmann was still fresh in the public mind. Eichmann had been captured in Buenos Aires and smuggled out of the country to stand trial in Israel. The trial was the first of its kind to be televised.