We make a huge number of decisions every day. When it comes to eating alone, we make around 200 more decisions a day than we're consciously aware of. How is this possible? Because, as Daniel Kahneman has explained, while we'd like to think our decisions are rational, many are in fact driven by gut feeling and intuition. The ability to reach a decision based on what we know and what we expect is an inherently human characteristic.
The problem we face now is that we have too many decisions to make every day, leading to decision fatigue – we find the act of making our own decisions exhausting, more so than simply deliberating over options presented to us, or being told by others what to do.
Why not allow technology to ease the burden of decision-making? The latest smart technologies are designed to monitor and learn from our behavior, physical performance, work productivity levels and energy use. This is what has been called Era Three of Automation – when machine intelligence becomes faster and more reliable than humans at making decisions.
One day in October 2010, at a school in the Gaibandha district of northwest Bangladesh, a pupil noticed that the label on a packet of crackers she was eating had darkened. Fearing the crackers were contaminated – “the devil’s deed”, as she put it – she almost immediately fell ill, complaining of heartburn, headache and severe abdominal pain.
The condition quickly spread among her fellow pupils, and later to other schools in the area. Yet toxicologists could trace no contaminant, and all those affected were quickly discharged from the hospital after doctors found no trace of illness. The following week, investigators diagnosed “mass sociogenic illness,” otherwise known as mass hysteria. The children, it seemed, had developed their symptoms simply because they had seen their classmates succumb.
Mass hysteria is thought to be an extreme example of a phenomenon that affects us all, day to day: emotional contagion. Short of living in hermitic isolation, it is hard to escape it; we are vulnerable to the moods and behaviors of others to an extraordinary degree.
Emotional contagion caused the failure of successive banks at the start of the Great Depression in the 1930s, when investors suffered a collective loss of faith in the ability of these institutions to pay out. It is the force behind fuel crises, health scares and the spread of public grief (for example in Britain after the death of Princess Diana in August 1997). It is the reason why you are more likely to be obese if you have obese friends, and depressed if you are living with a depressed roommate.
But emotional contagion is not all bad – far from it. The mechanism behind it – our tendency to mimic each other’s expressions and behaviors – is crucial to social interaction. Without it, anything beyond superficial communication would be impossible.
“If at first the idea is not absurd, then there is no hope for it,” Albert Einstein reportedly said. I’d like to broaden the definition of addiction—and also retire the scientific idea that all addictions are pathological and harmful.
Since the beginning of formal diagnostics more than fifty years ago, the compulsive pursuit of gambling, food, and sex (known as non-substance rewards) has not been regarded as an addiction. Only abuse of alcohol, opioids, cocaine, amphetamines, cannabis, heroin, and nicotine has been formally regarded as addiction. This categorization rests largely on the fact that substances activate basic “reward pathways” in the brain associated with craving and obsession and produce pathological behaviors. Psychiatrists work within this world of psychopathology—that which is abnormal and makes you ill.
This article was originally published on The Conversation.
In 1959, John Howard Griffin, a white American writer, underwent medical treatments to change his skin appearance and present himself as a black man. He then traveled through the segregated US south to experience the racism endured daily by millions of black Americans. This unparalleled life experiment provided invaluable insights into how the change in Griffin’s own skin color triggered negative and racist behaviors from his fellow Americans.
But what about the changes that Griffin himself might have experienced? What does it mean to become someone else? How does this affect one’s self? And how can this affect one’s stereotypes, beliefs and racial attitudes? Those were the key questions that my colleagues and I set out to answer in a series of psychological experiments that looked at the link between our bodies and our sense of who we are.
When considering extreme environments it is easy to make assumptions about personality, which on closer examination do not stand up to scrutiny. Take, for example, one of the best-researched personality dimensions: introversion-extraversion. Extraversion as a trait appears in all established psychological models of personality, and there is considerable evidence that it has a biological basis. The concepts of introversion and extraversion long ago escaped the confines of academic psychology and are widely used in everyday conversation, albeit in ways that do not always reflect the psychological definitions.
Broadly speaking, individuals who score highly on measures of extraversion tend to seek stimulation, whereas those who score low tend to avoid it. When asked to describe a typical extravert, most people tend to think of the lively ‘party animal,’ equating extraversion with a preference for social interactions. However, individuals who score highly for extraversion seek more than just social stimulation: they also tend to gravitate toward other stimulating situations, including active leisure and work pursuits, travel, sex, and even celebrity. Introverts, on the other hand, have a generally lower affinity for stimulation.
They find too much stimulation, of whatever type, draining rather than energizing. Contrary to popular belief, introverts are not necessarily shy or fearful about social situations, unless they also score highly on measures of social anxiety and neuroticism.
On this basis, one might assume that extraverts would be drawn to extreme environments, where they could satisfy their desire for stimulating situations, whereas introverts would find them unattractive. And yet, extreme environments may also expose people to monotony and solitude — experiences that extraverts would find aversive, but which are tolerated or even enjoyed by well-balanced introverts. The point here is that simple assumptions about broad personality traits are unlikely to provide good explanations of why people engage in extreme activities.
This article was originally published on The Conversation.
International airports are busy places. Nearly 140,000 passengers pass through New York’s JFK Airport every day. The internal security of the country depends on effective airport checks.
All departing passengers pass through a series of security procedures before boarding their plane. One such procedure is a short, scripted interview in which security personnel must make decisions about passenger risk by looking for behavioral indicators of deception.
These are referred to as “suspicious signs”: signs of nervousness, aggression and an unusual interest in security procedures, for example. However, this approach has never been empirically validated and its continued use is criticized for being based on outdated, unreliable perceptions of how people behave when being deceptive.
Despite these concerns, the suspicious signs approach continues to dominate security screening: the US government spends $200 million yearly on behavior-detection officers, who are tasked with spotting suspicious signs. This is a waste of money.
We tend to think of medicine as being all about pills and potions recommended to us by another person—a doctor. But science is starting to reveal that for many conditions another ingredient could be critical to the success of these drugs, or perhaps even replace them. That ingredient is nothing more than your own mind.
Here are six ways to raid your built-in medicine cabinet.
“I talk to my pills,” says Dan Moerman, an anthropologist at the University of Michigan-Dearborn. “I say, ‘Hey guys, I know you’re going to do a terrific job.’”
That might sound eccentric, but based on what we’ve learned about the placebo effect, there is good reason to think that talking to your pills really can make them do a terrific job. The way we think and feel about medical treatments can dramatically influence how our bodies respond.
Simply believing that a treatment will work may trigger the desired effect even if the treatment is inert—a sugar pill, say, or a saline injection. For a wide range of conditions, from depression to Parkinson’s, osteoarthritis and multiple sclerosis, it is clear that the placebo response is far from imaginary. Trials have shown measurable changes such as the release of natural painkillers, altered neuronal firing patterns, lowered blood pressure or heart rate and boosted immune response, all depending on the beliefs of the patient.
It has always been assumed that the placebo effect only works if people are conned into believing that they are getting an actual active drug. But now it seems this may not be true. Belief in the placebo effect itself—rather than a particular drug—might be enough to encourage our bodies to heal.
In the South Pacific there is a place so remote that few people have ever heard of it, let alone seen it: the Trobriand Islands. The Trobriands are located off the east coast of Papua New Guinea, and no white man had set foot there until the late 1700s. During World War I, however, the islands were visited by a man who would one day become a legend in the field of anthropology, Bronislaw Malinowski. Malinowski was a stork of a man—thin, pale, and balding—often seen wearing a pith helmet and socks up to his knees. He had terrible eyesight, was a hypochondriac, an insomniac, and on top of it all had a strong fear of the tropics—in particular, an abhorrence of the heat and the sultriness; to cope, he gave himself injections of arsenic.
Malinowski was, nonetheless, a keen observer of humankind. And as he watched the Trobriand Islanders go about their lives, he noticed something odd. When the islanders went fishing their behavior changed, depending on where they fished. When they fished close to shore—where the waters were calm, the fishing was consistent, and the risk of disaster was low—superstitious behavior among them was nearly nonexistent.
But when the fishermen sailed for open seas—where they were far more vulnerable and their prospects far less certain—their behavior shifted. They became very superstitious, often engaging in elaborate rituals to ensure success. In other words, a low sense of control had produced a high need for superstition. One, in effect, substituted for the other.
This post was originally published at The American Scholar.
Gestures are simple enough. Right? A spontaneous but well-timed wave can emphasize an idea, brush aside a compliment, or point out a barely obscured bird’s nest to an obtuse friend. We use gestures to help our listeners follow along, and we make ourselves look warm and magnanimous in the process.
But on the other hand—and when you’re talking about hands, the puns come furiously—sometimes gestures seem to have nothing to do with edifying or impressing. We gesture on the phone, and in the dark, and when we talk to ourselves. Blind people gesture to other blind people. A growing body of research suggests that we gesture in part for the cognitive benefits that it affords us. Tell us not to use the letter r, or challenge us to adopt a more obscure vocabulary, and our gesture use jumps.
This article was originally published on The Conversation.
Most office workers send dozens of electronic communications to colleagues in any given working day, through email, instant messaging and intranet systems. So many, in fact, that you might not notice subtle changes in the language your fellow employees use.
Instead of ending their email with “See ya!”, they might suddenly offer you “Kind regards.” Instead of talking about “us,” they might refer to themselves more. Would you pick up on it if they did?
These changes are important and could hint at a disgruntled employee about to go rogue. Our findings demonstrate how language may provide an indirect way of identifying employees who are undertaking an insider attack.
My team has tested whether it’s possible to detect insider threats within a company just by looking at how employees communicate with each other. If a person is planning to act maliciously to damage their employer or sneak out commercially sensitive material, the way they interact with their co-workers changes.
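One of the linguistic shifts mentioned above — an employee beginning to refer to themselves more, saying "I" and "my" rather than "us" — can be illustrated with a toy script. This is a hypothetical sketch, not the research team's actual method: the pronoun list and the message samples are assumptions made purely for illustration.

```python
import re
from collections import Counter

# First-person singular pronouns, used here as a crude proxy for
# self-focused language (an assumption for this illustration).
FIRST_PERSON = {"i", "me", "my", "mine", "myself"}

def self_focus_rate(messages):
    """Return the fraction of words that are first-person singular pronouns."""
    words = re.findall(r"[a-z']+", " ".join(messages).lower())
    if not words:
        return 0.0
    counts = Counter(words)
    return sum(counts[w] for w in FIRST_PERSON) / len(words)

def self_focus_shift(baseline, recent):
    """Positive result = more self-focused language in recent messages."""
    return self_focus_rate(recent) - self_focus_rate(baseline)

# Invented sample messages: a team-oriented baseline vs. a recent batch.
baseline = [
    "Thanks team, let's sync on our roadmap.",
    "See ya! We can fix this together.",
]
recent = [
    "I need access to the archive for my report.",
    "I will handle my files myself.",
]

print(self_focus_shift(baseline, recent))  # positive: language has shifted
```

A real system would of course look at many more signals than pronoun frequency, and a single positive shift proves nothing on its own; the point is only that such changes are measurable at all.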