Archive for February, 2012

What’s Causing Cheerleader Hysteria? Signs of a Struggle Within the Brain

By Vaughan Bell | February 29, 2012 9:50 am

Vaughan Bell is a clinical and research psychologist based at the Institute of Psychiatry, King’s College London. He’s also working on a book about hallucinations due to be out in 2013.

Cheerleaders from a small town in New York state have been making headlines because several of them began to display tics and involuntary movements that have been diagnosed as conversion disorder—a situation that has often been described in the media as "mass hysteria" or a "mystery illness." I can't say for sure whether the diagnosis of conversion disorder is accurate, because I've not been clinically involved with the affected people (and if I had been, I couldn't discuss it due to patient confidentiality). What I can say is that much of the media reporting on conversion disorder, "hysteria," and related concepts has been highly confused.

Hysteria is used in everyday language to mean “panic” but it has a long history as a medical condition, originating from Hippocrates who thought that a whole range of symptoms could be caused by the womb “wandering” around the body. As you might expect, it was traditionally thought of as a female disease until the French neurologist Jean-Martin Charcot shocked the medical world by reporting the first male cases. Although the connection with a wandering womb was comprehensively disproved, doctors were still puzzled by patients who seemed to have neurological disorders without damage to the brain and nervous system. The core definition of hysteria as neurological symptoms without neurological damage remains with us today.

A student of Charcot’s, Sigmund Freud, became curious about the condition and added another element to the definition, which both made his career and became the basis of psychoanalysis itself. As a neurologist, Freud came to believe that mental energy was equivalent to neural energy and, therefore, that our mind obeys something akin to the laws of thermodynamics. The first such law says that energy cannot be created or destroyed, only converted into another form. This is why Freudian psychology is full of mechanical concepts such as “repression” and “conversion” and the idea that all emotional disturbance must be “processed” or “dealt with” (think: a release valve) or else it will express itself in another form (think: a burst or bulging pipe). Many of the theory’s predictions have been disproved, but the theory lives on and, to a great extent, it has become what we unfortunately think of as common sense. Freud applied this same thinking to hysteria, saying that these seemingly neurological symptoms can appear without neurological damage because the unconscious mind is shutting down the body to prevent us from encountering a deep emotional disturbance. A bit like locking the basement in a rushed attempt to deal with a burst pipe—the problem is easier to ignore but not any less serious.

Read More

CATEGORIZED UNDER: Mind & Brain, Top Posts

Is Your Language Making You Broke and Fat? How Language Can Shape Thinking and Behavior (and How It Can’t)

By Julie Sedivy | February 27, 2012 1:53 pm

Julie Sedivy is the lead author of Sold on Language: How Advertisers Talk to You And What This Says About You. She contributes regularly to Psychology Today and Language Log. She is an adjunct professor at the University of Calgary, and can be found on Twitter at @soldonlanguage.

Keith Chen, an economist from Yale, makes a startling claim in an unpublished working paper: people’s fiscal responsibility and healthy lifestyle choices depend in part on the grammar of their language.

Here’s the idea: Languages differ in the devices they offer to speakers who want to talk about the future. For some, like Spanish and Greek, you have to tack on a verb ending that explicitly marks future time—so, in Spanish, you would say escribo for the present tense (I write or I’m writing) and escribiré for the future tense (I will write). But other languages like Mandarin don’t require their verbs to be escorted by grammatical markers that convey future time—time is usually obvious from something else in the context. In Mandarin, you would say the equivalent of I write tomorrow, using the same verb form for both present and future.

Chen’s finding is that if you divide up a large number of the world’s languages into those that require a grammatical marker for future time and those that don’t, you see an interesting correlation: speakers of languages that force grammatical marking of the future have amassed a smaller retirement nest egg, smoke more, exercise less, and are more likely to be obese. Why would this be? The claim is that a sharp grammatical division between the present and future encourages people to conceive of the future as somehow dramatically different from the present, making it easier to put off behaviors that benefit your future self rather than your present self.

Read More

CATEGORIZED UNDER: Mind & Brain, Top Posts

It’s Not Academic: How Publishers Are Squelching Science Communication

By Mike Taylor | February 21, 2012 9:45 am

Mike Taylor is a computer programmer with Index Data and a dinosaur palaeobiologist with the University of Bristol, UK. He blogs about palaeontology and open access, and tweets as @SauropodMike.


Everyone involved in academic publishing knows that it’s in a horrible mess. Authors increasingly see publishers as enemies rather than co-workers. And while publishers’ press releases talk about partnership with authors, unguarded comments on blogs tell a different story, revealing that the hostility is mutual. The Cost Of Knowledge boycott is the most obvious illustration of the fractious situation—more than 6000 researchers have declared that they will not write, edit, or review for Elsevier journals. But how did we get into this unhealthy situation? And how can we get out?

The problems all stem from the arrival of the Internet. Or, rather, the Internet has removed problems that used to exist, and this has caused problems for organisations that existed to solve those problems. Which is a problem for them.

Back in the day, it was hard to distribute the results of research. Authors would submit typewritten manuscripts, and publishers took it from there. Editors would fix errors and hone language. Typesetting was an art, especially when it involved equations or graphs. Making multiple copies was costly and time-consuming. And distributing them around the world needed enormous resources. So the researchers of 20 years ago saw publishers as necessary to their work. It’s no wonder that publishers were generally liked and respected.

But just as long-distance telephone networks made telegrams obsolete, so computers mean that most of what publishers do isn’t needed any more. By submitting machine-readable manuscripts and figures, we eliminate nearly all typesetting work. (In maths and physics, authors submit “camera-ready” copy that requires no further typesetting at all.) Printing is no longer needed. Copying is quick, free, and perfect. And worldwide distribution is also free and instantaneous.

You might think that publishers’ response would be to emphasise and increase their editorial role. Instead, surprisingly, they have shed most editorial work. Copyediting is rare, and when it does exist has a reputation for adding more errors than it removes. Most journals have stringent formatting guidelines that authors must follow in submitted manuscripts. (A colleague of mine recently gave up attempts to submit his manuscript to a particular journal after it was three times rejected without review for trivial formatting and punctuation errors, such as using the wrong kind of dash. Seriously.)*

Read More


Why Do African and English Clicks Sound So Different? It’s All in Your Head

By Julie Sedivy | February 13, 2012 2:05 pm

Julie Sedivy is the lead author of Sold on Language: How Advertisers Talk to You And What This Says About You. She contributes regularly to Psychology Today and Language Log. She is an adjunct professor at the University of Calgary, and can be found on Twitter at @soldonlanguage.

What’s the most exotic, strange-sounding language you’ve ever heard? I recently popped this question to a group of English speakers at a cocktail party. Norwegian and Finnish were strong contenders for the title, but everyone agreed that the prize had to go to African “click languages” like the Bantu language Xhosa (spoken by Nelson Mandela) or the Khoisan language Khoekhoe, spoken in the Kalahari Desert. Conversations in such languages are liberally sprinkled with clicking sounds that are made with a sucking action of the tongue, much like the sounds we might make when spurring on a horse or expressing disapproval. You may have been introduced to one of these click languages spoken by Kalahari Bushmen in the 1980 film The Gods Must Be Crazy. Below is an example, and if you’d like to try your hand at making Xhosa click sounds, you can find a quick lesson here.

To English ears, Xhosa speech often comes across a bit like highly skilled beatboxing, mixing recognizable speech with what sounds like the clacking of objects striking each other. My cocktail party friends wanted to know “How do they click and talk at the same time?” To a native speaker of Xhosa, this is a really weird question, much like asking “How do they make the consonants t or p and talk at the same time?” The late African singer Miriam Makeba, in introducing this 1979 performance of her famous “Click Song,” put it like this: “Everywhere we go, people often ask me ‘How do you make that noise?’ It used to offend me, because it isn’t a noise, it’s my language.”

If clicks do sound like exotic noises to you, it might surprise you to know that there’s nothing especially difficult about making click sounds in speech—they’re easily mastered by toddlers who still struggle making truly difficult sounds like s and z. And it might really surprise you to learn, as found in a recent study by Melissa Wright at Birmingham City University, that as an English speaker, you likely riddle your own speech with click sounds, using them much more frequently and systematically than just the occasional “tsk” of disapproval. If that’s so, why on earth do the African clicks sound so strange to English speakers, to the point of being un-language-like?

Read More

CATEGORIZED UNDER: Mind & Brain, Top Posts

Komen for the Cure’s Biggest Mistake Is About Science, Not Politics

By Christie Aschwanden | February 10, 2012 12:08 pm

Christie Aschwanden is a 2011 National Magazine Award finalist whose work has appeared in The New York Times, Mother Jones, Reader’s Digest, Men’s Journal, and New Scientist. She’s a contributing editor for Runner’s World and writes about medicine for Slate. Follow her on Twitter @cragcrest.

This post originally ran on the blog Last Word on Nothing.

Over the past week or so, critics have found many reasons to fault Susan G. Komen for the Cure. The scrutiny began with the revelation that the group was halting its grants to Planned Parenthood. The decision seemed like a punitive act that would harm low-income women (the money had funded health services like clinical breast exams), and Komen’s public entry into the culture wars came as a shock to supporters who’d viewed the group as nonpartisan. Chatter on the Internet quickly blamed the move on Komen’s new vice president of Public Policy, Karen Handel, a GOP candidate who ran for governor in Georgia on a platform that included a call to defund Planned Parenthood. Komen’s founder, Ambassador Nancy Brinker, attempted to explain away the decision, and on Tuesday, Handel resigned her position.

The Planned Parenthood debacle brought renewed attention to other controversies about Komen from recent years—like its “lawsuits for the cure” program that spent nearly $1 million suing groups like “cupcakes for the cure” and “kites for the cure” over their daring attempts to use the now-trademarked phrase “for the cure.” Critics also pointed to Komen’s relentless marketing of pink ribbon-themed products, including a Komen-branded perfume alleged to contain carcinogens, and pink buckets of fried chicken, a campaign that led one rival breast cancer advocacy group to ask, “what the cluck?”

But these problems are minuscule compared to Komen’s biggest failing—its near outright denial of tumor biology. The pink arrow ads they ran in magazines a few months back provide a prime example. “What’s key to surviving breast cancer? YOU. Get screened now,” the ad says. The takeaway? It’s your responsibility to prevent cancer in your body. The blurb below the big arrow explains why. “Early detection saves lives. The 5-year survival rate for breast cancer when caught early is 98%. When it’s not? 23%.”

If only it were that simple. As I’ve written previously here, the notion that breast cancer is a uniformly progressive disease that starts small and only grows and spreads if you don’t stop it in time is flat out wrong. I call it breast cancer’s false narrative, and it’s a fairy tale that Komen has relentlessly perpetuated.

Read More

CATEGORIZED UNDER: Health & Medicine, Top Posts

I, Robopsychologist, Part 2: Where Human Brains Far Surpass Computers

By Andrea Kuszewski | February 9, 2012 10:08 am

Andrea Kuszewski is a behavior therapist and consultant, science writer, and robopsychologist at Syntience in San Francisco. She is interested in creativity, intelligence, and learning, in both humans and machines. Find her on Twitter at @AndreaKuszewski

Before you read this post, please see “I, Robopsychologist, Part 1: Why Robots Need Psychologists.”

A current trend in AI research involves attempts to replicate a human learning system at the neuronal level—beginning with a single functioning synapse, then an entire neuron, the ultimate goal being a complete replication of the human brain. This is basically the traditional reductionist perspective: break the problem down into small pieces and analyze them, and then build a model of the whole as a combination of many small pieces. There are neuroscientists working on these AI problems—replicating and studying one neuron under one condition—and that is useful for some things. But replicating a single neuron and its function at one snapshot in time does not help us understand or replicate human learning on a broad scale for use in the natural environment.

We are quite some ways off from reaching the goal of building something structurally similar to the human brain, and even further from having one that actually thinks like one. Which leads me to the obvious question: What’s the purpose of pouring all that effort into replicating a human-like brain in a machine, if it doesn’t ultimately function like a real brain?

If we’re trying to create AI that mimics humans, both in behavior and learning, then we need to consider how humans actually learn—specifically, how they learn best—when teaching them. Therefore, it would make sense that you’d want people on your team who are experts in human behavior and learning. So in this way, the field of psychology is pretty important to the successful development of strong AI, or AGI (artificial general intelligence): intelligence systems that think and act the way humans do. (I will be using the term AI, but I am generally referring to strong AI.)

Basing an AI system on the function of a single neuron is like designing an entire highway system based on the function of a car engine, rather than the behavior of a population of cars and their drivers in the context of a city. Psychologists are experts at the context. They study how the brain works in practice—in multiple environments, over variable conditions, and how it develops and changes over a lifespan.

The brain is actually not like a computer; it doesn’t always follow the rules. Sometimes not following the rules is the best course of action, given a specific context. The brain can act in unpredictable, yet ultimately serendipitous ways. Sometimes the brain develops “mental shortcuts,” or automated patterns of behavior, or makes intuitive leaps of reason. Human brain processes often involve error, which also happens to be a very necessary element of creativity, innovation, and human learning in general. Take away the errors, remove serendipitous learning, discount intuition, and you remove any chance of true creative cognition. In essence, when it gets too rule-driven and perfect, it ceases to function like a real human brain.

To get a computer that thinks like a person, we have to consider some of the key strengths of human thinking and use psychology to figure out how to foster similar thinking in computers.

Read More

CATEGORIZED UNDER: Mind & Brain, Technology, Top Posts

I, Robopsychologist, Part 1: Why Robots Need Psychologists

By Andrea Kuszewski | February 7, 2012 1:38 pm

Andrea Kuszewski is a behavior therapist and consultant, science writer, and robopsychologist at Syntience in San Francisco. She is interested in creativity, intelligence, and learning, in both humans and machines. Find her on Twitter at @AndreaKuszewski.

“My brain is not like a computer.”

The day those words were spoken to me marked a significant milestone for both me and the 6-year-old who uttered them. The words themselves may not seem that profound (and some may actually disagree), but that simple sentence represented months of therapy, hours upon hours of teaching, all for the hope that someday, a phrase like that would be spoken at precisely the right time. When he said that to me, he was showing me that the light had been turned on, the fire ignited. And he was letting me know that he realized this fact himself. Why was this a big deal?

I began my career as a behavior therapist, treating children on the autism spectrum. My specialty was Asperger syndrome, or high-functioning autism. This 6-year-old boy, whom I’ll call David, was a client of mine that I’d been treating for about a year at that time. His mom had read a book that had recently come out, The Curious Incident of the Dog in the Night-Time, and told me how much David resembled the main character in the book (who had autism), in regards to his thinking and processing style. The main character said, “My brain is like a computer.”

David heard his mom telling me this, and that quickly became one of his favorite memes. He would say things like “I need input” or “Answer not in the database” or simply “You have reached an error,” when he didn’t know the answer to a question. He truly did think like a computer at that point in time—he memorized questions, formulas, and the subsequent list of acceptable responses. He had developed some extensive social algorithms for human interactions, and when they failed, he went into a complete emotional meltdown.

My job was to change this. To make him less like a computer, to break him out of that rigid mindset. He operated purely on an input-output framework, and if a situation presented itself that wasn’t in the database of his brain, it was rejected, returning a 404 error.

Read More

CATEGORIZED UNDER: Mind & Brain, Technology, Top Posts

Why Do We Want Autistic Kids to Have Superpowers?

By Charlie Jane Anders | February 1, 2012 11:13 am

Charlie Jane Anders is a managing editor. Read her novelette Six Months, Three Days.

Last week saw the debut of Touch, Kiefer Sutherland’s show about a father whose non-neurotypical son turns out to be able to predict future events. This comes on the heels of Alphas, which also gave us Gary, another person who appears to be on the autism spectrum but who has the ability to see hidden energies. And the notion of autistic people as savants or special fixers has been around forever.

Why do we create these fantasies about autistic people having superpowers? We talked to a few experts to try and find out.

In Touch, Sutherland plays Martin Bohm, a man whose wife was killed on 9/11. His “emotionally challenged” son Jake is mute, unable to connect with others, and “shows little emotion.” Jake is obsessed with numbers and discarded cellphones—and then we discover, via Danny Glover’s expert, that Jake can see the threads of invisible energy that bind the entire world together. And Jake sees where they’re broken by our crazy modern world, and needs his dad’s help to fix them.

So basically, it’s New Age spirituality rolled in with “autistic savant” fantasies. Already, it’s gotten some criticism. ThinkProgress’ Alyssa Rosenberg referred to the show as creating “a magical alternative to autism.” Meanwhile, Ellen Seidman at Love That Max was happy to see a special-needs kid on television, but also worried the show would “take the focus away from the amazing reality of our kids.” And she thought maybe some people would think autistic kids really could predict the future. And that could be bad.

Read More

MORE ABOUT: autism, savants, Touch, TV

The Crux

A collection of bright and big ideas about timely and important science from a community of experts.


