Knowing something like the back of your hand supposedly means that you’re very familiar with it. But it could just as well mean that you think it’s wider and shorter than it actually is. As it turns out, our hands aren’t as well known to us as we might imagine. According to Matthew Longo and Patrick Haggard from University College London, we store a mental model of our hands that helps us to know exactly where our limbs are in space. The trouble is that this model is massively distorted.
For any animal, it pays to be able to spot other animals, in order to find mates and companions and to avoid predators. Fortunately, many animals move in a distinctive way – combining great flexibility with the constraints of a rigid skeleton – that sets them apart from inanimate objects like speeding trains or flying balls. The ability to detect this “biological motion” is incredibly important. Chicks have it. Cats have it. Even two-day-old babies have it. But autistic children do not.
Ami Klin from Yale has found that two-year-old children with autism lack the normal preference for natural movements. This difference could explain many of the problems that they face in interacting with other people, because the ability to perceive biological motion – from gestures to facial expressions – is very important for our social lives.
Indeed, the parts of the brain involved in spotting biological motion overlap with those involved in understanding the expressions on people’s faces or noticing where they are looking. Even the sounds of human motion can activate parts of the brain that usually only fire in response to sights.
You can appreciate the importance of biological motion by looking at “point-light” animations, in which a few points of light placed at key joints can simulate a moving animal. Just fifteen dots can simulate a human walker. They can even convey whether someone is male or female, happy or sad, nervous or relaxed. Movement is the key – any single frame looks like a random collection of dots, but once they move in time, the brain amazingly extracts an image from them.
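The idea behind these displays can be sketched in a few lines of code. This is a toy illustration only: real point-light stimuli are built from motion capture of actual actors, whereas the joint paths below are crude sinusoids invented for the example. The point it demonstrates is the one in the text – any single frame is just a cloud of dots, and the structure lives entirely in how the dots move over time.

```python
import numpy as np

# Toy "point-light walker": 15 joints, each reduced to a single dot.
# (Joint paths here are made-up sinusoids, not real motion-capture data.)
N_JOINTS = 15   # head, shoulders, elbows, wrists, hips, knees, ankles...
N_FRAMES = 60   # one gait cycle

def walker_frames(n_joints=N_JOINTS, n_frames=N_FRAMES):
    t = np.linspace(0, 2 * np.pi, n_frames, endpoint=False)
    # A fixed vertical "skeleton" of dots, plus a horizontal swing whose
    # phase alternates to mimic the counter-phase motion of the limbs.
    base_x = np.linspace(-0.5, 0.5, n_joints)
    base_y = np.linspace(1.8, 0.0, n_joints)   # head at top, feet at bottom
    phase = np.where(np.arange(n_joints) % 2 == 0, 0.0, np.pi)
    swing = 0.2 * np.sin(t[:, None] + phase[None, :])   # (frames, joints)
    x = base_x[None, :] + swing
    y = np.tile(base_y, (n_frames, 1))
    return np.stack([x, y], axis=-1)   # shape: (frames, joints, 2)

frames = walker_frames()
print(frames.shape)   # (60, 15, 2): 60 snapshots of 15 dots
```

Plotting any one slice `frames[i]` gives an unremarkable scatter of fifteen points; only when the frames are played in sequence does a figure emerge – which is exactly why a static frame looks like noise.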
But Klin found that autistic children showed no such preference for point-light animations depicting natural movement. Instead, they were attracted to those in which sounds and movements were synchronised – a feature that typically developing children tend to ignore. Again, this may explain why autistic children tend to avoid looking at people’s eyes, preferring instead to focus on their mouths.
Klin created a series of point-light animations using the type of motion-capture technology favoured by special-effects technicians and video-game designers. He filmed adults playing children’s games like “peek-a-boo” and “pat-a-cake” and converted their bodies into mere spots of light. He then showed two animations side-by-side to 76 children, of whom 21 had autism, 16 were developing slowly but were not autistic, and 39 were developing normally.
The nice thing about writing features is that they’re often solicited miles in advance so I can write something, totally forget about it and then be surprised when I open my weekly copy of New Scientist to find my name in a byline.
Following the piece I wrote on FOXP2, this is another of those “the media says this, but here’s what’s really going on” pieces. It’s an exploration of the supposed cultural differences between East Asians and Westerners in the ways they see and think about the world. This is a fairly controversial area and my intention was to shed a bit of light on the debate and go beyond the stereotypes that are so often inaccurately presented by the popular media (and rightfully mocked).
I’d encourage you to read the full piece, but for those who want a taster, the thrust is this:
Psychologists have conducted a wealth of experiments that seem to support popular notions that easterners have a holistic world view… while westerners tend to think more analytically. However, the most recent research suggests that these popular stereotypes are far too simplistic. It is becoming apparent that we are all capable of thinking both holistically and analytically – and we are starting to understand what makes individuals flip between the two modes of thought.
A seemingly endless array of psychological experiments has apparently reinforced the idea of the analytic westerner, who focuses on prominent objects and uses hard logic, and the holistic easterner, who considers the object’s context and pays special attention to its relationships with its environment. This distinction seems to apply to areas as diverse as perception, attentional biases, use of logic, views of causality and more. Some have suggested that these differences are the result of historical cultural factors harking all the way back to the relatively independent lives of the ancient Greeks versus the more connected existences of the ancient Chinese.
But it seems that it’s a little more complicated than that.
Many of these conclusions are based on limited evidence from a small number of countries, particularly the US, Canada, Japan and China. Factor in people from Europe and other parts of the world and you see more of a continuum rather than a two-sided distinction. And you can find the same distinctions between analytic and holistic thought if you look at a local level rather than focusing on broad sweeps of history or geography.
It’s also possible to evoke one mindset or another.
For example, psychologists have “primed” east Asian volunteers to adopt an individualistic mode of thought simply by getting them to imagine playing singles tennis, circling single-person pronouns or unscrambling sentences containing words such as “unique”, “independence” and “solitude”. In many of the experiments volunteers from a single cultural background – be it eastern or western – show differences in behaviour as large as those you normally get when comparing people from traditionally collectivist and individualist cultures…
What is clear is that the minds of east Asians, Americans or any other group are not wired differently. We are all capable of both analytic and holistic thought. “Different societies make one option seem to make the most sense at any given moment,” says Oyserman. But instead of dividing the world along cultural lines, we might be better off recognising and cultivating our cognitive flexibility.
Obviously, this is a controversial area and it was probably the most difficult thing I’ve had to write yet. I’m pleased with the result though, and Vaughan at Mind Hacks rates it, which is pretty much the highest commendation I could hope for with a neuroscience/psychology piece!
The video above seems completely unremarkable at first – man walks down a corridor, navigating his way around easily visible and conspicuous obstacles. But it’s far from an easy task; in fact, it should be nigh-impossible. The man, known only as TN, is totally blind.
His inability to see stems from a failure in his brain rather than his eyes. Those work normally, but his visual cortex – the part of the brain that processes visual information – is inactive. As a result, TN has no conscious awareness of any ability to see, and in his everyday life he behaves like a blind person, using a stick to find his way around. Nevertheless, he can clearly make his way through a gauntlet of obstacles without making a single mistake.
TN was a doctor before two successive strokes destroyed his ability to see. The first one severely damaged the occipital lobe on the left side of his brain, which contains the visual cortex. About a month later, a second stroke took out the equivalent area on the right hemisphere. TN is one-of-a-kind, the only known patient with damage like this in the entire medical literature. The fibres that connect the occipital lobes on the right and left halves of the brain have also been severely damaged and tests reveal that no blood flows between these disconnected areas.
It goes without saying that we are capable of noticing changes to our bodies, but it’s perhaps less obvious that the way we perceive our bodies can affect them physically. The two-way nature of this link, between physicality and perception, has been dramatically demonstrated by a new study of people with chronic hand pain. Lorimer Moseley at the University of Oxford found that he could control the severity of pain and swelling in an aching hand by making it seem larger or smaller.
Moseley recruited 10 patients with chronic pain in one of their arms and asked them to perform a series of ten hand movements at a set intensity and pace. The volunteers had to watch their arms as they went through the motions. On some trials, they did so unaided, but on others, they viewed their arms through a pair of binoculars that doubled their size, a pair of clear-glass binoculars that did not magnify at all, or a pair of inverted binoculars that shrank the image.
On each trial, Moseley asked the recruits to rate their pain on a visual sliding scale. He found that they were in greater pain after they had moved their arms – no surprise there. But the amount of pain they felt depended on how large their arm appeared to them. They experienced the greatest degree of extra pain when they saw magnified views of their arms, and that pain took the longest to return to normal. Perhaps more surprisingly, the “minified” images actually evoked less pain than unmagnified views.
Modern brain-scanning technology allows us to measure a person’s brain activity on the fly and visualise the various parts of their brain as they switch on and off. But imagine being able to literally see what someone else is thinking – to be able to convert measurements of brain activity into actual images.
It’s a scene reminiscent of the ‘operators’ in The Matrix, but this technology may soon cross from the realm of science fiction into that of science fact. Kendrick Kay and colleagues from the University of California, Berkeley have created a decoder that can accurately work out which one image, from a large set, an observer is looking at, based solely on a scan of their brain activity.
The machine is still a while away from being a full-blown brain-reader. Rather than reconstructing what the onlooker is viewing from scratch, it can only select the most likely fit from a set of possible images. Even so, it’s no small feat, especially since the set of possible pictures is both very large and completely new to the viewer. And while previous similar studies used very simple images like gratings, Kay’s decoder has the ability to recognise actual photos.
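The identification logic described above – pick the most likely image from a candidate set rather than reconstruct the picture from scratch – can be sketched in miniature. To be clear, this is an illustrative toy, not the authors’ actual method: their decoder used a receptive-field model fitted to fMRI data, whereas the “encoder” below is just a random linear map over made-up image features. What it shares with the real approach is the selection step: predict the brain response each candidate image should evoke, then choose the candidate whose prediction best matches the measured response.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in problem sizes: 120 candidate images, 50 image features,
# 200 "voxels" of brain activity. All numbers are invented for the sketch.
n_images, n_features, n_voxels = 120, 50, 200

images = rng.normal(size=(n_images, n_features))    # fake image features
encoder = rng.normal(size=(n_features, n_voxels))   # fake response model

# Predicted brain response for every candidate image.
predicted = images @ encoder

def identify(measured, predicted):
    """Index of the candidate whose predicted response correlates
    best with the measured response."""
    # z-score so the match is a correlation, not raw overlap
    p = (predicted - predicted.mean(1, keepdims=True)) / predicted.std(1, keepdims=True)
    m = (measured - measured.mean()) / measured.std()
    return int(np.argmax(p @ m))

# Simulate a scan taken while viewing image 7: its predicted
# response plus measurement noise.
measured = predicted[7] + 0.5 * rng.normal(size=n_voxels)
print(identify(measured, predicted))   # with modest noise, recovers 7
```

The design point this captures is why the task is tractable even for a large, never-before-seen image set: the decoder never has to invent pixels, only to rank candidates by how well each one’s predicted activity pattern matches the scan.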