Category: Perception

The size of your brain’s visual centre affects how you see the world

By Ed Yong | December 6, 2010 9:00 am

[Image: the Ebbinghaus illusion]

Look at the image above. Which of the central orange circles looks bigger? Most people would say the one on the right – the one surrounded by the smaller ‘petals’. In truth, the central circles are exactly the same size. This is the Ebbinghaus illusion, named after the German psychologist Hermann Ebbinghaus. It has been around for over a century, but it continues to expand our understanding of the brain.
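(If you want to check for yourself, here’s a minimal Python sketch that redraws the figure with matplotlib. This is my own illustration, not anything from the study – the petal counts, sizes and spacings are arbitrary choices. The point is simply that both central circles are drawn with exactly the same radius, yet the one ringed by small petals looks larger.)

```python
# A toy redraw of the Ebbinghaus figure (all sizes here are my own
# arbitrary choices, not taken from the study).
import numpy as np
import matplotlib.pyplot as plt

def draw_ebbinghaus(ax, centre, petal_radius, petal_distance, n_petals,
                    central_radius=1.0):
    """Draw one central circle ringed by evenly spaced 'petal' circles."""
    cx, cy = centre
    ax.add_patch(plt.Circle((cx, cy), central_radius, color="darkorange"))
    for angle in np.linspace(0, 2 * np.pi, n_petals, endpoint=False):
        ax.add_patch(plt.Circle((cx + petal_distance * np.cos(angle),
                                 cy + petal_distance * np.sin(angle)),
                                petal_radius, color="grey"))

fig, ax = plt.subplots(figsize=(9, 5))
# Left: ringed by large petals. Right: ringed by small petals.
# Both central circles share central_radius=1.0, yet the right one looks bigger.
draw_ebbinghaus(ax, (-5, 0), petal_radius=2.0, petal_distance=5.0, n_petals=6)
draw_ebbinghaus(ax, (6, 0), petal_radius=0.5, petal_distance=2.0, n_petals=8)
ax.set_xlim(-13, 10)
ax.set_ylim(-7.5, 7.5)
ax.set_aspect("equal")
ax.axis("off")
plt.show()
```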

Samuel Schwarzkopf from University College London has just discovered that the size of one particular part of the brain, known as primary visual cortex or V1, predicts how likely we are to fall for the illusion. V1 sits at the very back of our brains and processes the visual information that we get from our eyes. It’s extremely variable; one person’s V1 might have three times the surface area of another person’s. While many scientific studies try to average out those differences, Schwarzkopf wanted to explore them.

Read More

MORE ABOUT: Ebbinghaus, illusion, Ponzo, V1

Newborn babies have a preference for the way living things move

By Ed Yong | October 7, 2010 10:00 am

[Image: a running rabbit]

This is an old article, reposted from the original WordPress incarnation of Not Exactly Rocket Science. I’m travelling around at the moment so the next few weeks will have some classic pieces and a few new ones I prepared earlier.

From an animal’s point of view, the most important things in the world around it are arguably other animals. They provide mates, food, danger and companionship, so as an animal gazes upon its surroundings, it needs to be able to accurately discern the movements of other animals. Humans are no exception, and new research shows that we are so attuned to biological motion that babies just two days old are drawn to extremely simple, abstract animations of walking animals.

Animals move with a restrained fluidity that makes them stand out from inanimate objects. Compared to a speeding train or a falling pencil, animals show far greater flexibility of movement but most are nonetheless constrained by some form of rigid skeleton. That gives our visual system something to latch on to.

Read More

Your brain sees your hands as short and fat

By Ed Yong | June 14, 2010 3:00 pm

[Image: hands]

Knowing something like the back of your hand supposedly means that you’re very familiar with it. But it could just as well mean that you think it’s wider and shorter than it actually is. As it turns out, our hands aren’t as well known to us as we might imagine. According to Matthew Longo and Patrick Haggard from University College London, we store a mental model of our hands that helps us to know exactly where our limbs are in space. The trouble is that this model is massively distorted.

Read More

Time doesn’t actually slow down in a crisis

By Ed Yong | May 3, 2010 9:00 am

I’m on holiday this week so I’ll be reposting a few articles from the old WordPress incarnation of Not Exactly Rocket Science. Stay with it though – these are five good’uns.

In The Matrix, when an agent first shoots at Neo, his perception of time slows down, allowing him to see and avoid oncoming bullets. In the real world, almost all of us have experienced moments of crisis when time seems to slow to a crawl, be it a crashing car, an incoming fist, or a falling valuable.

Now, a trio of scientists has shown that this effect is an illusion. When danger looms, we don’t actually experience events in slow motion. Instead, our brains just remember time moving more slowly after the event has passed.

Chess Stetson, Matthew Fiesta and David Eagleman demonstrated the illusion by putting a group of volunteers through 150 terrifying feet of free-fall. They wanted to see if the fearful plummet allowed the volunteers to complete a task that would only be possible if time really did slow down before their eyes.

Read More

How our skin helps us to listen

By Ed Yong | November 25, 2009 1:00 pm

What part of the body do you listen with? The ear is the obvious answer, but it’s only part of the story – your skin is also involved. When we listen to someone else speaking, our brain combines the sounds that our ears pick up with the sight of the speaker’s lips and face, and subtle changes in air movements over our skin. Only by melding our senses of hearing, vision and touch do we get a full impression of what we’re listening to. 

When we speak, many of the sounds we make (such as the English “p” or “t”) involve small puffs of air. These are known as “aspirations”. We can’t hear them, but they can greatly affect the sounds we perceive. For example, syllables like “ba” and “da” are simply versions of “pa” and “ta” without the aspirated puffs. 

If you looked at the airflow produced by a puff, you’d see a distinctive pattern – a burst of high pressure at the start, followed by a short round of turbulence. This pressure signature is readily detected by our skin, and it can be easily faked by clever researchers like Bryan Gick and Donald Derrick from the University of British Columbia.

Gick and Derrick used an air compressor to blow small puffs of air, like those made during aspirated speech, onto the skin of blindfolded volunteers. At the same time, the volunteers heard recordings of different syllables – “pa”, “ba”, “ta” or “da” – all of which had been standardised so that they lasted the same time, were equally loud, and had the same frequency.

Gick and Derrick found that the fake puffs of air could fool the volunteers into “hearing” a different syllable to the one that was actually played. They were more likely to mishear “ba” as “pa”, and to think that a “da” was a “ta”. They were also more likely to correctly identify “pa” and “ta” sounds when they were paired with the inaudible puffs.

This deceptively simple experiment shows that our brain considers the tactile information picked up from our skin when it deciphers the sounds we’re listening to. Even parts of our body that are relatively insensitive to touch can provide valuable clues. Gick and Derrick found that their fake air puffs worked if they were blown onto the sensitive skin on the back of the hand, which often picks up air currents that we ourselves create when we speak. But the trick also worked on the back of the neck, which is much less sensitive and unaffected by our own spoken breaths.

While many studies have shown that we hear speech more accurately when it’s paired with visual info from a speaker’s face, this study clearly shows that touch is important too. In some ways, the integration of hearing and touch isn’t surprising – both senses involve detecting the movement of molecules vibrating in the world around us. Gick and Derrick suggest that their result might prove useful in designing aids for people who are hard of hearing.

Reference: Nature doi:10.1038/nature08572


Read More

Infants match human words to human faces and monkey calls to monkey faces (but not quacks to duck faces)

By Ed Yong | October 19, 2009 5:00 pm

[Image: human, monkey and duck faces]

From a young age, children learn about the sounds that animals make. But even without teaching aids like Old MacDonald’s farm, it turns out that very young babies have an intuitive understanding of the noises that humans, and even monkeys, ought to make. Athena Vouloumanos from New York University found that at just five months of age, infants match human speech to human faces and monkey calls to monkey faces. Amazingly, this wasn’t a question of experience – the same infants failed to match quacks to duck faces, even though they had more experience with ducks than monkeys.

Vouloumanos worked with a dozen five-month-old infants from English- and French-speaking homes. She found that they spent longer looking at human faces when they were paired with spoken words than with monkey or duck calls. They clearly expect human faces, and not animal ones, to produce speech, even when the words in question came from a language – Japanese – that they were unfamiliar with. However, the fact that it was speech was essential; human laughter failed to grab their attention in the same way, and they didn’t show any biases towards either human or monkey faces.

More surprisingly, the babies also understood the types of calls that monkeys ought to make. They spent more time staring at monkey faces that were paired with monkey calls, than those paired with human words or with duck quacks.

[Image: monkey, human and duck faces]

That’s certainly unexpected. These babies had no experience with the sight or sounds of rhesus monkeys but they ‘got’ that monkey calls most likely come from monkey faces. Similarly, they appreciated that a human face is an unlikely source of a monkey call even though they could hardly have experienced every possible sound that the human mouth can make.

Perhaps they were just lumping all non-human calls and faces into one category? That can’t be true, for then they would have matched the monkey faces to duck calls as readily as to monkey ones. Perhaps they matched monkeys to their calls because they ruled out a link to more familiar human or duck sounds? That’s unlikely too, for the infants failed to match duck faces to quacks!

Instead, Vouloumanos believes that babies have an innate ability to predict the types of noises that come from certain faces, and vice versa. Anatomy shapes the sound of a call into an audio signature that’s specific to each species. A human vocal tract can’t produce the same repertoire of noises as a monkey’s, and vice versa. Monkeys can produce a wider range of frequencies than humans can, but thanks to innovations in the shape of our mouth and tongue, we’re better at subtly altering the sounds we make within our narrower range.

So the very shape of the face can provide clues about the noises likely to emerge from it, and previous studies have found that infants are very sensitive to these cues. This may also explain why they failed to match duck faces with their quacks – their visages are so vastly different to the basic primate design that they might not even be registered as faces, let alone as potential clues about sound.

If that’s not enough, Vouloumanos has a second possible explanation – perhaps babies use their knowledge of human sounds to set up a sort of “similarity gradient”. Simply put, monkey faces are sort of like human faces but noticeably different, so monkey calls should be sort of like human calls but noticeably different.

Either way, it’s clear that very young babies are remarkably sensitive to the sounds of their own species, particularly those of speech. The five-month mark seems to be an important turning point, not just for this ability but for many others. By five months, they can already match faces with voices on the basis of age or emotion, but only after that does their ear for voices truly develop, allowing them to tune in to specific voices, or to the distinct sounds of their native language.

Reference: PNAS doi:10.1073/pnas.0906049106


Read More

MORE ABOUT: calls, duck, faces, human, monkey, quack

Monkeys fall into the uncanny valley

By Ed Yong | October 13, 2009 9:30 am

In the movie industry, special effects and computer-generated imagery are becoming better and more realistic. As they improve, you’d expect moviegoers to more readily accept virtual worlds and characters, but that’s not always the case. It turns out that people are incredibly put off by images or animations of humans that strive for realism but aren’t quite there yet.

A character like Wall-E is lovable because he’s clearly not human but has many human-like qualities and expressions. In contrast, the more realistic CG characters of Beowulf or The Polar Express are far closer to reality but somehow less appealing, because they haven’t quite crossed the finish line. This phenomenon is called the “uncanny valley” – it’s the point where, just before robots or animations look completely real, they trigger revulsion because they almost look human but are off in subtle yet important ways.
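(For the graphically minded, here’s a purely schematic Python sketch of the valley as roboticist Masahiro Mori originally imagined it. The equation for the curve is a hypothetical shape of my own choosing, not data from any study – it just shows affinity climbing with human-likeness before plunging just short of full realism.)

```python
# A schematic uncanny-valley curve; the equation is a made-up shape,
# chosen purely to illustrate the idea.
import numpy as np
import matplotlib.pyplot as plt

likeness = np.linspace(0, 1, 500)  # 0 = clearly artificial, 1 = real
# Affinity rises with human-likeness, then dips sharply near full realism.
affinity = likeness - 1.5 * np.exp(-((likeness - 0.85) / 0.06) ** 2)

plt.plot(likeness, affinity)
plt.axhline(0, color="grey", linewidth=0.5)
plt.annotate("the uncanny valley", xy=(0.85, affinity.min()),
             xytext=(0.3, -0.45), arrowprops=dict(arrowstyle="->"))
plt.xlabel("Human likeness")
plt.ylabel("Affinity (schematic)")
plt.show()
```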

This isn’t just a quirk of humans – monkeys too fall into the uncanny valley. Shawn Steckenfinger and Asif Ghazanfar from Princeton University showed five macaques a set of real and computer-generated monkey faces, pulling a few different expressions. They found that all five macaques consistently reacted more negatively to realistic virtual monkey faces than to either real or completely unrealistic ones. They spent much less time looking at the realistic avatars than at the other two options, particularly if the faces were moving rather than static.

Steckenfinger and Ghazanfar say that the simplest explanation is that the monkeys “are also experiencing at least some of the same emotions” as humans. But for the moment, we can’t say if the monkeys felt the queasy disgust that humans do when we fall into the valley, or whether they were simply more attracted to the other two options.

The best way to do that would be to repeat these experiments while looking for possible signs of unease – sweaty skin, dilated pupils or clenched facial muscles, for example. Steckenfinger and Ghazanfar also want to see whether combining fake faces with real voices would put the macaques at greater or lesser ease.

For the moment, the duo hopes that macaques will be able to help us understand why the uncanny valley exists, or what goes on in the brains of people who feel its characteristic revulsion. For a start, it’s not about movement. The realistic virtual faces were only marginally less attractive to the monkeys when they moved compared to when they were static.

Steckenfinger and Ghazanfar’s favoured idea is that the part of our brains (and those of macaques) responsible for dealing with faces fires up when it sees realistic animations. But the features of these avatars fail to meet the expectations that we’ve built up through a lifetime of experience. Perhaps their skin colour is slightly off, making them look anaemic. Perhaps their skin is too smooth, or their features out of proportion. Whatever the clash, the idea is that the increased realism lowers our tolerance for these anomalies.

Reference: PNAS doi:10.1073/pnas.0910063106



MORE ABOUT: Monkeys, uncanny valley

Itch-specific neurons discovered in mice

By Ed Yong | August 7, 2009 12:00 pm

Itching is an unpleasant sensation that drives us to scratch reflexively in an effort to remove harmful substances from our body. It’s also how I get most of my physical activity for the day. Not being able to scratch an itch is intensely frustrating and many scientists have long described itch as the milder cousin of pain.

But a team of scientists from Washington University’s Pain Center (I wonder if they have problems with recruitment) have discovered a group of neurons in the spinal cords of mice that are specific to itch but not to pain. Remove them, and mice hardly ever scratch when they’re exposed to itchy chemicals, even though they can still feel pain as well as any normal mouse.

The discovery settles a long-standing debate about whether itch and pain are governed by separate neural systems. It confirms the so-called “labelled line” theory, which says that the two sensations are carried by separate groups of nerve cells.

Two years ago, Yan-Gang Sun and Zhong-Qiu Zhao discovered an itch-specific gene called GRPR that is activated in a small group of neurons in the spinal cords of mice. Without a working copy of this gene, mice became immune to itching but they still responded normally to heat, pressure, inflammation and the noxious flavour of mustard. The duo even managed to stop mice from scratching by injecting them with a chemical that blocks GRPR.

But neurons that activate an itch-specific gene aren’t necessarily restricted to conveying the sensations of itching – they could also be involved in pain. To test that idea, Sun and Zhao injected mice with a nerve poison called bombesin-saporin, which specifically kills neurons that use GRPR. Without these neurons, the mice barely reacted to a wide variety of substances that cause normal mice to scratch furiously, even though their movements were generally unaffected. Just compare the two mice in the video below – both have been injected with an itching agent but the one on the left lacks any working GRPR neurons.

However, even bereft of GRPR neurons, the mice felt pain just as any other mouse would, reacting normally to heat, pressure and noxious chemicals like mustard oil and capsaicin, the active component of chillies. Clearly, these neurons are specific to itch.

Read More

MORE ABOUT: GRPR, itch, mice, neurons, scratch

Fruit flies have a taste for fizzy drinks

By Ed Yong | July 29, 2009 12:00 pm

This article is reposted from the old WordPress incarnation of Not Exactly Rocket Science.

Fizzy drinks like Perrier and Coca-Cola are targeted at a huge range of social groups, but if fruit flies had any capital to spend, they’d be at the top of the list. Unlike posh diners or hyperactive kids, flies have taste sensors that are specially tuned to the flavour of carbonated water.

Humans can pick up five basic tastes – sweet, salty, sour, bitter and umami (savoury). But other animals, with very different diets, can probably expand on this set. And what better place to start looking for these unusual senses than the fruit fly Drosophila, a firm favourite of geneticists worldwide, and an animal with very different taste in food to our own.

Drosophila‘s tongue contains structures that are the equivalent of our own taste buds. They are loaded with taste-sensitive neurons and the activity of specific genes gives these neurons the ability to recognise different tastes.

Other researchers have already isolated the genes that allow Drosophila to tell sweet from bitter. But when Walter Fischler found a group of taste cells that didn’t have either of these genes and connected to a different part of the fly’s brain, he knew he was on to something new.

Read More

Virtual reality illusions produce out-of-body experiences in the lab

By Ed Yong | July 28, 2009 10:30 am

This article is reposted from the old WordPress incarnation of Not Exactly Rocket Science. 

The idea of an out-of-body experience seems strange and hokey – certainly not one that would grace one of the world’s top scientific journals. So it may seem surprising that two years ago, Science published not one, but two papers that considered the subject through the lens of scientific scrutiny.

Out-of-body experiences are rare and can be caused by epileptic fits, neurological conditions such as strokes, and heavy drug abuse. Clearly, they are triggered when something goes wrong in our brains. And as usual for the brain, something going wrong can tell us a lot about what happens the rest of the time.

Simply put, if we very rarely have an out-of-body experience, why is it that for the most part we have ‘in-body’ experiences? It’s such a fundamental part of our lives that we often take it for granted, but there must be some mental process that ensures that our perceptions of ‘self’ are confined to our own bodies. What is it?

Two groups of scientists have taken steps towards answering these questions using illusion and deception. They managed to experimentally induce mild out-of-body experiences in healthy volunteers, using virtual reality headsets to fool people into projecting themselves into a virtual body.

Read More
