Every time you put on some music or listen to a speaker’s words, you are party to a miracle of biology – the ability to hear. Sounds are just waves of pressure, cascading through sparse molecules of air. Your ears can not only detect these oscillations, but decode them to reveal a Bach sonata, a laughing friend, or a honking car.
This happens in three steps. First: capture. The sound waves pass through the bits of your ear you can actually see, and vibrate a membrane, stretched taut across your ear canal. This is the tympanum, or more evocatively, the eardrum. On the other side, the eardrum connects to three tiny, well-named bones—the hammer, anvil and stirrup—which link the air-filled middle ear with the fluid-filled inner ear.
The bones perform the second step: convert and amplify. They transmit all the pressure from the relatively wide eardrum into the much tinier tip of the stirrup, transforming large but faint air-borne vibrations into small but strong fluid-borne ones.
These vibrations enter the inner ear, which looks like a French whisk poking out of a snail shell. Ignore the whisk for now – the shell is the cochlea, a rolled-up tube that’s filled with fluid and lined with sensitive hair cells. These perform the third step: frequency analysis. Each cell responds to a different frequency, and they are neatly aligned so that the low-frequency ones sit at one end of the tube and the high-frequency ones at the other. They’re like a reverse piano keyboard that senses rather than plays. The signals from these cells are passed to the auditory nerve and decoded in the brain. And voila – we hear something.
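The cochlea’s job – splitting a sound into its component frequencies – is the same one a Fourier transform performs in software. As a loose illustration (not from the article; the signal and tones here are invented for the example), a few lines of Python can pick apart a mixture of two tones much as the hair cells do:

```python
import numpy as np

# A 1-second signal sampled at 8 kHz: a 440 Hz tone plus a quieter 1000 Hz tone.
rate = 8000
t = np.arange(rate) / rate
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)

# Decompose the signal into frequencies, as the cochlea does mechanically.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / rate)

# The two loudest frequency bins match the tones we mixed in.
loudest = sorted(freqs[np.argsort(spectrum)[-2:]])
print(loudest)  # [440.0, 1000.0]
```

Where software computes this sum over the whole signal, the cochlea does it in hardware: each position along the tube resonates at its own frequency, so the “spectrum” is read off as a pattern of which hair cells are moving.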
All mammal ears work in the same way: capture sound; convert and amplify; and analyse frequencies. But good adaptations are rarely wasted on just one part of the tree of life. Different branches often evolve similar solutions to life’s problems. And that’s why, in the rainforests of South America, a katydid—a relative of crickets—hears using the same three-step method that we use, but with ears found on its knees.
What part of the body do you listen with? The ear is the obvious answer, but it’s only part of the story – your skin is also involved. When we listen to someone else speaking, our brain combines the sounds that our ears pick up with the sight of the speaker’s lips and face, and subtle changes in air movements over our skin. Only by melding our senses of hearing, vision and touch do we get a full impression of what we’re listening to.
When we speak, many of the sounds we make (such as the English “p” or “t”) involve small puffs of air. These are known as “aspirations”. We can’t hear them, but they can greatly affect the sounds we perceive. For example, syllables like “ba” and “da” are simply versions of “pa” and “ta” without the aspirated puffs.
If you looked at the airflow produced by a puff, you’d see a distinctive pattern – a burst of high pressure at the start, followed by a short round of turbulence. This pressure signature is readily detected by our skin, and it can be easily faked by clever researchers like Bryan Gick and Donald Derrick from the University of British Columbia.
Gick and Derrick used an air compressor to blow small puffs of air, like those made during aspirated speech, onto the skin of blindfolded volunteers. At the same time, the volunteers heard recordings of different syllables – either “pa”, “ba”, “ta” or “da” – all of which had been standardised so they lasted the same amount of time, were equally loud, and had the same frequency.
Gick and Derrick found that the fake puffs of air could fool the volunteers into “hearing” a different syllable to the one that was actually played. They were more likely to mishear “ba” as “pa”, and to think that a “da” was a “ta”. They were also more likely to correctly identify “pa” and “ta” sounds when they were paired with the inaudible puffs.
This deceptively simple experiment shows that our brain considers the tactile information picked up by our skin when it deciphers the sounds we’re listening to. Even parts of our body that are relatively insensitive to touch can provide valuable clues. Gick and Derrick found that their fake air puffs worked if they were blown onto the sensitive skin on the back of the hand, which often picks up air currents that we ourselves create when we speak. But the trick also worked on the back of the neck, which is much less sensitive and unaffected by our own spoken breaths.
While many studies have shown that we hear speech more accurately when it’s paired with visual info from a speaker’s face, this study clearly shows that touch is important too. In some ways, the integration of hearing and touch isn’t surprising – both senses involve detecting the movement of molecules vibrating in the world around us. Gick and Derrick suggest that their result might prove useful in designing aids for people who are hard of hearing.
Reference: Nature doi:10.1038/nature08572