Wind power may prove to be a promising source of clean energy, but it can also be deadly to bats. Not only can the animals be sliced by the blades of wind turbines, but the sudden drop in air pressure around the turbines can also cause bats’ lungs to explode. An electromagnetic field emitted near the turbines, however, may help bats steer clear of them, according to a new study published in the journal PLoS ONE.
Bat casualties near wind turbines have proven to be significant: In 2004, over the course of six weeks, an estimated 1,764 and 2,900 bats were killed at two wind farms in Pennsylvania and West Virginia, respectively [LiveScience]. As wind power becomes increasingly prevalent, the turbines could pose a growing threat to bat populations. “Given the growing number of wind turbines worldwide, this is going to be an increasing problem, no question about that,” said [co-author] Paul Racey [LiveScience].
Bats may have a clever way of catching prey, but it turns out the tiger moth has some tricks of its own to avoid becoming a bat’s next meal. According to a study published in Science, the tiger moth disrupts the sound waves the bat uses to home in on prey by emitting its own ultrasound blasts.
Researchers knew that the tiger moth emitted ultrasound waves, but they weren’t sure why. Previous studies indicated the moth’s sounds might serve to startle the bats, or warn them that the insects were unpalatable. The new research, however, tested both of these theories. The scientists had so-called big brown bats hunt tiger moths in a chamber fitted with ultrasonic recording equipment and high-speed infrared video. If the moth sound is used to startle bats, then in the chamber the bats should be disrupted on first attack, then learn to ignore the ultrasonic click, the team figured. That didn’t happen. If the moths’ clicks are warnings that the insects taste bad, then the bats should hear the click, bite the moth—and never do so again whenever they hear the sound. That didn’t happen either [National Geographic News].
A noisy Italian disco may not seem like a conducive location for scientific experiments, but for a couple of researchers investigating hearing and language processing it was perfect. The undercover scientists studied clubbers who were trying to talk while the music was pumping, and found that they showed a decided preference for speaking into each other’s right ears. What’s more, when the researchers approached clubbers with a request for a cigarette, they found the unwitting test subjects were much more likely to comply if the petition was made in the right ear.
Previous lab studies have also suggested that humans tend to have a preference for listening to verbal input with their right ears, and that given stimuli in both ears, they’ll privilege the syllables that went into the right ear. Brain scientists hypothesize that the right ear auditory stream receives precedence in the left hemisphere of the brain, where the bulk of linguistic processing is carried out [Wired.com]. Researchers say this bias holds true for both lefties and righties.
Scientists have long been impressed with bats’ echolocation calls, the brief bursts of sound that bounce off surrounding objects and allow the bats to navigate in the dark. But now researchers have found a new level of sophistication in those cries. A new study of greater mouse-eared bats shows that bats can distinguish between the calls of different individual bats. Researchers say this could explain how they remain in a group when flying at high speeds in darkness, and how they avoid interference with one another’s echo-location calls [The Guardian].
In the study, published in the journal PLoS Computational Biology, lead researcher Yossi Yovel played the recordings of bat cries back to his test subjects. “Each bat was assigned two others it had to distinguish between,” Dr Yovel explained. “So we trained bat A on a platform, playing a sound from bat B on one side and from bat C on the other. He had to crawl to where the ‘correct’ sound was coming from” [BBC News]. For a correct answer, the bat was rewarded with a mealworm.
Powerful sonar causes temporary hearing loss in dolphins, a new study has confirmed, and could explain some incidents of mass stranding of the marine mammals. The impact of sonar on dolphins has been debated for years, but for the first time, researchers have played recordings of actual naval sonar to a marine mammal and tested its hearing after progressive step-ups in intensity over a couple of months [ScienceNews].
Tests were conducted at the Hawaii Institute of Marine Biology on a captive dolphin, whose head was fitted with a suction cup attached to a sensor that monitored brainwaves. The dolphin was then exposed to progressively louder pings of mid-frequency sonar…. When the pings reached 203 decibels and were repeated, the neurological data showed the mammal had become deaf, for its brain no longer responded to sound [AFP].
Stem cells may one day provide a cure for deafness, if scientists can build on recent experiments in which a British research team grew the very delicate hair cells of the inner ear from fetal stem cells. These inner ear cells are crucial for hearing but are also irreplaceable and extremely frail. The new study marks the first time they’ve been grown in a laboratory.
The use of stem cells is promising because they can become any kind of cell in the body, and could thus not only be used to replace the lost hair cells, but also any damaged nerve cells along which the signals generated by the hair cells are transmitted to the brain [BBC]. The researchers grew the hair cells from cochlear stem cells they’d isolated from fetuses, cells which are only produced from 9 to 11 weeks into a pregnancy. “That’s why deafness is permanent, because we don’t have the stem cells to replace damaged cells in the ear” [New Scientist], says Marcelo Rivolta, lead researcher of the study published in the journal Stem Cells. The stem cells were taken from aborted fetuses, with full consent of the women involved.
While that headline may overstate the case slightly for comic effect, researchers say the gist of it is true: Stroke patients with impaired vision who listened to their favorite music showed vastly improved visual processing. Says lead researcher David Soto: “One of the patients chose Kenny Rogers, another Frank Sinatra and the third a country rock band. It’s not a particular kind of music that’s important, as long as the patient enjoys it” [Daily Mail].
Participants in Soto’s study had suffered lesions to their brains’ parietal cortex, a region central to visual and spatial processing. This left them with a condition called visual neglect, in which people lose half their spatial awareness. Victims will sometimes eat food from only one side of their plate, shave one side of their faces, or — as tested in the study — fail to perceive visual prompts on one side of a computer screen [Wired].
When researcher Julian Asher goes to the symphony, he gets a sensory extravaganza. “When I hear a violin, I see something like a rich red wine,” says Asher…. “A cello is more like honey” [New Scientist]. Asher has a condition called synesthesia in which sensory information gets mixed in the brain; in Asher’s particular form, auditory-visual synesthesia, sounds cause him to see colors. Now, a study led by Asher may have uncovered the genetic source of the condition, which synesthetes say can be both a blessing and a curse.
The researchers collected DNA samples from 196 people who had auditory-visual synesthesia running in their families, they explain in the American Journal of Human Genetics [subscription required]. Asher expected to find a single gene associated with the condition, but scanning the genomes revealed that it was linked to four distinct regions, on chromosomes 2, 5, 6, and 12.
The region that was most strongly linked to synesthesia was an area on chromosome 2 that has also been strongly linked to autism. That doesn’t mean that the two conditions are related, per se, explained Ed Hubbard, a cognitive neuroscientist…. Instead, the common gene or genes are likely “more generally involved in how the brain gets built.” The study also pulled out a region on chromosome 6 that contains genes linked to dyslexia — especially interesting, “seeing as phonemes [the units of sound in language] and letters are two of the strongest synesthetic triggers,” Asher said [The Scientist].
Babies just a few days old can already identify a rhythmic pattern, and their brains show surprise when the music skips a beat, according to a new study. Researchers played recordings that used high-hat cymbals, snare drums, and bass drums to make a funky little beat while monitoring the infants’ brain activity with non-invasive electroencephalogram brain scanners, and found that newborns respond to a skipped beat in the same way that adults do.
The ability to follow a beat is called beat induction. Neither chimpanzees nor bonobos — our closest primate relatives — are capable of beat induction, which is considered both a uniquely human trait and a cognitive building block of music. Researchers have debated whether this is inborn or learned during the first few months of life, calibrated by the rocking arms and lullabies of parents [Wired News]. While the researchers who conducted the new study say their findings are evidence that beat induction is innate, others argue that the newborns could have already learned to identify rhythmic patterns by listening to their mothers’ heartbeats while in the womb.
A mosquito’s whiny buzz may be one of the most annoying noises to human ears, but for some mosquitoes it’s an intricate love song. A new study of the mosquito Aedes aegypti, which carries the infectious diseases dengue fever and yellow fever, has shown that when males and females mate they adjust the speed of their beating wings until their two buzzes combine to produce a harmonious tone. And this isn’t just gee-whiz science: Researchers say the finding could help in the fight against the disease-carrying insects.
The male mosquito’s buzz, or flight tone, is normally about 600 cycles per second, or 600 Hz. The female’s tone is about 400 Hz. In music, he’s roughly a D, and she’s about a G. So the male brings his tone into phase with the female’s to create a near-perfect duet. Together, the two tones create what musicians call an overtone — a third, fainter tone at 1200 Hz. Only then will the mosquitoes mate [NPR]. Researchers were surprised that the mosquitoes could detect the overtone, because they previously believed that A. aegypti males couldn’t hear frequencies above 800 Hz, and the females were thought to be completely deaf.
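The numbers here line up neatly: 1200 Hz is the lowest frequency at which the harmonics of a 600 Hz tone and a 400 Hz tone coincide (2 × 600 = 3 × 400), and the note names follow from the equal-tempered scale with A4 = 440 Hz. A quick sketch in Python can verify both claims (the `nearest_note` helper is illustrative, not part of the study):

```python
import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def nearest_note(freq_hz, a4=440.0):
    """Map a frequency to the nearest equal-tempered note name."""
    semitones = round(12 * math.log2(freq_hz / a4))  # offset from A4
    idx = (semitones + 9) % 12          # A4 sits at index 9 within its octave
    octave = 4 + (semitones + 9) // 12
    return f"{NOTE_NAMES[idx]}{octave}"

print(nearest_note(600))       # D5 — the male's ~600 Hz flight tone
print(nearest_note(400))       # G4 — the female's ~400 Hz flight tone
print(math.lcm(600, 400))      # 1200 — lowest shared harmonic of the pair
```

The shared harmonic is just the least common multiple of the two fundamentals, which is why the faint combined tone the researchers detected sits at exactly 1200 Hz rather than at either insect's own pitch.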