Archive for November, 2009

South African wildlife – Kudu

By Ed Yong | November 22, 2009 10:00 am


 Of all of South Africa’s species of antelope, the kudu is my favourite, mainly because of those elegantly spiralling horns. They adorn the logo of the national parks and several street signs (which promise kudus majestically leaping out across highways, but seldom deliver). And they’re pretty tasty too…

This individual is one of the few adult males we saw. The one in the second photo is a juvenile, and his shorter horns have only begun their first turn. The animals in the bottom two photos are hornless females.


The fact that they’re called antelope suggests a relationship with gazelles, impalas and the like, but kudus are more closely related to cows. They belong to the subfamily Bovinae, which includes domestic cattle and wild cow-like beasts including bison, African buffalo (more on them later), gaur, water buffalo and yak. Other antelopes that belong in this group include the nyala, bongo and the largest of them all – the eland.

It just goes to show the problems with the word “antelope”. The term is used to describe a collection of hooved, plant-eating animals that don’t form an exclusive evolutionary group. The closest you could get to a definition is “any member of the family Bovidae, except cattle, sheep, buffalo, bison or goats”. As if that wasn’t clumsy enough, the pronghorn antelope of North America isn’t really an antelope at all and belongs to a family all of its own.



 More South African photos:





Leafcutter ants rely on bacteria to fertilise their fungus gardens

By Ed Yong | November 21, 2009 11:30 am

Hardly a natural history documentary goes by without some mention of leafcutter ants. So overexposed are these critters that I strongly suspect they’re holding David Attenborough’s relatives to ransom somewhere. But there is good reason for their fame – these charismatic insects are incredibly successful because of their skill as gardeners.

As their name suggests, the 41 species of leafcutter ants slice up leaves and carry them back to their nests in long columns of red and green. They don’t eat the leaves – they use them to grow a fungus, and it’s this crop that they feed on. It’s an old, successful alliance and the largest leafcutter colonies redefine the concept of a “super-organism”.  They include over 8 million individuals, span more than 20 cubic metres and harvest more than 240 kg of leaves every year. They’re technically plant-eaters, with the fungus acting as the super-organism’s external gut.

But the partnership between ant and fungus depends on other collaborators – bacteria. Some of these microbes help the ants to fertilise their gardens with valuable nitrogen, by capturing it from the atmosphere (a process known as “fixing”). Adrian Pinto-Tomas from the University of Wisconsin-Madison managed to isolate strains of these “nitrogen-fixing bacteria” from the gardens of 80 leafcutter colonies, throughout South and Central America.

Nitrogen is a scarce commodity for leafcutters, and the leaves they cut have too little of this vital element. And yet, they clearly get it from somewhere. The exhausted leaves they chuck into their refuse piles have higher proportions of nitrogen than those in the gardens, which have higher proportions than those that are freshly harvested or in the local leaf litter. Somewhere along the way, the cut leaves become enriched with nitrogen.

To find out how, Pinto-Tomas searched captive colonies of leafcutters for telltale signs of nitrogen-fixing bacteria. These microbes extract nitrogen from the air using an enzyme called, appropriately enough, nitrogenase. The enzyme also speeds up other chemical reactions, including converting acetylene into ethylene. So the fate of acetylene reveals the presence of nitrogenase, which in turn reveals the presence of nitrogen-fixing bacteria.

And that’s exactly what happened – the test showed that nitrogenase was present and active in the gardens of all eight leafcutter species that Pinto-Tomas analysed. The enzyme and the bacteria that wield it are particularly active in the centre of the fungus gardens and not at all on the ants themselves, or the leaves they cut. Around half of the garden’s supply of nitrogen comes from these bacteria.

But finding the bacteria wasn’t enough; Pinto-Tomas had to show that these microbes were actually beneficial partners rather than casual stowaways. He did that by sealing the colonies in airtight chambers and pumping in air containing a relatively rare form of nitrogen called nitrogen-15. He found that after a week, levels of this isotope had increased not just in the fungus, but in the worker ants and their larvae too.

The ants were clearly reaping substantial rewards from their bacterial tenants. And by denying the ants access to soil or other food sources, Pinto-Tomas showed that they were indeed getting their nitrogen from these bacteria, and not from other sources.

This joint venture with fungi and bacteria must be a key part of the leafcutters’ undeniable success. It makes them a super-herbivore. The ants don’t fall prey to insecticides produced by plants because the fungus deals with those, and the fungus doesn’t have to cope with anti-fungal countermeasures because the ants break those down before plying it with leaves. As a result, both partners can exploit a massive variety of different plants, rather than specialising in any one type. A lack of nitrogen is the big limiting factor, but the ants can clearly overcome that too, with some bacterial assistance.

The partnership is probably a boon to other plants too. The leaves that the ants discard have 26 times more nitrogen than the surrounding leaf litter and they fertilise the surrounding soil. It’s no coincidence that the diversity of plants tends to skyrocket near a leafcutter garbage dump.

The nitrogen-fixers aren’t the only bacteria that cement the alliance between ant and fungus. A decade ago, Cameron Currie, who was also involved in this study, showed that leafcutters use bacteria as a pesticide. Their gardens are plagued by a different species of virulent, parasitic fungus and to protect their monocultures from these weeds, the ants use a type of Streptomyces bacteria. It hitches a lift on the ants’ shells and secretes antibiotics that halt the growth of the parasite.

These insects really are gardeners par excellence – not only do they successfully grow a monoculture crop, they also use pesticides and fertilisers. Now if they’d only return David Attenborough’s family…

Reference: Science 10.1126/science.1173036

More on ants:

Images by Jarrod Scott, Cameron Currie and Bandwagonman

Memories can be strengthened while we sleep by providing the right triggers

By Ed Yong | November 20, 2009 9:20 am

In my final year of university, with exam deadlines looming and time increasingly fleeting, I considered recording some of my notes and playing them over while I was asleep. The concept of effectively gaining 6 extra hours of revision was appealing, but the idea didn’t stick – it took too long to record the information and the noise stopped me from sleeping in the first place. And the whole thing had a vague hint of daftness about it. But a new experiment suggests that the idea actually has some merit, showing that you can indeed strengthen individual memories by reactivating them as you snooze.

Sleep is a boon to newborn memories. Several experiments have shown that sleep can act as a mental cement that consolidates fragile memories into stable ones. But John Rudoy from Northwestern University wanted to see if this process could be taken even further by replaying newly learned information while people slept.

He asked a dozen volunteers to remember the positions of 50 different objects as they appeared on a screen. The items, from kittens to kettles, were all accompanied by a relevant noise, like a meow or a whistle. Shortly after, the recruits all had a short nap. As they slept, Rudoy played them the sounds for 25 of the objects, against a background of white noise. When the volunteers woke up, they had to place each of the 50 objects in the right position, and they were marked on how close they came to the actual target.

The results were very clear – the volunteers positioned the objects around 15% more accurately if they’d heard the relevant sounds while they slept. Although the sleep sounds didn’t work for everyone, the majority of the participants – 10 out of 12 – benefited in some way. And none of them knew they heard anything at all while they slept. When they were told this and asked to guess which sounds they heard, they didn’t do any better than chance.  
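A rough sanity check on that 10-out-of-12 result: under the null hypothesis that the sleep cues make no difference, each volunteer is equally likely to improve or not. The one-sided sign test below is my own back-of-envelope sketch, not an analysis from the paper.

```python
# Probability of seeing at least 10 improvers out of 12 volunteers
# if improvement were a 50:50 coin flip per person.
from math import comb

def sign_test_p(successes: int, n: int) -> float:
    """One-sided sign test: P(at least `successes` of `n`) under p = 0.5."""
    return sum(comb(n, k) for k in range(successes, n + 1)) / 2 ** n

p = sign_test_p(10, 12)
print(round(p, 4))  # about 0.019 - unlikely to be chance alone
```

A p-value below the conventional 0.05 cut-off, consistent with the cued-sleep benefit being a real effect rather than luck.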


To show that this isn’t just a general benefit of revision, whether conscious or not, Rudoy did a similar experiment. This time, his volunteers heard the noises after they had first seen the objects but while they were still awake. This group proved to be no better at remembering the items’ locations than those who didn’t hear the second round of sounds.

Finally, to understand what was going on in the brains of the slumbering recruits, Rudoy used electroencephalograms (EEG) to measure the electrical activity in the heads of 12 fresh volunteers. He showed that the people who were better at remembering the objects’ positions after their nap were also those who showed the most brain activity when they heard the sounds. Rudoy thinks that hearing the sounds during sleep prompted the brain to rehearse and strengthen associations between the objects and their locations.

Some people think that sleep improves memories in a general way, by making our brains more flexible and easing the incorporation of new information. But these simple experiments show that the benefits can be very specific indeed. Not only is it possible to strengthen specific, individual memories by providing the right triggers, but we get the opportunity to do so every single night.

More on sleep:



Tiny fungi replay the fall of the giant beasts

By Ed Yong | November 19, 2009 2:00 pm

Around 15,000 years ago, North America was home to a wide menagerie of giant mammals – mammoths and mastodons, giant ground sloths, camels, short-faced bears, American lions, dire wolves, and more. But by 10,000 years ago, these “megafauna” had been wiped out. Thirty-four entire genera went extinct, including every species that weighed over a tonne, leaving the bison as the continent’s largest animal.

In trying to explain these extinctions, the scientific prosecution has examined suspects including early human hunters, climate change and even a meteor strike. But cracking the case has proved difficult, because most of these events happened at roughly the same time. To sort out this muddled chronology, Jacquelyn Gill has approached the problem from a fresh angle. Her team have tried to understand the final days of these giant beasts by studying a tiny organism, small enough to be dwarfed by their dung – a fungus called Sporormiella.

Sporormiella grows in the droppings of large plant-eating mammals and birds, and it leaves tell-tale spores in its wake. More spores mean more dung, so Sporormiella acts as a rough indicator of the number of herbivores in a given area. The fall of these beasts is reflected in falling numbers of spores.

Gill counted these spores in the sediment of Indiana’s Appleman Lake, and compared them to counts of fossilised pollen and charcoal from the same soil. That allowed her to match the numbers of plant-eaters at any given time with the local plant species and the frequency of forest fires.

Using this fungal index, Gill has produced a detailed timeline of the changes in the Pleistocene. Her revised history argues against a role for climate change or alien rocks, but fails to clear early humans of the blame. More importantly, it suggests that many events that happened around the same time, such as an upheaval in the local plant communities and a rise in large infernos, were the result of the beasts’ decline, rather than its cause.

The spores revealed that the fall of the megafauna began in earnest around 14,800 years ago. By the 13,700 year mark, their numbers had fallen to less than 2% of their former glory. They never recovered, but it clearly took a few more millennia for the stragglers to succumb – the last bones of the great beasts date to around 11,500 years ago. 

Changes in the local vegetation happened after the beasts started disappearing, around 13,700 years ago. Before this point, the environment was open grassland with the odd tree. Fires were a rarity. But without the suppressive mouths of the big plant-eaters, trees grew unchecked, producing a combo of vegetation you just don’t see today. Large numbers of temperate deciduous trees like elm and ash happily coexisted with cold-loving conifers like larch and spruce.

And with them came fires, large infernos that broke out around 14,000 years ago and returned every century or so for the next few millennia. The pollen and charcoal of Appleman Lake tell the story of these changes, and also show that they came after the beasts’ disappearance.

Right away, this timeline rules out the possibility that a collision with a large space object killed the megafauna. The proponents of that theory place the collision at around 13,000 years ago, after the giants had started to decline. And it’s clear that extinctions were long, drawn-out affairs, rather than the relatively rapid annihilations you’d expect from an extraterrestrial impact. 

Likewise, changing climate becomes an unlikelier suspect. The megafaunal extinction predated a rapid, millennium-long chill called the Younger Dryas that took place between 12,800 and 11,500 years ago. When the megafauna started dying, the Earth was going through a warming phase. That might well have affected them, but it didn’t do so through the most obvious method – changing the plants they ate. After all, Gill’s work tells us that the beasts’ disappearance changed the plants, not the other way round.

What about humans, those pesky slayers of animals? Some scientists believed that North America’s Clovis people specialised in hunting big mammals, causing a “blitzkrieg” of spear-throwing that drove many species to extinction. But these hunters only arrived in North America between 13,300 and 12,900 years ago, some 1,500 years after the population crashes had begun.

If people were responsible, they must have been pre-Clovis settlers. There’s growing evidence that such humans were around, but they weren’t common or specialised. They may have contributed to the beasts’ downfall, while Clovis hunting technology delivered a coup de grace to already faltering populations.

By analysing the sediment at Appleman Lake – spores, pollen, charcoal and all – Gill has replayed the history of the site, spanning the last 17,000 years. Her data rule out a few theories, but as she says, they “[do] not conclusively resolve the debate” about climate causes versus human ones. It’s possible that similar studies at different sites and on other continents will help to provide more clues.

Meanwhile, her study certainly tells us more about what happened in Earth’s recent history, when a large swathe of hefty plant-eaters died off – a change from savannah to woodland, and more fires. This isn’t just a matter of historical interest. The same events might be playing out today, as the largest modern land mammals suppress fires by eating flammable plants, and are facing a very real threat of extinction. History could well repeat itself.  

Reference: Science 10.1126/science.1179504

More on megafauna:


Breaking the inverted pyramid – placing news in context

By Ed Yong | November 18, 2009 9:30 am

News journalism relies on a tried-and-tested model of inverted storytelling. Contrary to the introduction-middle-end style of writing that pervades school essays and scientific papers, most news stories shove all the key facts into the first paragraphs, leaving the rest of the prose to present background, details and other paraphernalia in descending order of importance. The idea behind this inverted pyramid is that a story can be shortened by whatever degree without losing what are presumed to be the key facts.

But recently, several writers have argued that this model is outdated and needs to give way to a new system where context is king. Jason Fry argues that this “upside-down storytelling” is broken, and while his piece primarily deals with sports reporting, his arguments apply equally to other areas.

“Arrive at the latest newspaper story about, say, the health-care debate and you’ll be told what’s new at the top, then given various snippets of background that you’re supposed to use to orient yourself. Which is serviceable if you’ve been following the story (though in that case you’ll know the background and stop reading), but if you’re new you’ll be utterly lost.”

Fry cites an excellent article by Matt Thompson at Nieman Reports, which compares the reading of modern news to “requiring a decoder ring, attainable only through years of reading news stories and looking for patterns, accumulating knowledge”. Both writers make excellent points that are especially problematic for bigger stories, where rolling coverage drives audiences deeper into the latest minutiae and further away from the context needed to make sense of it all. The problem isn’t limited to old media – blogs often send readers on interminable trails of links and archived posts to the start of a debate or topic.

These issues are highly relevant to science journalism. Here, context is vital for placing new findings against the body of research that inspires, supports or contradicts them. It shows you the giant shoulders that each new discovery stands upon.

Take the widely reported news about FOXP2, the so-called “language gene” last week. The human version of FOXP2 encodes a protein that’s just two amino acids away from its chimp counterpart. FOXP2 is an executive gene that controls the activity of many others; a new study in Nature showed that the two changes that separate the human and chimp proteins give FOXP2 control over a different network of minions. This could have been an important step in the evolution of human speech.

Cue the headlines saying that the human speech gene had been found and that one gene prevents chimps from talking. One site even claimed that one gene tweak could make chimps talk. But human speech is a complicated business, involving radical changes to both our brains and our anatomy. FOXP2 may have been an important driver of these changes but the odds of there being a single language gene are about as high as there being a gene for penning fatuous headlines or writing in an inverted-pyramid style. And experiments in mice, birds and even bats have suggested that if it’s a gene for anything, it’s for learning coordinated movements.

When I saw the press copy of the paper, I knew that it was going to be big and that I wanted to cover it. But I wanted to try something different. Last year, I wrote a long feature for New Scientist about the FOXP2 story, from the gene’s discovery to the erosion of its “language gene” moniker. Instead of covering the paper fresh, I decided to re-edit the feature, incorporating the new discoveries (and others that had come out in the last year) into the narrative I’d already crafted. The result is a living story, an up-to-date version of the FOXP2 tale, kitted out in this season’s colours. The new stuff is there, but you hopefully get the nuances that are necessary to appreciate their significance. I’m pleased with the result and I want to do more.

I’ve touched on the idea of living stories in my write-ups of the World Conference of Science Journalists. There Krishna Bharat, founder of Google News, cited the Wikipedia page on swine flu as an example of a “timeless resource”, constantly updated as statistics changed and discoveries were revealed. The page provided a valuable insight into a rapidly developing topic without simply setting new statistics adrift in a barren and featureless sea.  

Fry and Thompson also cite Wikipedia as an example of how it should be done, and they quote an interview with co-founder Jimmy Wales, who notes that the online encyclopaedia is now a major attraction for news-hungry readers. On Wikipedia, the latest goings-on are added, but they’re never allowed to ride shotgun at the expense of context. Clearly, something about the model is working, and “topic pages” are an emerging trend in the world of online news. The New York Times has introduced them. New Scientist has them. The Associated Press are following suit.

That’s not to say that news pieces as we know them are journalistic dinosaurs. After all, people go to Wikipedia for summaries of newsworthy topics after finding out about them through more traditional channels. I doubt that many use the site as their primary news source. At a population level, a mix of approaches seems best – reporting of news alongside living resources that place them within a broader landscape.

This is especially needed when it comes to health-related stories, where new studies about Risk X and Disease Y must be weighed up against others of their ilk. Currently, this is a rarity – the focus on new news paints a picture of rapidly seesawing consensus, when the reality is more like a feather causing a weighted scale to teeter.

On an individual level, writers can also do more within the bounds of a single story, especially in the different environment offered by online media. Some selection pressures are the same – having important keywords in opening paragraphs pleases search engines and editorial conventions alike. But others are more relaxed – the inverted pyramid style may have been essential in a print environment where limited column space could hack a long piece to mere paragraphs, but such unnecessary constraints are irrelevant online. Here, pieces can find room to breathe, and Z-list elements like details and background can find their rightful place at a story’s heart.

This is the approach that I try for in this blog, making news stories read more like mini-features. They’re less inverted-pyramid and more factual oblongs. I try to get the important stuff in early for the attention-deficit among us, but there’s no rush. I try to get a narrative in there without resorting to a straightforward school-essay structure. I hope it works, and I’m happy to take feedback. Meanwhile, I’m also considering adding topic pages for the pet issues that I find myself returning to time and again – horizontal gene transfer, embodied cognition, animal cooperation, transitional fossils… you know, the good stuff.


More on journalism: 



Elephants and humans evolved similar solutions to problems of gas-guzzling brains

By Ed Yong | November 16, 2009 5:04 pm

At first glance, the African elephant doesn’t look like it has much in common with us humans. We support around 70-80 kg of weight on two legs, while it carries around four to six tonnes on four. We grasp objects with opposable thumbs, while it uses its trunk. We need axes and chainsaws to knock down a tree, but it can just use its head. Yet among these differences, there is common ground. We’re both long-lived animals with rich social lives. And we have very, very large brains (well, mostly).

But all that intelligence doesn’t come cheaply. Large brains are gas-guzzling organs and they need a lot of energy. Faced with similarly pressing fuel demands, humans and elephants have developed similar adaptations in a set of genes used in our mitochondria – small power plants that supply energy to our cells. The genes in question are “aerobic energy metabolism (AEM)” genes – they govern how the mitochondria metabolise nutrients in food, in the presence of oxygen.

We already knew that the evolution of AEM genes has accelerated greatly since our ancestors split away from those of other monkeys and apes. While other mutations were reshaping our brain and nervous system, these altered AEM genes helped to provide our growing cortex with much-needed energy.

Now, Morris Goodman from Wayne State University has found evidence that the same thing happened in the evolution of modern elephants. It’s a good thing too – our brain accounts for a fifth of our total demand for oxygen but the elephant’s brain is even more demanding. It’s the largest of any land mammal, it’s four times the size of our own and it requires four times as much oxygen.

Goodman was only recently furnished with the tools that made his discovery possible – the full genome sequences of a number of oddball mammals, including the lesser hedgehog tenrec (Echinops telfairi). As its name suggests, the tenrec looks like a hedgehog, but it’s actually more closely related to elephants. Both species belong to a major group of mammals called the afrotherians, which also includes aardvarks and manatees.

Goodman compared the genomes of 15 species including humans, elephants, tenrecs and eight other mammals and looked for genetic signatures of adaptive evolution. The genetic code is such that a gene can accumulate many changes that don’t actually affect the structure of the protein it encodes. These are called “synonymous mutations” and they are effectively silent. Some genetic changes do, however, alter protein structure, and these “non-synonymous mutations” are more significant and more dramatic, for even small tweaks to a protein’s shape can greatly alter its effectiveness. A high ratio of non-synonymous mutations to synonymous ones is a telltale sign that a gene has been the target of natural selection.
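The distinction between the two kinds of mutation can be made concrete with a short sketch. The codon table below is a tiny, hand-picked subset of the standard genetic code – just enough to illustrate the idea, not the tool used in the study.

```python
# A few codons from the standard genetic code, mapped to the amino
# acids they encode (subset for illustration only).
CODON_TABLE = {
    "GAA": "Glu", "GAG": "Glu",   # both encode glutamate
    "GAC": "Asp", "GAT": "Asp",   # both encode aspartate
    "AAA": "Lys",                 # lysine
}

def classify(before: str, after: str) -> str:
    """Label a codon substitution as silent or protein-changing."""
    if CODON_TABLE[before] == CODON_TABLE[after]:
        return "synonymous"       # same amino acid: effectively silent
    return "non-synonymous"       # amino acid changes: selection can act

# GAA -> GAG still encodes glutamate, so the protein is untouched.
print(classify("GAA", "GAG"))  # synonymous
# GAA -> AAA swaps glutamate for lysine, altering the protein.
print(classify("GAA", "AAA"))  # non-synonymous
```

Counting substitutions of each kind across a gene, and comparing the two tallies, gives the ratio that Goodman used as his signature of natural selection.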

And sure enough, elephants have more than twice as many genes with high ratios of non-synonymous mutations to synonymous ones than tenrecs do, particularly among the AEM genes used in the mitochondria. In the same way, humans have more of such genes compared to mice (which are as closely related to us as tenrecs are to elephants).

These changes have taken place against a background of less mutation, not more. Our lineage, and that of elephants, has seen slower rates of evolution among protein-coding genes, probably because our lifespans and generation times have increased. Goodman speculates that with lower mutation rates, we’d be less prone to developing costly faults in our DNA every time it was copied anew.

Overall, his conclusion was clear – in the animals with larger brains, a suite of AEM genes had gone through an accelerated burst of evolution compared to our mini-brained cousins. Six of our AEM genes that appear to have been strongly shaped by natural selection even have elephant counterparts that have gone through the same process.

Of course, humans and elephants are much larger than mice and tenrecs. But our genetic legacy isn’t just a reflection of our bigger size, for Goodman confirmed that AEM genes hadn’t gone through a similar evolutionary spurt in animals like cows and dogs.

Goodman’s next challenge is to see what difference the substituted amino acids make to humans and elephants, and whether they make our brains more efficient at producing aerobic energy. He also wants to better understand the specific genes that have been shaped by the convergent evolution of human and elephant brains. That task should certainly become easier as more and more mammal genomes are published.

Reference: PNAS doi:10.1073/pnas.0911239106

More on elephants:


Some housekeeping

By Ed Yong | November 15, 2009 6:13 pm

Hi folks,

A couple of housekeeping issues:

  • ScienceBlogs have developed a set of funky widgets that allow you to share the headlines from your favourite blogs on other websites. You can find the one for Not Exactly Rocket Science here – just click Share, and then Install outside Netvibes.
  • The deadline is looming for this year’s Open Laboratory compilation of the science blogosphere’s best offerings. If any posts in this blog have tickled your fancy, stretched your brain or stoked your loins (heaven forbid, but there are some strange people on the internet), submit them for consideration here. For full disclosure purposes, I am helping to judge this year’s competition, but I will obviously not be judging my own work except in a non-competition, self-critical, tortured-soul, writery sort of way.
  • Recently, due to overwhelming demand (n=2), I’ve changed the way that posts appear so that the full shebang appears above the fold rather than teasing readers and making you click for the payoff. It makes the front page a bit messier, but I’m told this is easier for people reading on phones. There hasn’t been a noticeable drop in traffic. Is everyone happy with this change, are you for some reason against it, or have you actually failed to notice any difference whatsoever?



Cooperating bacteria are vulnerable to slackers

By Ed Yong | November 15, 2009 10:00 am

As a species, we hate cheaters. Just last month, I blogged about our innate desire to punish unfair play, but it’s a sad fact that cheaters are universal. Any attempt to cooperate for a common good creates windows of opportunity for slackers. Even bacterial colonies have their own layabouts. Recently, two new studies have found that some bacteria reap the benefits of communal living while contributing nothing in return.

Bacteria may not strike you as expert co-operators but at high concentrations, they pull together to build microscopic ‘cities’ called biofilms, where millions of individuals live among a slimy framework that they themselves secrete. These communities provide protection from antibiotics, among other benefits, and they require cooperation to build.

This only happens once a colony reaches a certain size. One individual can’t build a biofilm on its own, so it pays for a colony to be able to measure its own size. To do this, they use a method called ‘quorum sensing’, where individuals send out signalling molecules in the presence of their own kind.

When another bacterium receives this signal, it sends out some of its own, so that once a population reaches a certain density, it sets off a chain reaction of communication that floods the area with chemical messages.

These messages provide orders that tell the bacteria to secrete a wide range of proteins and chemicals. Some are necessary for building biofilms, others allow them to infect hosts, others make their movements easier and yet others break down potential sources of food. They tell bacteria to start behaving cooperatively and also when it’s worth doing so.
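The density-dependent switch described above can be caricatured in a few lines of code. This is my own toy sketch with made-up numbers, not a model from either study: each cell secretes signal, and once the accumulated concentration crosses a threshold, the cooperative genes switch on.

```python
# Toy quorum-sensing switch (illustrative units, not measured values).
SIGNAL_PER_CELL = 1.0   # signal secreted by each bacterium
THRESHOLD = 50.0        # concentration that triggers cooperative genes

def colony_cooperates(n_cells: int) -> bool:
    """True once the colony is dense enough to flood the area with signal."""
    signal = n_cells * SIGNAL_PER_CELL
    return signal >= THRESHOLD

print(colony_cooperates(10))   # False: too few cells, no point building a biofilm
print(colony_cooperates(200))  # True: quorum reached, secretion begins
```

The real system is richer – receiving signal makes a cell secrete more of it, producing the chain reaction the post describes – but the threshold logic is the essential point: cooperation only pays once enough neighbours are present.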

Steve Diggle and colleagues from the Universities of Nottingham and Edinburgh have found that bacterial slackers can exploit this system. They studied an opportunistic species called Pseudomonas aeruginosa that preys on the weak. It’s a major cause of hospital infections, setting up shop in burn victims, cystic fibrosis patients and others with weakened immune systems.

The bug’s success hinges on quorum sensing, which allows it to thrive in limited environments by cooperating. When Diggle cultured the bacteria in a nutritionally poor liquid with proteins as the only food source, they grew happily nonetheless. That’s because the chemical signals exchanged as part of quorum sensing also trigger the release of proteases, enzymes that can digest proteins.

Diggle then tested two mutant forms of P. aeruginosa that are commonly found in nature. The first – the ‘signal-negative’ version – can’t produce signalling molecules but can react to them. It obeys orders to secrete the right chemicals but never passes the orders along. As such, it doesn’t secrete enough proteases and grows poorly in a protein-only solution. However, it picked up the pace when it was artificially doused in signalling molecules that compensated for its deficiency.

The ‘signal-blind’ mutant is even more of a slacker – it can’t react to signals at all, so it doesn’t help or communicate. This strain also grew poorly in the protein-only liquid and only matched the normal strain if it was artificially given extra proteases.

If quorum sensing provides such obvious benefits, you might expect all bacteria to take part. But there is a catch – it’s also quite draining. Making signals and proteases takes up energy, and when Diggle placed the different strains in a rich, nutritious solution, the mutants vastly outgrew the normal strain. With an abundance of easily digested food, it was every bacterium for itself, and the mutants, which weren’t busy making expensive signals and proteases, did better.

Quorum sensing may be good for the group, but for each individual bacterium, it pays to sit back and let your peers do all the work. Diggle demonstrated this by allowing the normal and mutant strains to compete in the protein-only liquid, in a real-time evolution experiment. The mutant strains were engineered with luminescent genes so that the team could track their growth by the light they gave off.

At first, the signal-blind cheats made up just 1% of the population but after 2 days, they accounted for 45% of it. In a separate culture, the proportion of signal-negative cheats went from 3% to 66%. Among P. aeruginosa, cheaters can indeed prosper and then some – they outgrew their cooperating cousins by 60 to 80 times.

In a separate study with the same species, Kelsi Sandoz from Oregon State University found that cheaters evolve naturally. Like Diggle, she grew a normal strain of P. aeruginosa in conditions where they needed to make proteases to survive.

After 12 days, she managed to isolate specific colonies that weren’t pulling their weight. All of them had developed mutations in a key gene involved in quorum sensing which meant that they were only secreting a very small amount of protease. Within 20 days, these cheats made up 40% of the cultures. 

Cheaters prosper

Why, then, do any individuals bother cooperating at all? If slacking is so profitable, why doesn’t everyone do it? For a start, cheating pays fewer dividends if you do it at the expense of relatives who share your genes. This is especially true for bacterial colonies that reproduce asexually and spawn genetically identical clones.

In this case, helping your neighbour pays off because it ensures that your genes are passed on to the next generation. Diggle found that when the bacteria were very closely related to each other, mutants were much less likely to gain a foothold in a population of co-operators.

There is another reason though, and it’s probably more important. Both studies found that as the proportion of cheaters increased, their growth rate dropped because the value of cheating diminished.

Slackers only prosper if they can cadge off a hard-working population – if every bacterium took the easy way out, there would be no proteases and no food. Sandoz found that when this happened, the entire population suffered and overall growth plummeted. If there were enough cheaters, the signalling molecules became too dilute, the ‘quorum’ fell apart and the population crashed.
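The logic behind this collapse is easy to sketch with a toy model (the numbers are mine, purely for illustration, not data from either study): everyone’s growth depends on a shared protease pool that only co-operators pay to stock.

```python
# A toy illustration of frequency-dependent cheating: the shared food
# supply scales with the fraction of co-operators, who alone pay the
# cost of making signals and proteases. Cheats always do better than
# their neighbours, but everyone does worse as cheats take over.

def growth_rates(coop_frac, benefit=2.0, cost=0.5):
    """Per-capita growth for co-operators and cheats at a given mix."""
    food = benefit * coop_frac        # shared good scales with co-operators
    return food - cost, food          # co-operators pay the cost; cheats don't

for frac in (0.9, 0.5, 0.1):
    coop, cheat = growth_rates(frac)
    print(f"co-operators at {frac:.0%}: coop grows {coop:.2f}, cheat grows {cheat:.2f}")
```

Run the numbers and the trap is obvious: at any mix, a cheat outgrows a co-operator by exactly the cost it dodges, yet once co-operators become rare, even the cheats barely grow at all.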

However, Sandoz also found that the bacteria usually evolved compensatory measures in time to stop this from happening. During her study, she saw that many cheaters developed further mutations that restored protease production. Faced with a sinking ship, there was strong evolutionary pressure for them to swap sides and start cooperating again.


Diggle, S., Griffin, A., Campbell, G., West , S. (2007). Cooperation and conflict in quorum-sensing bacterial populations. Nature, 450, 411-415.

Sandoz, K., Mitzimberg, S., Schuster, M. (2007). Social cheating in Pseudomonas aeruginosa quorum sensing. Proceedings of the National Academy of Sciences, 104(40), 15876-15881.


South African wildlife – Tyson the leopard

By Ed Yong | November 14, 2009 12:00 pm

This is Tyson, a male leopard and one of the last animals we saw on our South African safari. We only took headshots of him, but even from these you can see that he’s stockier and more powerfully built than Safari, the female leopard whose photos I showed a few weeks back. Living up to his name, Tyson probably weighs around 80kg.

And yet while we watched, he pulled off a languid stretch that made him look for all the world like a giant house cat – paws outstretched, maw agape and back arched in a graceful curve.

As he walked off, he marked his territory with a scent gland on his rump. I’m told that leopard scent markings smell rather a lot like popcorn, leading our guide to advise us, “If you smell popcorn during the drive, please tell us and stay inside the jeep.”

Travels with dopamine – the chemical that affects how much pleasure we expect

By Ed Yong | November 12, 2009 12:00 pm

How would you fancy a holiday to Greece or Thailand? Would you like to buy an iPhone or a new pair of shoes? Would you be keen to accept that enticing job offer? Our lives are riddled with choices that force us to imagine our future state of mind. The decisions we make hinge upon this act of time travel and a new study suggests that our mental simulations of our future happiness are strongly affected by the chemical dopamine.

Dopamine is a neurotransmitter, a chemical that carries signals within the brain. Among its many duties is a crucial role in signalling the feelings of enjoyment we get out of life’s pleasures. We need it to learn which experiences are rewarding and to actively seek them out. And it seems that we also depend on it when we imagine the future.

Tali Sharot from University College London found that if volunteers had more dopamine in their brains as they thought about events in their future, they would imagine those events to be more gratifying. It’s the first direct evidence that dopamine influences how happy we expect ourselves to be.


When we learn about new experiences, neurons that secrete dopamine seem to record the difference between the rewards we expect and the ones we actually receive. In encoding the gap between hope and experience, these neurons help us to repeat rewarding actions.
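For the mathematically minded, this gap between hope and experience is what learning theorists call a prediction error, and the learning it drives can be captured in a few lines (a textbook-style toy, not the model used in these studies):

```python
# A minimal sketch of prediction-error learning: the "dopamine signal"
# is the gap between the reward received and the reward expected, and
# the expectation shifts a little towards reality each time. The
# learning rate of 0.2 is an arbitrary illustrative choice.

def update_expectation(expected, received, learning_rate=0.2):
    prediction_error = received - expected   # the gap the neurons encode
    return expected + learning_rate * prediction_error

expectation = 0.0
for _ in range(10):                          # repeatedly receive a reward of 1.0
    expectation = update_expectation(expectation, 1.0)
print(expectation)                           # creeps towards 1.0
```

As the expectation homes in on the real reward, the error shrinks – the surprise fades, and with it the signal, which is exactly why these neurons are so useful for learning which actions are worth repeating.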

This was clearly demonstrated in 2006, when Mathias Pessiglione showed that people’s ability to learn about rewards could be improved by giving them a drug called L-DOPA. It’s a precursor to dopamine, a sort of parent molecule that can increase the concentrations of its offspring. Pessiglione asked volunteers to learn links between different symbols and different financial rewards. He found that under the influence of L-DOPA, they were better at picking the symbols that earned them the most cash.

Pessiglione’s study was important, but his volunteers were forced to make a fairly artificial choice between two virtual symbols in a constrained lab setting. What happens in real life, when choices are complex and our decisions hinge on our ability to think about the future?

To answer that, Sharot recruited 61 volunteers and asked them to say how happy they’d feel if they visited one of 80 holiday destinations, from Greece to Thailand. All of the recruits were given a vitamin C supplement as a placebo and 40 minutes later, they had to imagine themselves on holiday at half of the possible locations. After this bout of fanciful daydreaming, they had to take another pill but this time, half of them were given L-DOPA instead of the placebo. Again, they had to imagine themselves in various holiday spots.

The next day, Sharot brought the volunteers back. By this time, they would have broken down all the L-DOPA in their system. She asked them to choose which of two destinations they’d like to go to, from the set that they had thought about the day before. Finally, they rated each destination again.

By the end of the experiment, the volunteers perceived their imaginary holidays to be more enjoyable if they had previously thought about the locations under the influence of L-DOPA (while vitamin C, as predicted, had no effect). The implication is clear: think about the future with more dopamine in the noggin and you’ll imagine yourself having a better time.

Critically, this wasn’t because they were feeling happier in the actual moment. All the recruits filled in questionnaires about their emotional state every time they took a pill, and these revealed that the dopamine boost didn’t actually affect their present state of mind. All it did was change their predictions of their future state of mind. These happier predictions affected their choices too – more often than not, they chose to travel to destinations that they had envisioned through dopamine-tinted goggles.

How dopamine has its way is unclear. Sharot suggests that it could boost how much we want something when we imagine it. Its effects could also tie into its role in learning. When we imagine the future, this chemical strengthens the link between what we think about and any feelings of enjoyment we might gain from it. This model fits with the fact that some neurons in the striatum become more active the more pleasure we expect from an experience.

Either way, it’s clear that our knowledge of dopamine’s myriad roles is just beginning. Broadening that knowledge is important for understanding our own behaviour, which, as Sharot says, “is largely driven by estimations of future pleasure and pain”.


Reference: Current Biology 10.1016/j.cub.2009.10.025


