By directing the evolution of a worm, scientists have confirmed answers to the age-old question: “What is the point of having sex with someone else?” For most people, that would hardly be a tricky query but it’s no reflection on the lives of evolutionary scientists that sex has been one of biology’s oldest puzzles.
The problem is this: many creatures can reproduce by fertilising themselves instead of getting someone else to do it, and at first glance they should do much better than individuals that cross-fertilise. For a start, they’d ensure that all of their genes reach the next generation, while mating with another individual halves their genetic legacy. And without having to find males, self-fertilising females should be able to produce twice as many offspring. This is the “two-fold cost of males”.
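The arithmetic behind that cost is simple enough to sketch in a few lines of toy Python (my own illustration, not from the study): a selfing mother contributes both gene copies to each offspring, while an outcrossing mother contributes only one, the father supplying the other.

```python
# Toy model of the "two-fold cost of males" (illustrative only).
def gene_copies_transmitted(offspring: int, selfing: bool) -> int:
    """Copies of a parent's genome that reach the next generation."""
    copies_per_offspring = 2 if selfing else 1  # a selfer supplies both copies
    return offspring * copies_per_offspring

brood = 10
print(gene_copies_transmitted(brood, selfing=True))   # 20
print(gene_copies_transmitted(brood, selfing=False))  # 10
```

Per offspring, the selfer transmits twice as many gene copies, which is exactly the head start that cross-fertilisation has to overcome.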
And yet, cross-fertilisation is the more common strategy in the animal world, so it must have advantages that compensate for its cons. Scientists typically name two. The first is that by shuffling the genes of two parents, cross-fertilisation deals the next generation a fresh genetic hand, better equipping it to rapidly adapt to changing environments, predators and parasites. The second is that having sex with someone else prevents harmful mutations from building up (the genetic defects that plague inbred families would be even worse in lineages that only ever have sex with themselves). They’re the same reasons why sex itself is usually a better long-term solution than asexual cloning.
The problem is that both of these explanations have proven very difficult to test. But that didn’t stop Levi Morran and colleagues from the University of Oregon, who demonstrated that both justifications are correct, by manipulating the evolution of the nematode worm Caenorhabditis elegans.
Like humans, C.elegans has two sexes but unlike us, they are males and hermaphrodites (with males making up just one in every two thousand individuals). Equipped with both sets of genitals, hermaphrodite worms can fertilise themselves without male help – far from being rude, telling C.elegans to go &$&! itself is a feasible lifestyle suggestion. Hermaphrodites can also mate with males, but they do so on less than one in 20 occasions.
However, the genetics of this animal are so well-known that Morran managed to use two mutations to create strains of C.elegans that either always had sex with themselves, or always had sex with other worms. Morran subjected these two engineered strains, as well as a normal one, to two challenges.
Some were exposed to a chemical called ethyl methanesulphonate (EMS) that quadruples the normal mutation rate of DNA, riddling their genomes with potentially harmful genetic changes. To make matters worse, they were placed in a new environment that should weed out all but the fittest individuals.
Despite these challenges, the strain that always mated with others was still successful after 50 generations and fared much better than the strain that only had sex with itself. Even the normal worms moved towards a cross-fertilising strategy under these harsh conditions. These are all signs that having sex with others provides a way of purging harmful mutations from a population. By contrast, Morran estimated that the genetic burden carried by the worms that only ever mated with themselves would drive them to extinction after a few hundred generations.
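Morran’s extinction estimate comes from the paper’s own modelling, but the flavour of the argument can be captured with a crude toy calculation (my own sketch, with invented numbers): if a selfing lineage fixes harmful mutations at a steady rate and each one trims fitness slightly, mean fitness decays geometrically and collapses within a few hundred generations.

```python
# Crude mutation-load sketch (not Morran's actual model; numbers invented).
def generations_to_collapse(mutations_per_gen: float,
                            cost_per_mutation: float,
                            threshold: float = 0.01) -> int:
    """Generations until mean fitness falls below `threshold`."""
    fitness, load, generation = 1.0, 0.0, 0
    while fitness > threshold:
        load += mutations_per_gen                  # mutations keep accumulating
        fitness = (1 - cost_per_mutation) ** load  # each one trims fitness
        generation += 1
    return generation

# One mildly harmful mutation fixed per generation, each costing 2% of fitness:
print(generations_to_collapse(1.0, 0.02))  # collapses after a few hundred generations
```

With those made-up parameters the lineage’s fitness drops below 1% of its starting value after roughly 230 generations – the same order of magnitude as the doomed self-fertilising worms.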
Morran also exposed some of his worms to Serratia marcescens, a bacterium so virulent that it kills 80% of the worms it infects. It was a test of their ability to rapidly adapt to new challenges, and one that the cross-fertilisers passed with flying colours. They quickly evolved to resist the bacterium, while the populations that only mated with themselves did not. As before, the normal worms shifted towards a cross-fertilisation strategy when faced with the new threat.
So both theories are correct – compared to having sex with yourself, doing it with someone else provides a way of resisting harmful mutations and adapting quickly to new challenges. Morran says that species that evolve to always self-fertilise become trapped in an “evolutionary dead-end” and are “ultimately doomed to extinction”.
Reference: Nature doi:10.1038/nature08496
Image: C.elegans by Bob Goldstein
Cast your mind back to June, when a stunning fossil animal called Darwinius (alternatively Ida or “The Link”) was unveiled to the world to tremendous pomp and circumstance. Hyperbolic ads declared the day of Ida’s discovery as the most important for 47 million years. A press release promised that she would “change everything”, headlines proclaimed her a “missing link in evolution” and the scientists behind the discovery billed her as “the closest thing we can get to a direct ancestor“.
And according to a new study, none of that is true. Mere months later, Erik Seiffert from Stony Brook University has done a comprehensive analysis of the bones of 117 primates, both living and extinct, which throws Ida’s supposed direct line of ancestry to humans into serious doubt.
Central to this new work is a new fossil called Afradapis, a member of the same group of extinct primates – the adapids – that Darwinius belonged to. The two were closely related but separated by around 10 million years. Like those of its more famous cousin, Afradapis’s jaw and teeth contain features that are similar to those of anthropoids – monkeys, apes and humans. But far from being a sign of direct ancestry, Seiffert thinks that these features represent convergent evolution – the two groups evolved them independently.
His team compared and contrasted 360 features in the bones of 117 living and extinct primates. Among them were 24 adapids, including Darwinius, Afradapis and eight others that had not previously been analysed. This comprehensive set of data revealed the group’s family tree, charting their relationships using their overall anatomy as a guide. And it clearly shows that adapids (and Ida among them) were more closely related to modern lemurs than to anthropoids (monkeys, apes and humans). The two groups sit on different branches of the evolutionary tree.
The analysis also reveals that even though the adapids were a successful and widespread group, they left no living descendants. For all the hype, Ida turns out to be the ancestor of bugger all.
To those who followed the criticisms of the Darwinius hype, this volte face shouldn’t come as a surprise. The paper describing the fossil was criticised for juggling the structure of the primate family tree to shift Ida’s branch closer to ours. To recap, there are three groups vying for position as the ancestors of the anthropoids: the bizarre, large-eyed tarsiers, the related and extinct omomyids, and the equally extinct adapids. The general consensus places the first two groups closest to us; Ida’s discoverers think the adapids should be there instead.
To support that view, they looked at 30 traits that might help to settle the question, noted whether Ida had them or not, and on the basis of this single species concluded that the adapids belong next to the anthropoids. That approach seems positively minimalist compared to the one that Seiffert took, which included 12 times as many anatomical features and 117 times as many animals!
Seiffert’s tree places the tarsiers and omomyids as the closest relatives of the anthropoids – this is the so-called haplorrhine group. The adapids, however, are part of the strepsirrhine dynasty, the group that includes lemurs, lorises and bushbabies. This is the sort of analysis that was sorely lacking in the Darwinius paper.
There is no doubt that Ida is a beautiful fossil, but Seiffert questions its worth in understanding the evolution of primates. Not only was she a growing youngster, but most of her bones have been crushed or distorted in ways that obscure important body parts. Much was made of the fact that Ida lacked a toothcomb (a set of flattened, forward-facing incisors) and a grooming claw (a special ankle bone). These are two features that modern lemurs possess and modern anthropoids don’t – their absence in Darwinius was presented as evidence of a close tie to anthropoids but not lemurs. But Seiffert thinks that these body parts – the ankle and teeth – have been damaged enough that analysing them is difficult.
Afradapis, ironically, poses no such problems. While most of its skeleton has yet to be recovered, its teeth and jaws are in excellent condition. Like those of Darwinius and some other adapids, these teeth bear a suite of features typically found in living and extinct anthropoids. The joint between the two jawbones is fused and the part of the jaw containing the teeth is deep, as is the crater in the jawbone where the chewing muscles attach. The main cusp of its upper molars – the hypocone – is very large. It’s missing the second premolar, but the third has become bigger, with an edge that sharpens against its matching canine.
But this doesn’t mean that Afradapis is an ancestor, or even a close relative, of the anthropoids. For a start, the most primitive fossil anthropoids, such as Biretia and Proteopithecus, lack these traits. If adapids were their ancestors, the early anthropoids must have jettisoned these adaptations, only to re-evolve them at a later stage. The more plausible explanation, and certainly the one Seiffert subscribes to, is that both groups evolved independently, and happened to converge on the same adaptations.
The price of hype
The arrival of a paper like this was almost inevitable given the interest that Ida stirred up. Obviously, Seiffert’s analysis isn’t the final word on the subject (although his study looks more convincing to me) and I’m sure that there will be a healthy debate for days to come. But what of the public impact?
Jorn Hurum, one of the key ringleaders in the Ida circus, famously said, “Any pop band is doing the same. We have to start thinking the same way in science.” The key differences, of course, are that pop music is impossible to analyse objectively and its quality depends on personal taste. The same cannot be said of scientific truth, and that changes the extent to which you can use marketing tactics to promote a discovery.
Hurum and his colleagues have played a dangerous game – they may claim to have been marketing science but they were, in fact, marketing their opinions and ones that may not stand the test of time. It’s debate by media, and it’s fantastically dangerous.
Consider the fact that for all the interest that the new paper will undoubtedly instigate, there will still be a book, website and documentary out there firmly enshrining the increasingly dubious view that Ida is our direct ancestor. Consider also that contradicting that view now makes the scientific establishment look like buffoons, given all the publicity and to-do a few months back. When Jorn Hurum makes grandiose statements, he gains in the eyes of the public. When those statements are later shown to be dodgy, it’s science as a whole that takes a beating.
It’s also worth noting how the different publishers handled the two papers. This time, Nature made the paper available to reporters several days ahead of its publication, giving us time to analyse the paper, prepare our stories and, if necessary, contact experts for their views. The situation with the original Darwinius paper couldn’t have been more different.
As Mark Henderson notes, select journalists were allowed to see the paper at a specific location and under non-disclosure contracts that prevented them from seeking further opinions. PLoS ONE admitted to rushing the publication of the paper in time for Jorn Hurum’s press conference, and indeed, it became publicly available mere minutes before said conference kick-started a blitzkrieg of media attention. In rushing the publication of the paper, the journal allowed itself to be held hostage to hype and actively hindered science writers who were trying to do their job responsibly.
Reference: Nature doi:10.1038/nature08429
More on Ida: Darwinius changes everything
In the forests of South Africa lurks an arachnophobe’s nightmare – Nephila komaci, the largest web-spinning spider in the world. The females of this newly discovered species have bodies that are 3-4 centimetres in length (1.5 inches) and legs that are each around 7.5cm long (3 inches).
This new species is the largest of an already massive family. There are 15 species of Nephila – the golden orb weavers – and at least 10 of them have bodies that are over an inch long. Many spin webs that are over a metre in diameter.
The first of these giants was discovered by Linnaeus himself in 1767 and the most recent one was described 130 years ago in 1879. Thousands of specimens have been collected and grace the displays and drawers of the world’s natural history museums. But every attempt at finding a new Nephila species since 1879 (and there are more than 150 suggested scientific names on record) has been a dead end – the “new species” are always repeats of known ones.
All of that changed in 1978, when a new Nephila spider was collected at Sodwana Bay in South Africa. The unusual spider caught the attention of Matjaz Kuntner and Jonathan Coddington from the Smithsonian Institution. The duo launched several expeditions to capture the elusive spider but all of them failed. They were beginning to think that they had found a hybrid, or a species that had become extinct since its brief flirtation with discovery.
Then, their fortunes changed in 2003, when they found a second specimen in an Austrian museum, taken from Madagascar. This was no hybrid. A few years later, three more surfaced – a female and a male collected in Tembe Elephant Park in South Africa. N.komaci was far from extinct, and clearly a new golden orb-weaver species, the first to be described for over a century. The duo named it after Kuntner’s best friend, Andrej Komac, who died while these discoveries were made.
This is not the new species – it’s Nephila clavipes. Unfortunately, no photos of N.komaci were available. Look at the size difference between the female and the male though…
Like other Nephila spiders, N.komaci‘s females are the giant ones. The male, by comparison, has a body that is less than a centimetre long and has legs that are 4cm long, no bigger than a large house spider.
With measurements of this new species, Kuntner and Coddington reconstructed the evolutionary history of this family of eight-legged giants. By building a family tree of the golden orb-weavers and related families of spiders, the duo showed that the females became increasingly large as the group diverged and evolved. N.komaci is the epitome of that evolutionary enlargement and is 7 times larger than the group’s ancestor probably was.
While the giant females all cluster around one part of the family tree, the males show no such pattern in terms of their size. Species with large males aren’t any more closely related to each other than they are to species with small males. These patterns strongly suggest that male and female Nephila have massive size differences between them because the females grew big rather than because the males shrank.
This family has much to teach us about the evolution of size differences between the two sexes – a trend that is commonplace throughout the animal world. And yet, its discovery comes with a familiar warning. It may well be already endangered. With only five individuals ever seen, it’s hard to say, but the spider has only been found in two areas – part of South Africa and Madagascar – that are hotspots of endangered wildlife.
Reference: PLoS ONE 10.1371/journal.pone.0007516
A gallery of incredible spiders
This is the story of a Turkish boy, who became the first person to have a genetic disorder diagnosed by thoroughly sequencing his genome. He is known only through his medical case notes as GIT 264-1 but for the purposes of this tale, I’m going to call him Baby T.
At a mere five months of age, Baby T was brought to hospital dehydrated and in poor health. In some ways, this wasn’t surprising. His parents were blood relatives and they had suffered through two miscarriages and the death of one premature baby. Baby T himself was born prematurely at 30 weeks.
Baby T’s family history suggested that he was suffering from a genetic disorder, and his doctor’s best guess was Bartter syndrome. This rare but life-threatening disease is caused by mutations in genes that help to transport ions across the cells of the kidney. Without this ability, babies lose salt, potassium and water and they tend to urinate excessively. The resulting dehydration could kill them if they aren’t regularly topped up with fluids.
The symptoms certainly fit the bill, but his doctors weren’t sure. To play it safe, they took a sample of the boy’s blood and sent it thousands of miles away to a laboratory at Yale University for genetic testing. A thorough scan of the boy’s genome revealed the true cause of his illness – a different genetic disease called congenital chloride-losing diarrhoea. The condition is caused by a single faulty gene that leaves carriers unable to absorb important ions like chloride into the cells of their intestines. It’s a problem that causes foetuses to be delivered prematurely and infants to be severely dehydrated.
This is the first time that a disease has been diagnosed based on sequencing a person’s genome, and it marks a dramatic first outing for a new genetic technology called “whole exome sequencing”, developed by Murim Choi and other Yale researchers.
Rather than sequencing the entire human genome, the new technique shines its spotlight only on the small proportion that contains genes that code for proteins. This so-called “exome” represents just 1% of our full set of DNA, but their minority status belies their true importance. Around 85% of the genetic changes that strongly affect our risk of diseases are found within these sequences. Focusing on the exome could be an efficient strategy for finding new variants linked to diseases or diagnosing genetic disorders, not least because our current knowledge of sequences that don’t code for proteins is relatively limited.
When Choi’s team received Baby T’s blood sample, they analysed it using their new method. They found that he carried two copies of the same sequences across substantial swathes of his genome, which you’d expect given his closely related parents. The team reasoned that within these duplicated regions lay the mutation that was causing the baby’s condition. But how to hunt it down?
For a start, they looked for places within these regions where Baby T’s genome differed from the standard human genome by a single base pair (a DNA ‘letter’). They found around 1,500 of these. They then focused on letters that are the same whether you’re looking at the gene of a human or that of a fly. These are called “conserved” sequences and their persistence over the course of evolution shows that they can’t be tinkered with without disrupting something critical.
By this point, they had focused their search so tightly that one mutation stood out like a beacon – a single change in a gene with the snappy name of SLC26A3. The mutation in question alters the gene at a place that is exactly the same in the genomes of humans, cows, mice, chicken, frogs, flies and worms. Clearly, this is an important part of an important gene, and changing it wrecks the encoded protein. In humans, faulty copies of SLC26A3 cause congenital chloride-losing diarrhoea (CLD), a condition that fit with all of Baby T’s symptoms.
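The winnowing described above – from roughly 1,500 single-letter differences down to homozygous changes at deeply conserved positions – can be sketched as a hypothetical filter (the variant records and field names here are invented for illustration; this is not the Yale team’s actual pipeline):

```python
# Hypothetical sketch of the variant-prioritising logic (invented data).
variants = [
    {"gene": "SLC26A3", "homozygous": True,  "conserved": True},   # the culprit
    {"gene": "GENE_A",  "homozygous": True,  "conserved": False},  # tolerated change
    {"gene": "GENE_B",  "homozygous": False, "conserved": True},   # single copy only
]

def prioritise(variants):
    """Keep homozygous variants (expected with closely related parents)
    that alter positions conserved from humans down to flies."""
    return [v for v in variants if v["homozygous"] and v["conserved"]]

for v in prioritise(variants):
    print(v["gene"])  # only SLC26A3 survives the filter
```

Each filter is individually crude, but stacking them shrinks 1,500 candidates to a handful, which is why the SLC26A3 mutation stood out like a beacon.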
The Turkish doctors were quick to confirm the diagnosis suggested by the exome sequencing. Their initial diagnosis – Bartter syndrome – is a kidney disease, but CLD affects the intestines. Indeed, a follow-up with Baby T showed that his dehydration came from losing water not from his kidneys, but from his gut in the form of diarrhoea. Second time round, the right disease had been identified.
The initial mistake was understandable, for the two diseases present in a similar way. That became abundantly clear when Choi’s team screened 39 other patients with suspected Bartter syndrome who didn’t have any of the normal genetic markers for the disease. It turned out that five of these people had CLD instead, and all of them carried mutations in SLC26A3. One shared the same flaw that affected Baby T, while the others had different mutations that had never been seen before.
This story provides vivid evidence of the benefits of exome sequencing in both diagnosing genetic diseases and identifying the mutations that cause them. In focusing on just 1% of the genome, it’s probably more efficient, and Choi estimates that it’s 10-20 times cheaper than sequencing an entire genome. His team aren’t the only ones exploiting this technology. Just last month, a team of Seattle scientists found the gene behind a rare genetic disorder called Freeman-Sheldon syndrome by sequencing a dozen human exomes.
As Choi writes, “We can envision a future in which such information will become part of the routine clinical evaluation of patients with suspected genetic diseases in whom the diagnosis is uncertain.”
Reference: PNAS 10.1073/pnas.0910672106
This article is reposted from the old WordPress incarnation of Not Exactly Rocket Science.
Imagine you get a bad cold, but you decide to put on a brave face and go into work anyway. Instead of jokingly covering their mouths and making jibes about staying away from you, your colleagues act perfectly normally and some even start rubbing up against you. It’s a weird scenario, but not if you were an ant.
With their large colonies and intense co-operation, ants are some of the most successful animals on the planet. But like all social animals, their large group sizes make them vulnerable breeding grounds for parasites and infections. An infectious disease in a tightly knit colony spells trouble and it’s no surprise that social insects have evolved ways of stopping the spread of infections.
Some are sticklers for hygiene and meticulously clean their peers while others quarantine infected individuals in colony sick chambers. Some termites even warn their peers to stay away through head-banging. And bees kill off a heat-sensitive bacterium by gathering in an infected part of the colony and raising its temperature, effectively setting off a ‘colony fever’.
Now, scientists from the University of Copenhagen have found that some ants use a form of collective immunity, where infected individuals trigger resistance in those around them through contact.
Line Ugelvig and Sylvia Cremer looked at how ants deal with infections by setting up groups of garden ants (Lasius neglectus), each with five workers and three larvae housed in a separate brood chamber. They then introduced a sixth adult that was either healthy, dusted with live fungal spores, or dusted with inert spores whose DNA had been wrecked by ultraviolet radiation and that could no longer cause infection.
The infected newcomers spent about half as much time in the brood chamber as the non-infected ones, presumably to avoid spreading the spores to the young. The existing workers also noticed the presence of live spores and cared for the larvae more intensely than normal.
This change in behaviour didn’t depend on the actual health of the new ant, as all the parties concerned reacted before the spores had a chance to germinate. Amazingly, the ants seem to be able to sense the presence of spores, and they can even tell the difference between those that can infect and those that can’t.
Even though the ants can detect live spores, the five workers spent as much time in contact with the sixth ant in all situations, infected or not. That seems strange – surely it would benefit the adults to avoid infection just as it would the larvae?
To see if rubbing up against an infected peer had any benefit, Ugelvig and Cremer applied the fungus spores to all of the adults in the group after five days. They saw that ants with no previous experience with the spores were about 50-70% more likely to die than those which had made contact with an infected newcomer.
Ugelvig and Cremer believe that the ants are using a sort of social immunity, where an infected individual passes on a form of protection to its nestmates. Another study in 2005 showed that termites may also use the same trick. They were more likely to develop a strong immune response to a fungus if they were kept in groups than in solitary confinement.
Social immunity could work in two ways. The spore-ridden ant could pass the spores themselves onto others, triggering immune responses in the rest of the group, or it could directly pass on protective immune compounds (insects lack antibodies, but they do produce antimicrobial chemicals).
Whatever the case, it’s a powerful strategy for protecting the health of the colony. If a parasite enters the colony, contact-based immunity would rapidly create a ring of resistant individuals around the infected one, shielding the rest of the colony from the spread of disease.
Reference: Ugelvig & Cremer. 2007. Social prophylaxis: group interaction promotes collective immunity in ant colonies. Curr Biol 17: 1-5.
Our one and only sighting of the spotted hyena, an animal that is far more beautiful than its reputation might suggest. Hyenas are powerful predators too; as much if not more of their meat comes from their own kills as it does from scavenging.
I know this shot is blurry but I quite like it nonetheless.
A hyena mansion. In Sabi Sands, spotted hyenas make their homes in termite mounds, taking over and enlarging burrows and entrances previously created by aardvarks. They’re sturdy lodgings but not exactly luxurious ones – they are infested with parasites.
The placebo effect – the phenomenon where fake medicines sometimes work if a patient believes that they should – is a boon to quacks the world over. Why it happens is still a medical mystery but thanks to a new study, we have confirmation that the spine is involved.
Frank Eippert from the University Medical Center Hamburg-Eppendorf used a technique called functional magnetic resonance imaging (fMRI) to scan the backbones of volunteers as they experienced the placebo effect. Eippert heated the recruits’ forearms to the point of pain and he gave them cream to soothe the sting. The creams were all shams with no pain-relieving properties, but only half of the recruits were told this. The others were told that they’d been given lidocaine, an anaesthetic.
Sure enough, the volunteers who used the alleged “anaesthetic” felt about a quarter less pain than those who were aware that they were using an ordinary cream – the placebo effect in action. But Eippert also found that the activity of neurons in the spine (specifically an area near the back called the “dorsal horn”) was strongly reduced.
The dorsal horn is the gateway of pain. It controls the passage of pain impulses from our senses into our central nervous system. Eippert’s results provide direct evidence that our belief in the effectiveness of a fake medicine can close this gate, blocking pain signalling in this all-important area.
Of course, it’s still unclear how this happens, but the answer probably lies with opioids, natural pain-relieving chemicals used by the brain that have been linked to the placebo effect for over 30 years. These chemicals are probably responsible for closing the dorsal horn’s gate.
Eippert’s results build upon, and support, earlier research linking the spine to the placebo effect. Nonetheless, it’s very unlikely that this is the only explanation behind the mysterious placebo effect. For a start, people who suffer from fibromyalgia – a condition characterised by long-term pain all over the body – can still experience the placebo effect but in a way that doesn’t involve the spine.
Reference: Science 10.1126/science.1180142
In the 1990s, Colombia reintegrated five left-wing guerrilla groups back into mainstream society after decades of conflict. Education was a big priority – many of the guerrillas had spent their entire lives fighting and were more familiar with the grasp of a gun than a pencil. Reintegration offered them the chance to learn to read and write for the first time in their lives, but it also offered Manuel Carreiras a chance to study what happens in the human brain as we become literate.
Of course, millions of people – children – learn to read every year but this new skill arrives in the context of many others. Their brains grow quickly, they learn at a tremendous pace, and there’s generally so much going on that their developing brains are next to useless for understanding the changes wrought by literacy. Such a quest would be like looking for a snowflake on a glacier. Far better to study what happens when fully-grown adults, whose brains have gone past those hectic days of development, learn to read.
To that end, Carreiras scanned the brains of 42 adult ex-guerrillas, 20 of whom had just completed a literacy programme in Spanish. The other 22, who shared similar ages, backgrounds and mental abilities, had yet to start the course. The scans revealed a neural signature of literacy, changes in the brain that are exclusive to reading.
These changes affected both the white matter – the brain’s wiring system, consisting of the long arms of nerve cells – and the grey matter, consisting of the nerve cells’ central bodies. Compared to their illiterate peers, the newly literate guerrillas had more grey matter in five regions towards the back of their brains, such as their angular gyri. Some are thought to help us process the things we see, others help to recognise words and others process the sounds of language.
The late-literate group also had more white matter in the splenium. This part of the brain is frequently damaged in patients with alexia, who have excellent language skills marred only by a specific inability to read.
All of these areas are connected. Using a technique called diffusion tensor imaging that measures the connections between different parts of the brain, Carreiras showed that the grey matter areas on both sides of the brain (particularly the angular gyri and dorsal occipital gyri) are linked to one another via the splenium.
Learning to read involves strengthening these connections. Carreiras demonstrated this by comparing the brain activity of 20 literate adults as they either read the names of various objects or named the objects from pictures. The study showed that reading, compared to simple object-naming, involved stronger connections between the five grey matter areas identified in the former guerrillas, particularly the dorsal occipital gyri (DOCC, involved in processing images) and the supramarginal gyri (SMG, involved in processing sounds).
Meanwhile, the angular gyrus, which deals with the meanings of words, exerts a degree of executive control over the other areas. Learning to read also involves more cross-talk between the angular gyri on both sides of the brain, and Carreiras suggests that this crucial area helps us to discriminate between words that look similar (such as chain or chair), based on their context.
These changes are a neural signature of literacy. Carreiras’s evidence is particularly strong because he homed in on the same parts of the brain using three different brain-scanning techniques, and because he worked with people who learned to read as adults rather than as children.
The lessons from this study should be a boon to researchers working on dyslexia. Many other studies have shown that dyslexics have less grey matter in key regions at the back of their brains, and less white matter in the splenium connecting these areas. But the insights gained from the Colombians suggest that these deficits aren’t the cause of reading difficulties; they are a result of them.
Reference: Nature 10.1038/nature08461
Image: By Sgiraldoa
In the movie industry, special effects and computer-generated imagery are becoming better and more realistic. As they improve, you’d expect moviegoers to more readily accept virtual worlds and characters, but that’s not always the case. It turns out that people are incredibly put off by images or animations of humans that strive for realism, but aren’t quite there yet.
A character like Wall-E is lovable because he’s clearly not human but has many human-like qualities and expressions. In contrast, the more realistic CG characters of Beowulf or The Polar Express are far closer to reality but somehow less appealing because they haven’t quite crossed the finish line. This phenomenon is called the “uncanny valley” – it’s the point where, just before robots or animations look completely real, they trigger revulsion because they almost look human but are just off in subtle but important ways.
This isn’t just a quirk of humans – monkeys too fall into the uncanny valley. Shawn Steckenfinger and Asif Ghazanfar from Princeton University showed five macaques a set of real and CGI monkey faces, pulling a few different expressions. They found that all five macaques consistently reacted more negatively to realistic virtual monkey faces than to either real or completely unrealistic ones. They spent much less time looking at the realistic avatars than the other two options, particularly if the faces were moving rather than static.
Steckenfinger and Ghazanfar say that the simplest explanation is that the monkeys “are also experiencing at least some of the same emotions” as humans. But for the moment, we can’t say if the monkeys felt the queasy disgust that humans do when we fall into the valley, or whether they were simply more attracted to the other two options.
The best way to do that would be to repeat these experiments while looking for possible signs of unease – sweaty skin, dilated pupils or clenched facial muscles, for example. Steckenfinger and Ghazanfar also want to see if combining fake faces with real voices would put the macaques at greater or lesser ease.
For the moment, the duo hopes that macaques will be able to help us understand why the uncanny valley exists, or what goes on in the brains of people who feel its characteristic revulsion. For a start, it’s not about movement. The realistic virtual faces were only marginally less attractive to the monkeys when they moved compared to when they were static.
Steckenfinger and Ghazanfar’s favoured idea is that the part of our brains (and those of macaques) responsible for dealing with faces fires up when it sees realistic animations. But the features of these avatars fail to meet the expectations that we’ve built up through a lifetime of experience. Perhaps their skin colour is slightly off, making them look anaemic. Perhaps their skin is too smooth, or their features out of proportion. Whatever the clash, the idea is that the increased realism lowers our tolerance for these anomalies.
Reference: PNAS 10.1073/pnas.0910063106