Many human languages achieve great diversity by combining basic words into compound ones – German is a classic example of this. We’re not the only species that does this. Campbell’s monkeys have just six basic types of calls but they have combined them into one of the richest and most sophisticated of animal vocabularies.
By chaining calls together in ways that drastically alter their meaning, they can communicate to each other about falling trees, rival groups, harmless animals and potential threats. They can signal the presence of an unspecified threat, a leopard or an eagle, and even how imminent the danger is. It’s a front-runner for the most complex example of animal “proto-grammar” so far discovered.
Ever since Dorothy Cheney and Robert Seyfarth’s seminal research on vervet monkeys, many studies have shown that the chirps and shrieks of monkeys are rich in information. The pair showed that vervets have specific calls for different predators – eagles, leopards and snakes – and that the monkeys take specific evasive manoeuvres when they hear each alarm.
Campbell’s monkeys have been equally well-studied. Scientists used to think that they made two basic calls – booms and hacks – and that the latter were predator alarms. Others then discovered that the order of the calls matters, so adding a boom before a hack cancels out the predator message. It also turned out that there were five distinct types of hack, including some that were modified with an -oo suffix. So Campbell’s monkeys not only have a wider repertoire of calls than previously thought, but they can also combine them in meaningful ways.
Now, we know that the males make six different types of calls, phonetically described as boom (B), krak (K), krak-oo (K+), hok (H), hok-oo (H+) and wak-oo (W+). To decipher their meaning, Karim Ouattara spent 20 months in the Ivory Coast’s Tai National Park studying wild Campbell’s monkeys from six different groups. Each consists of a single adult male together with several females and youngsters. And it’s the males he focused on.
With no danger in sight, males make three call sequences. The first – a pair of booms – is made when the monkey is far away from the group and can’t see them. It’s a summons that draws the rest of the group towards him. Adding a krak-oo to the end of the boom pair changes its meaning. Rather than “Come here”, the signal now means “Watch out for that branch”. Whenever the males cried “Boom-boom-krak-oo”, other monkeys knew that there were falling trees or branches around (or fighting monkeys overhead that could easily lead to falling vegetation).
Interspersing the booms and krak-oos with some hok-oos changes the meaning yet again. This call means “Prepare for battle”, and it’s used when rival groups or strange males show up. In line with this translation, the hok-oo calls are used far more often towards the edge of the monkeys’ territories than they are in the centre. The most important thing about this is that hok-oo is essentially meaningless. The monkeys never say it in isolation – they only use it to change the meaning of another call.
But the most complex calls are reserved for threats. When males know that danger is afoot but don’t have a visual sighting (usually because they’ve heard a suspicious growl or an alarm from other monkeys), they make a few krak-oos.
If they know it’s a crowned eagle that endangers the group, they combine krak-oo and wak-oo calls. And if they can actually see the bird, they add hoks and hok-oos into the mix – these extra components tell other monkeys that the peril is real and very urgent. Leopard alarms were always composed of kraks, and sometimes krak-oos. Here, it’s the proportion of kraks that signals the imminence of danger – the males don’t make any if they’ve just heard leopard noises, but they krak away if they actually see the cat.
The most important part of these results is the fact that calls are ordered in very specific ways. So boom-boom-krak-oo means a falling branch, but boom-krak-oo-boom means nothing. Some sequences act as units that can be chained together into more complicated ones – just as humans use words, clauses and sentences. They can change meaning by adding meaningless calls onto meaningful ones (BBK+ for falling wood but BBK+H+ for neighbours) or by chaining meaningful sequences together (K+K+ means leopard but W+K+ means eagle).
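As a toy illustration of this order-sensitivity (this is my sketch, not anything from the paper – the English glosses are shorthand for the meanings described above), the attested sequences can be modelled as an ordered lookup, where reshuffling a meaningful sequence produces nothing:

```python
# Toy sketch (not from the paper): the call sequences described above as an
# ordered lookup. Tuples preserve order, so B-B-K+ and B-K+-B are distinct keys.
CALL_MEANINGS = {
    ("B", "B"): "come here",                      # boom pair: a summons
    ("B", "B", "K+"): "falling wood",             # boom-boom-krak-oo
    ("B", "B", "K+", "H+"): "rival group nearby", # hok-oo modifies the sequence
    ("K+", "K+"): "leopard",
    ("W+", "K+"): "eagle",
}

def interpret(calls):
    """Return the meaning of a call sequence, or None for unattested orders."""
    return CALL_MEANINGS.get(tuple(calls))
```

Here `interpret(("B", "B", "K+"))` gives "falling wood", while the reordered `interpret(("B", "K+", "B"))` gives None – mirroring how boom-krak-oo-boom means nothing to the monkeys.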
It’s tempting to think that monkeys have hidden linguistic depths to rival those of humans but as Ouattara says, “This system pales in contrast to the communicative power of grammar.” The monkeys’ repertoire may be rich, but it’s still relatively limited and they don’t take full advantage of their vocabulary. They can create new meanings by chaining calls together, but never by inverting their order (e.g. KB rather than BK). Our language is also symbolic. I can tell you about monkeys even though none are currently scampering about my living room, but Ouattara only found that Campbell’s monkeys “talk” about things that they actually see.
Nonetheless, you have to start somewhere, and the complexities of human syntax probably have their evolutionary origins in these sorts of call combinations. So far, the vocabulary of Campbell’s monkeys far outstrips those of other species, but this may simply reflect differences in research efforts. Other studies have started to find complex vocabularies in other forest-dwellers like Diana monkeys and putty-nosed monkeys. Ouattara thinks that forest life, with many predators and low visibility, may have provided strong evolutionary pressures for monkeys to develop particularly sophisticated vocal skills.
And there are probably hidden depths to the sequences of monkey calls that we haven’t even begun to peer into yet. For instance, what calls do female Campbell’s monkeys make? Even for the males, the meanings in this study only became apparent after months of intensive field work and detailed statistical analysis. The variations that happen on a call-by-call basis still remain a mystery to us. It would be like looking at Jane Austen’s oeuvre and concluding, “It appears that these sentences signify the presence of posh people”.
Reference: PNAS doi:10.1073/pnas.0908118106
Today, a new paper published in Nature adds another chapter to the story of FOXP2, a gene with important roles in speech and language. The FOXP2 story is a fascinating tale that I covered in New Scientist last year. It’s one of the pieces I’m proudest of so I’m reprinting it here with kind permission from Roger Highfield, and with edits incorporating new discoveries since the time of writing.
The FOXP2 Story (2009 edition)
Imagine an orchestra full of eager musicians which, thanks to an incompetent conductor, produces nothing more than an unrelieved cacophony. You’re starting to appreciate the problem faced by a British family known as KE. About half of its members have severe difficulties with language. They have trouble with grammar, writing and comprehension, but above all they find it hard to coordinate the complex sequences of face and mouth movements necessary for fluid speech.
Thanks to a single genetic mutation, the conductor cannot conduct, and the result is linguistic chaos. In 2001, geneticists looking for the root of the problem tracked it down to a mutation in a gene they named FOXP2. Normally, FOXP2 coordinates the expression of other genes, but in affected members of the KE family, it was broken.
It had long been suspected that language has some basis in genetics, but this was the first time that a specific gene had been implicated in a speech and language disorder. Overeager journalists quickly dubbed FOXP2 “the language gene” or the “grammar gene”. Noting that complex language is a characteristically human trait, some even speculated that FOXP2 might account for our unique position in the animal kingdom. Scientists were less gushing but equally excited – the discovery sparked a frenzy of research aiming to uncover the gene’s role.
Several years on, and it is clear that talk of a “language gene” was premature and simplistic. Nevertheless, FOXP2 tells an intriguing story. “When we were first looking for the gene, people were saying that it would be specific to humans since it was involved in language,” recalls Simon Fisher at the University of Oxford, who was part of the team that identified FOXP2 in the KE family. In fact, the gene evolved before the dinosaurs and is still found in many animals today: species from birds to bats to bees have their own versions, many of which are remarkably similar to ours. “It gives us a really important lesson,” says Fisher. “Speech and language didn’t just pop up out of nowhere. They’re built on very highly conserved and evolutionarily ancient pathways.”
Two amino acids, two hundred thousand years
The first team to compare FOXP2 in different species was led by Wolfgang Enard from the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany. In 2001, they looked at the protein that FOXP2 codes for, called FOXP2, and found that our version differs from those of chimpanzees, gorillas and rhesus macaques by two amino acids out of a total of 715, and from that of mice by three. This means that the human version of FOXP2 evolved recently and rapidly: only one amino acid changed in the 130 million years since the mouse lineage split from that of primates, but we have picked up two further differences since we diverged from chimps, and this seems to have happened only with the evolution of our own species at most 200,000 years ago.
The similarity between the human protein FOXP2 and that of other mammals puts it among the top 5 per cent of the most conserved of all our proteins. What’s more, different human populations show virtually no variation in their FOXP2 gene sequences. Last year, Enard’s colleague Svante Pääbo made the discovery that Neanderthals also had an identical gene, prompting questions over their linguistic abilities (see “Neanderthal echoes” below).
“People sometimes think that the mutated FOXP2 in the KE family is a throwback to the chimpanzee version, but that’s not the case,” says Fisher. The KEs have the characteristically human form of the gene. Their mutation affects a part of the FOXP2 protein that interacts with DNA, which explains why it has trouble orchestrating the activity of other genes.
There must have been some evolutionary advantage associated with the human form of FOXP2, otherwise the two mutations would not have spread so quickly and comprehensively through the population. What this advantage was, and how it may have related to the rise of language, is more difficult to say. Nevertheless, clues are starting to emerge as we get a better picture of what FOXP2 does – not just in humans but in other animals too.
During development, the gene is expressed in the lungs, oesophagus and heart, but what interests language researchers is its role in the brain. Here there is remarkable similarity across species: from humans to finches to crocodiles, FOXP2 is active in the same regions. With no shortage of animal models to work with, several teams have chosen songbirds due to the similarities between their songs and human language: both build complex sequences from basic components such as syllables and riffs, and both forms of vocalisation are learned through imitation and practice during critical windows of development.
All bird species have very similar versions of FOXP2. In the zebra finch, its protein is 98 per cent identical to ours, differing by just eight amino acids. It is particularly active in a part of the basal ganglia dubbed “area X”, which is involved in song learning. Constance Scharff at the Max Planck Institute for Molecular Genetics in Berlin, Germany, reported that finches’ levels of FOXP2 expression in area X are highest during early life, which is when most of their song learning takes place. In canaries, which learn songs throughout their lives, levels of the protein shoot up annually and peak during the late summer months, which happens to be when they remodel their songs.
So what would happen to a bird’s songs if levels of the FOXP2 protein in its area X were to plummet during a crucial learning window? Scharff found out by injecting young finches with a tailored piece of RNA that inhibited the expression of the FOXP2 gene. The birds had difficulties in developing new tunes and their songs became garbled: they contained the same component “syllables” as the tunes of their tutors, but with syllables rearranged, left out, repeated incorrectly or sung at the wrong pitch.
The cacophony produced by these finches bears uncanny similarities to the distorted speech of the afflicted KE family members, making it tempting to pigeonhole FOXP2 as a vocal learning gene – influencing the ability to learn new communication sounds by imitating others. But that is no more accurate than calling it a “language gene”. For a start, songbird FOXP2 has no characteristic differences from the gene in non-songbirds. What’s more, among other species that show vocal learning, such as whales, dolphins and elephants, there are no characteristic patterns of mutation in their FOXP2 that they all share.
Instead, consensus is emerging that FOXP2 probably plays a more fundamental role in the brain. Its presence in the basal ganglia and cerebellums of different animals provides a clue as to what that role might be. Both regions help to produce precise sequences of muscle movements. Not only that, they are also able to integrate information coming in from the senses with motor commands sent from other parts of the brain. Such basic sensory-motor coordination would be vital for both birdsong and human speech. So could this be the key to understanding FOXP2?
New work by Fisher and his colleagues supports this idea. In 2008, his team engineered mice to carry the same FOXP2 mutation that affects the KE family, rendering the protein useless. Mice with two copies of the dysfunctional FOXP2 had shortened lives, characterised by motor disorders, growth problems and small cerebellums. Mice with one normal copy of FOXP2 and one faulty copy (as is the case in the affected members of the KE family) seemed outwardly healthy and capable of vocalisation, but had subtle defects.
For example, they found it difficult to acquire new motor skills such as learning to run faster on a tilted running wheel. An examination of their brains revealed the problem. The synapses connecting neurons within the cerebellum, and those in a part of the basal ganglia called the striatum in particular, were severely flawed. The signals that crossed these synapses failed to develop the long-term changes that are crucial for memory and learning. The opposite happened when the team engineered mice to produce a version of FOXP2 with the two characteristically human mutations. Their basal ganglia had neurons with longer outgrowths (dendrites) that were better able to strengthen or weaken the connections between them.
A battery of over 300 physical and mental tests showed that the altered mice were generally healthy. While they couldn’t speak like their cartoon counterparts, their central nervous system developed in different ways, and they showed changes in parts of the brain where FOXP2 is usually expressed (switched on) in humans.
Their squeaks were also subtly transformed. When mouse babies are moved away from their nest, they make ultrasonic distress calls that are too high for us to hear, but that their mothers pick up loudly and clearly. The altered Foxp2 gene subtly changed the structure of these alarm calls. We won’t know what this means until we get a better understanding of the similarities between mouse calls and human speech.
For now, the two groups of engineered mice tentatively support the idea that human-specific changes to FOXP2 affect aspects of speech, and strongly support the idea that they affect aspects of learning. “This shows, for the first time, that the [human-specific] amino-acid changes do indeed have functional effects, and that they are particularly relevant to the brain,” explains Fisher. “FOXP2 may have some deeply conserved role in neural circuits involved in learning and producing complex patterns of movement.” He suspects that mutant versions of FOXP2 disrupt these circuits and cause different problems in different species.
Pääbo agrees. “Language defects may be where problems with motor coordination show up most clearly in humans, since articulation is the most complex set of movements we make in our daily life,” he says. These circuits could underpin the origins of human speech, creating a biological platform for the evolution of both vocal learning in animals and spoken language in humans.
Holy diversity, Batman
The link between FOXP2 and sensory-motor coordination is bolstered further by research in bats. Sequencing the gene in 13 species of bats, Shuyi Zhang and colleagues from the East China Normal University in Shanghai discovered that it shows incredible diversity. Why would bats have such variable forms of FOXP2 when it is normally so unwavering in other species?
Zhang suspects that the answer lies in echolocation. He notes that the different versions seem to correspond with different systems of sonar navigation used by the various bat species. Although other mammals that use echolocation, such as whales and dolphins, do not have special versions of FOXP2, he points out that since they emit their sonar through their foreheads, these navigation systems have fewer moving parts. Furthermore, they need far less sensory-motor coordination than flying bats, which vocalise their ultrasonic pulses and adjust their flight every few milliseconds, based on their interpretation of the echoes they receive.
These bats suggest that FOXP2 is no more specific to basic communication than it is to language, and findings from other species tell a similar tale. Nevertheless, the discovery that this is an ancient gene that has assumed a variety of roles does nothing to diminish the importance of its latest incarnation in humans.
Since its discovery, no other gene has been convincingly implicated in overt language disorders. FOXP2 remains our only solid lead into the genetics of language. “It’s a molecular window into those kinds of pathways – but just one of a whole range of different genes that might be involved,” says Fisher. “It’s a starting point for us, but it’s not the whole story.” He has already used FOXP2 to hunt down other key players in language.
The executive’s minions
FOXP2 is a transcription factor, which activates some genes while suppressing others. Identifying its targets, particularly in the human brain, is the next obvious step. Working with Daniel Geschwind at the University of California, Los Angeles, Fisher has been trying to do just that, and their preliminary results indicate just what a massive job lies ahead. On their first foray alone, the team looked at about 5000 different genes and found that FOXP2 potentially regulates hundreds of these.
Some of these target genes control brain development in embryos and its continuing function in adults. Some affect the structural pattern of the developing brain and the growth of neurons. Others are involved in chemical signalling and the long-term changes in neural connections that enable learning and adaptive behaviour. Some of the targets are of particular interest, including 47 genes that are expressed differently in human and chimpanzee brains, and a slightly overlapping set of 14 targets that have evolved particularly rapidly in humans.
Most intriguingly, Fisher says, “we have evidence that some FOXP2 targets are also implicated in language impairment.” Last year, Sonja Vernes in his group showed that FOXP2 switches off CNTNAP2, a gene involved in not one but two language disorders – specific language impairment (SLI) and autism. Both affect children, and both involve difficulties in picking up spoken language skills. The protein encoded by CNTNAP2 is deployed by nerve cells in the developing brain. It affects the connections between these cells and is particularly abundant in neural circuits that are involved in language.
Vernes’s discovery is a sign that the true promise of FOXP2’s discovery is being fulfilled – the gene itself has been overly hyped, but its true worth lies in opening a door for more research into genes involved in language. It was the valuable clue that threw the case wide open. CNTNAP2 may be the first language disorder culprit revealed through FOXP2 and it’s unlikely to be the last.
Most recently, Dan Geschwind compared the network of genes that are targeted by FOXP2 in both chimps and humans. He found that the two human-specific amino acids within this executive protein have radically altered the set of genetic minions that it controls.
The genes that are directed by human FOXP2 are a varied cast of players that influence the development of the head and face, parts of the brain involved in motor skills, the growth of cartilage and connective tissues, and the development of the nervous system. All those roles fit with the idea that our version of FOXP2 has been a lynchpin in evolving the neural circuits and physical structures that are important for speech and language.
The FOXP2 story is far from complete, and every new discovery raises fresh questions just as it answers old ones. Already, this gene has taught us important lessons about evolution and our place in the natural world. It shows that our much vaunted linguistic skills are more the result of genetic redeployment than out-and-out innovation. It seems that a quest to understand how we stand apart from other animals is instead leading to a deeper appreciation of what unites us.
Box – Neanderthal echoes
The unique human version of the FOXP2 gene gives us a surprising link with one extinct species. Last year, Svante Pääbo’s group at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, extracted DNA from the bones of two Neanderthals, one of the first instances of geneticists exploring ancient skeletons for specific genes. They found that Neanderthal FOXP2 carries the same two mutations as those carried by us – mutations accrued since our lineage split from chimps between 6 and 5 million years ago.
Pääbo admits that he “struggled” to interpret the finding: the Neanderthal DNA suggests that the modern human’s version of FOXP2 arose much earlier than previously thought. Comparisons of gene sequences of modern humans with other living species had put the origins of human FOXP2 between 200,000 and 100,000 years ago, which matches archaeological estimates for the emergence of spoken language. However, Neanderthals split with humans around 400,000 years ago, so the discovery that they share our version of FOXP2 pushes the date of its emergence back at least that far.
“We believe there were two things that happened in the evolution of human FOXP2,” says Pääbo. “The two amino acid changes – which happened before the Neanderthal-human split – and some other change which we don’t know about that caused the selective sweep more recently.” In other words, the characteristic mutations that we see in human FOXP2 may indeed be more ancient than expected, but the mutated gene only became widespread and uniform later in human history. While many have interpreted Pääbo’s findings as evidence that Neanderthals could talk, he is more cautious. “There’s no reason to assume that they weren’t capable of spoken language, but there must be many other genes involved in speech that we yet don’t know about in Neanderthals.”
Telling the difference between a German and a French speaker isn’t difficult. But you may be surprised to learn that you could have a good stab at distinguishing between German and French babies based on their cries. The bawls of French newborns tend to have a rising melody, with higher frequencies becoming more prominent as the cry progresses. German newborns tend to cry with a falling melody.
These differences are apparent just three days out of the womb. This suggests that they pick up elements of their parents’ language before they’re even born, and certainly before they start to babble themselves.
Birgit Mampe from the University of Wurzburg analysed the cries of 30 French newborns and 30 German ones, all born to monolingual families. She found that the average German cry reaches its maximum pitch and intensity at around 0.45 seconds, while French cries do so later, at around 0.6 seconds.
These differences match the melodic qualities of each respective language. Many French words and phrases have a rising pitch towards the end, capped only by a falling pitch at the very end. German more often shows the opposite trend – a falling pitch towards the end of a word or phrase.
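As a rough caricature of those averages (my sketch, not the study's method – the 0.5-second threshold is an invented midpoint between the reported 0.45 s and 0.6 s figures), the finding could be written as a rule of thumb:

```python
# Toy sketch: classify a newborn cry's melody by when its pitch/intensity
# peaks. The 0.5 s threshold is invented, sitting between the reported
# averages (German ~0.45 s, French ~0.6 s), purely for illustration.
def melody_contour(peak_time_s, threshold_s=0.5):
    """Early peak -> falling contour (German-like); late peak -> rising (French-like)."""
    return "falling" if peak_time_s < threshold_s else "rising"
```

Feeding in the two reported averages, `melody_contour(0.45)` returns "falling" and `melody_contour(0.6)` returns "rising".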
These differences in “melody contours” become apparent as soon as infants start making sounds of their own. While Mampe can’t rule out the possibility that the infants learned about the sounds of their native tongue in the few days following their birth, she thinks it’s more likely that they start tuning into their own language in the womb.
In some ways, this isn’t surprising. Features like melody, rhythm and intensity (collectively known as prosody) travel well across the abdominal wall and reach the womb with minimal disruption. We know that infants are very sensitive to prosodic features well before they start speaking themselves, which helps them learn their own mother tongue.
But this learning process starts as early as the third trimester. We know this because newborns prefer the sound of their mother’s voice compared to those of strangers. And when their mums speak to them in the saccharine “motherese”, they can suss out the emotional content of those words through analysing their melody.
Mampe’s data show that not only can infants sense the qualities of their native tongue, they can also imitate them in their first days of life. Previously, studies have found that babies can imitate the vowel sounds of adults only after 12 weeks of life, but clearly other features like pitch can be imitated much earlier. They’re helped by the fact that crying only requires them to coordinate their breathing and vocal cord movements, while making speech sounds requires far more complex feats of muscular gymnastics that are only possible after a few months.
Reference: Current Biology doi:10.1016/j.cub.2009.09.064
As Eddie Izzard notes in the video above, the English, within our cosy, post-imperialist, monolingual culture, often have trouble coping with the idea of two languages or more jostling about for space in the same head. “No one can live at that speed!” he suggests. And yet, bilingual children seem to cope just fine. In fact, they pick up their dual tongues at the same pace as monolingual children attain theirs, despite having to cope with two sets of grammar and vocabulary. At around 12 months, both groups produce their first words and after another six months, they know around 50.
Italian psychologists Agnes Melinda Kovacs and Jacques Mehler have found that part of their skill lies in being more flexible learners than their monolingual peers. Their exposure to two languages at an early point in their lives trains them to extract patterns from multiple sources of information.
Kovacs and Mehler demonstrated this by sitting a group of year-old infants in front of a computer screen and playing them three-syllable words. The infants could use each word’s structure to divine where a cuddly toy would appear on the screen – if the first and last syllables were the same (“lo-vu-lo”), it would show up on the right, but if the first and second syllables matched (“lo-lo-vu”), it appeared on the left. By watching where they were looking, the duo could tell if they were successfully predicting the toy’s position.
Success depended on learning two separate linguistic structures over the course of the experiment. The infants had to discern the difference between ‘AAB’ words and ‘ABA’ words and link them to one of the two possible toy locations. After 36 trials where they got to grips with the concept, Kovacs and Mehler tested the infants with eight different words.
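The rule the infants had to learn can be sketched as a simple function (my illustration only – the syllables and the function name are made up, but the mapping follows the experiment described above):

```python
# Toy sketch of the experiment's rule: AAB words (first two syllables match)
# signalled one toy location, ABA words (first and last match) the other.
def predicted_side(word):
    """Map a hyphenated three-syllable word to the toy's screen location."""
    a, b, c = word.split("-")
    if a == b:
        return "left"   # AAB, e.g. "lo-lo-vu"
    if a == c:
        return "right"  # ABA, e.g. "lo-vu-lo"
    return None         # neither pattern
```

So `predicted_side("lo-lo-vu")` gives "left" and `predicted_side("lo-vu-lo")` gives "right" – the two regularities the infants had to extract and keep separate.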
Talking with someone comes so naturally that we sometimes forget how skilful it is. The rhythms of conversation and cues of grammar need to be judged so that people can take their turns at talking without cutting off their partner or leaving pregnant pauses. The former is rude, the latter awkward.
That’s certainly how things are usually conducted in English, but a new study suggests that this pattern of turn-taking applies across human cultures. By studying 10 languages from all over the world, Tanya Stivers from the Max Planck Institute for Psycholinguistics discovered a universally consistent pattern of avoiding overlaps and minimising pauses.
There are small variations certainly, but they are far smaller than stereotypes might suggest. Anecdotes and academic literature alike often claim that different cultures have radically different preferences for the tempo of conversations, from the reputed long pauses of Scandinavian speakers to the almost “simultaneous speech” of Jewish New Yorkers. But until now, no one had analysed these potential differences across a broad spectrum of languages and cultures.
Stivers did so by collecting video recordings of conversations in ten different languages from five continents – from English to Korean, and from Tzeltal (a Mayan language spoken in Mexico) to Yélî Dnye (a language of just 4,000 speakers used in Papua New Guinea). In terms of grammar or sound, the tongues couldn’t be more different and their speakers vary from hunter-gatherers in Namibia to city-dwellers in Japan.
Discriminating against people who do not speak your language is a big problem. A new study suggests that the preferences that lead to these problems are hard-wired at a very young age. Even five-month-old infants, who can’t speak themselves, have preferences for native speakers and native accents.
The human talent for language is one of our crowning evolutionary achievements, allowing us to easily and accurately communicate with our fellows. But as the Biblical story of the Tower of Babel relates, linguistic differences can serve to drive us apart and act as massive barriers between different social groups.
These barriers can give rise to linguistic discrimination, a far more insidious problem than it seems at first. Language-based prejudices have led to horrific acts of human abuse, and even civil wars. Genocide often finds itself paired with linguicide, since a race can be killed off more thoroughly if their language dies with them.
Even today, people in a linguistic minority can find themselves denied access to healthcare, or at a disadvantage when looking for jobs. The issue cuts to the heart of several ongoing debates, from the role of second languages in education to whether immigrants must become fluent in the tongue of their host country.
It should therefore be unsurprising to learn that we have strong preferences for our own language and for those who speak it. But Katherine Kinzler and colleagues from Harvard University have found that we develop these preferences from an incredibly young age, before we can speak ourselves, and well before we can even hope to understand the social issues at stake.
When Walt Disney created Mickey Mouse in 1928, he understood the draw that anthropomorphic mice would have. But even Walt’s imagination might have struggled to foresee the events that have just taken place in a German genetics laboratory. There, a group of scientists led by Wolfgang Enard have “humanised” a gene in mice to study its potential relevance for human evolution.
The gene in question is the fascinating FOXP2, which I have written extensively about before, particularly in a feature for New Scientist. FOXP2 was initially identified as the gene behind an inherited disorder that affected language and grammar skills. Subsequently hailed as a “language gene”, it proved to be anything but. The gene, and its encoded protein, is incredibly conserved among animals, even among those without sophisticated communication skills. The chimp version differs from our own by just two amino acids; the mouse adds a single change on top of that.
The two amino acids that have cropped up since our split from chimps are unique to us and there’s plenty of evidence that they’re the result of intense natural selection. There has always been the tantalising possibility that these changes were crucial for the evolution of our speech and language skills but until now, no one really understood their purpose. No human has ever been found with mutations at these crucial positions. Obviously, genetically manipulating humans or chimps is out of the question, but the fact that the mouse version is so similar gave Enard a unique opportunity.
He tweaked the mouse Foxp2 so that it produced a protein with the two human-specific amino acids. The resulting mice couldn’t speak like their cartoon counterparts, but their calls were subtly altered, their central nervous system developed in different ways, and they showed changes in parts of the brain where FOXP2 is usually expressed (switched on) in humans. Simon Fisher, who first discovered the important role of FOXP2 and contributed to the study, says, “This shows, for the first time, that the [human-specific] amino-acid changes do indeed have functional effects, and that they are particularly relevant to the brain.”
Learning a new language as an adult is no easy task but infants can readily learn two languages without obvious difficulties. Despite being faced with two different vocabularies and sets of grammar, babies pick up both languages at the same speeds as those who learn just one. Far from becoming confused, it seems that babies actually develop superior mental skills from being raised in a bilingual environment.
By testing 38 infants, each just seven months old, Agnes Melinda Kovacs and Jacques Mehler have found that those who are raised in bilingual households have better “executive functions”. This loose term includes a number of higher mental abilities that allow us to control more basic ones, like attention and motor skills, in order to achieve a goal. They help us to plan for the future, focus our attention, and block out instinctive behaviours that would get in the way. Think of them as a form of mental control.
The role of these abilities in learning multiple languages is obvious – they allow us to focus on one language, while preventing the other from interfering. Indeed, children and adults who learn to use two languages tend to develop better executive functions. Now, Kovacs and Mehler have found that even from a very young age, before they can actually speak, children develop stronger executive functions if they grow up to the sound of two mother tongues. They show a degree of mental control that most people their age would struggle to match.
Kovacs and Mehler worked with 14 babies who heard two languages from birth, and 14 who had experienced just one. The babies saw a computer screen with two white squares and heard a short, made-up word. After that, a puppet appeared in one of the squares. There were nine words in total, and each time the puppet appeared in the same place. As the test went on, all the babies started focusing on the correct square more frequently, showing that they had learned to anticipate the puppet’s appearance. That’s a simple task that doesn’t require much in the way of executive function.
The next nine trials used a different puppet that appeared in the other square. The infants’ job was to learn that the link between word and puppet had changed, but only the bilingual ones were good at this. Unlike their monolingual peers, they learned to switch their attention to the other square. To Kovacs and Mehler, this is a sign of superior mental control – they had to override what they had previously learned in order to pick up something new. The monolingual infants, however, behaved as babies their age usually do – they stuck with responses that had previously paid off, even when the situation changed.
Most of us could easily distinguish between spoken English and French. But could you tell the difference between an English and a French speaker just by looking at the movements of their lips? It seems like a difficult task. But surprising new evidence suggests that babies can meet this challenge at just a few months of age.
Young infants can certainly tell the difference between the sounds of different languages. Whitney Weikum and colleagues from the University of British Columbia decided to test their powers of visual discrimination.
They showed 36 English babies silent video clips of bilingual French-English speakers reading out the same sentence in one of the two languages. When the babies had become accustomed to these, Weikum showed them different clips of the same speakers reading out new sentences, some in English and some in French.
When the languages of the new sentences matched those of the old ones, the infants didn’t react unusually. But when the language was switched, they spent more time looking at the monitors. This is a classic test for child psychologists and it means that the infants saw something that drew their attention. They noticed the language change.
Babies can speak volumes without saying a single word. They can wave good-bye, point at things to indicate an interest or shake their heads to mean “No”. These gestures may be very simple, but they are a sign of things to come. Year-old toddlers who use more gestures tend to have more expansive vocabularies several years later. And this link between early gesturing and future linguistic ability may partially explain why children from poorer families tend to have smaller vocabularies than those from richer ones.
Vocabulary size tallies strongly with a child’s academic success, so it’s striking that the lexical gap between rich and poor appears when children are still toddlers and can continue throughout their school life. What is it about a family’s socioeconomic status that so strongly affects their child’s linguistic fate at such an early age?
Obviously, spoken words are a factor. Affluent parents tend to spend more time talking to their kids and use more complicated sentences with a wider range of words. But Meredith Rowe and Susan Goldin-Meadow from the University of Chicago found that actions count too.
Children gesture before they learn to speak and previous studies have shown that even among children with similar spoken skills, those who gesture more frequently during early life tend to know more words later on. Rowe and Goldin-Meadow have shown that differences in gesturing can partly explain the social gradient in vocabulary size.