This article is reposted from the old WordPress incarnation of Not Exactly Rocket Science.
Attention-deficit hyperactivity disorder is the most common developmental disorder in children, affecting between 3% and 5% of the world’s school-going population. As the name suggests, kids with ADHD are hyperactive and easily distracted; they are also forgetful and find it difficult to control their own impulses.
While some evidence has suggested that ADHD brains develop in fundamentally different ways to typical ones, other results have argued that they are just the result of a delay in the normal timetable for development.
Now, Philip Shaw, Judith Rapaport and others from the National Institute of Mental Health have found new evidence to support the second theory: ADHD results when some parts of the brain stick to their normal developmental timetable while others lag behind.
The idea isn’t new; earlier studies have found that children with ADHD have similar brain activity to slightly younger children without the condition. Rapaport’s own group had previously found that the brain’s four lobes developed in very much the same way, regardless of whether children had ADHD or not.
But looking at the size of entire lobes is a blunt measure that, at best, provides a rough overview. To get a sharper picture, they used magnetic resonance imaging to measure the brains of 447 children of different ages, often at more than one point in time.
At over 40,000 parts of the brain, they noted the thickness of the child’s cerebral cortex, the brain’s outer layer, where its most complex functions like memory, language and consciousness are thought to lie. Half of the children had ADHD and using these measurements, Shaw could work out how their cortex differed from typical children as they grew up.
Telling the difference between a German and a French speaker isn’t difficult. But you might be surprised to learn that you could have a good stab at distinguishing between German and French babies based on their cries. The bawls of French newborns tend to have a rising melody, with higher frequencies becoming more prominent as the cry progresses. German newborns tend to cry with a falling melody.
These differences are apparent just three days out of the womb. This suggests that they pick up elements of their parents’ language before they’re even born, and certainly before they start to babble themselves.
Birgit Mampe from the University of Würzburg analysed the cries of 30 French newborns and 30 German ones, all born to monolingual families. She found that the average German cry reaches its maximum pitch and intensity at around 0.45 seconds, while French cries do so later, at around 0.6 seconds.
These differences match the melodic qualities of each respective language. Many French words and phrases have a rising pitch towards the end, capped only by a falling pitch at the very end. German more often shows the opposite trend – a falling pitch towards the end of a word or phrase.
These differences in “melody contours” become apparent as soon as infants start making sounds of their own. While Mampe can’t rule out the possibility that the infants learned about the sounds of their native tongue in the few days following their birth, she thinks it’s more likely that they start tuning into their own language in the womb.
In some ways, this isn’t surprising. Features like melody, rhythm and intensity (collectively known as prosody) travel well across the wall of the stomach and they reach the womb with minimum disruption. We know that infants are very sensitive to prosodic features well before they start speaking themselves, which helps them learn their own mother tongue.
But this learning process starts as early as the third trimester. We know this because newborns prefer the sound of their mother’s voice compared to those of strangers. And when their mums speak to them in the saccharine “motherese”, they can suss out the emotional content of those words through analysing their melody.
Mampe’s data show that not only can infants sense the qualities of their native tongue, they can also imitate them in their first days of life. Previously, studies have found that babies can imitate the vowel sounds of adults only after 12 weeks of life, but clearly other features like pitch can be imitated much earlier. They’re helped by the fact that crying only requires them to coordinate their breathing and vocal cord movements, while making speech sounds requires far more complex feats of muscular gymnastics that are only possible after a few months.
Reference: Current Biology doi:10.1016/j.cub.2009.09.064
From a young age, children learn about the sounds that animals make. But even without teaching aids like Old Macdonald’s farm, it turns out that very young babies have an intuitive understanding of the noises that humans, and even monkeys, ought to make. Athena Vouloumanos from New York University found that at just five months of age, infants match human speech to human faces and monkey calls to monkey faces. Amazingly, this wasn’t a question of experience – the same infants failed to match quacks to duck faces, even though they had more experience with ducks than monkeys.
Vouloumanos worked with a dozen five-month-old infants from English- and French-speaking homes. She found that they spent longer looking at human faces when they were paired with spoken words than with monkey or duck calls. They clearly expect human faces, and not animal ones, to produce speech, even when the words in question came from a language – Japanese – that they were unfamiliar with. However, the fact that it was speech was essential; human laughter failed to grab their attention in the same way, and they didn’t show any biases towards either human or monkey faces.
More surprisingly, the babies also understood the types of calls that monkeys ought to make. They spent more time staring at monkey faces that were paired with monkey calls, than those paired with human words or with duck quacks.
That’s certainly unexpected. These babies had no experience with the sight or sounds of rhesus monkeys but they ‘got’ that monkey calls most likely come from monkey faces. Similarly, they appreciated that a human face is an unlikely source of a monkey call even though they could hardly have experienced every possible sound that the human mouth can make.
Perhaps they were just lumping all non-human calls and faces into one category? That can’t be true, for then they would have matched the monkey faces to either monkey or duck calls. Perhaps they matched monkeys to their calls because they ruled out a link to more familiar human or duck sounds? That’s unlikely too, for the infants failed to match duck faces to quacks!
Instead, Vouloumanos believes that babies have an innate ability to predict the types of noises that come from certain faces, and vice versa. Anatomy shapes the sound of a call into an audio signature that’s specific to each species. A human vocal tract can’t produce the same repertoire of noises as a monkey’s and vice versa. Monkeys can produce a wider range of frequencies than humans can, but thanks to innovations in the shape of our mouth and tongue, we’re better at subtly altering the sounds we make within our narrower range.
So the very shape of the face can provide clues about the noises likely to emerge from it, and previous studies have found that infants are very sensitive to these cues. This may also explain why they failed to match duck faces with their quacks – their visages are so vastly different from the basic primate design that they might not even be registered as faces, let alone as potential clues about sound.
If that’s not enough, Vouloumanos has a second possible explanation – perhaps babies use their knowledge of human sounds to set up a sort of “similarity gradient”. Simply put, monkey faces are sort of like human faces but noticeably different, so monkey calls should be sort of like human calls but noticeably different.
Either way, it’s clear that very young babies are remarkably sensitive to the sounds of their own species, particularly those of speech. The five month mark seems to be an important turning point, not just for this ability but for many others. By five months, they can already match faces with voices on the basis of age or emotion, but only after that does their ear for voices truly develop, allowing them to tune in to specific voices, or to the distinct sounds of their native language.
Reference: PNAS doi: 10.1073/pnas.0906049106
Domestic dogs are very different from their wolf ancestors in their bodies and their behaviour. They’re more docile for a start. But man’s best friend has also evolved a curious sensitivity to our communication signals – a mental ability that sets them apart from wolves and that parallels the behaviour of human infants. Dogs and infants are even prone to making the same mistakes of perception.
Like infants less than a year old, dogs fail at a seemingly easy exercise called the “object permanence task”. It goes like this: if you hide an object somewhere (say, a ball under a cup) and let the baby retrieve it a few times, they will continue to search for it there even if you hide it somewhere else (say behind the sofa) and even if you do so in front of their eyes. Piaget, the legendary psychologist who discovered this behaviour, thought that it reflected a wildly different way of seeing the world.
More recently, Jozsef Topal suggested that it’s the influence of the adult experimenter that’s the key. By repeatedly pointing at the ball in the first hiding place, the adult enshrines a generalised rule in the infant’s mind. And infants, being programmed to learn from communicative signals, come to believe the adult’s instructions over the evidence of their own eyes (some people apparently never grow out of this, but I digress). Topal demonstrated this by showing that infants were much better at the task if the experimenters avoided social cues like calling the child’s name or eye contact.
And the same is true for domestic dogs. Topal tested a dozen adult dogs with a version of the hidden-object challenge, concealing a toy behind one of two possible screens. If he called to the dogs by name, made eye contact and waved, the animals made the same errors that infants make on 75% of the trials. Without any of these signals, their scores improved, and they failed to find the ball’s new location on just 39% of the trials. Their error rate dropped even lower in completely non-social situations, where the ball was moved by pulling on a transparent string.
These results suggest that dogs and infants share a social mindset in which certain cues prepare them to learn from humans. It’s not the case that the gestures and facial signs were simply distracting, for that would have led the animals or infants to search both hiding places equally – instead, both preferred the place where the object was initially hidden.
Dogs, it seems, have a particular breed of social smarts even as inexperienced puppies, and some scientists have suggested that these skills are adaptations that have developed over the last 10,000 years to allow dogs to better interact with their two-legged partners.
As Eddie Izzard notes in the video above, the English, within our cosy, post-imperialist, monolingual culture, often have trouble coping with the idea of two languages or more jostling about for space in the same head. “No one can live at that speed!” he suggests. And yet, bilingual children seem to cope just fine. In fact, they pick up their dual tongues at the same pace as monolingual children attain theirs, despite having to cope with two sets of grammar and vocabulary. At around 12 months, both groups produce their first words and after another six months, they know around 50.
Italian psychologists Agnes Melinda Kovacs and Jacques Mehler have found that part of their skill lies in being more flexible learners than their monolingual peers. Their exposure to two languages at an early point in their lives trains them to extract patterns from multiple sources of information.
Kovacs and Mehler demonstrated that by sitting a group of year-old infants in front of a computer screen and playing them a three-syllable word. The infants could use the word’s structure to divine where a cuddly toy would appear on the screen – if the first and last syllables were the same (“lo-vu-lo”), it would show up on the right, but if the first and second syllables matched (“lo-lo-vu”), it appeared on the left. By watching where they were looking, the duo could tell if they were successfully predicting the toy’s position.
Success depended on learning two separate linguistic structures over the course of the experiment. The infants had to discern the difference between ‘AAB’ words and ‘ABA’ words and link them to one of the two possible toy locations. After 36 trials where they got to grips with the concept, Kovacs and Mehler tested the infants with eight different words.
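The rule the infants had to learn can be made concrete in a few lines of code. This is an illustrative sketch, not anything from the study itself – the function names and the hyphen-separated word format are my own inventions; only the AAB/ABA patterns and the left/right mapping come from the experiment as described above.

```python
def pattern(word):
    """Classify a hyphen-separated three-syllable word as 'AAB' or 'ABA'."""
    syllables = word.split("-")
    if len(syllables) != 3:
        return None
    first, second, third = syllables
    if first == second and second != third:
        return "AAB"   # e.g. "lo-lo-vu"
    if first == third and first != second:
        return "ABA"   # e.g. "lo-vu-lo"
    return None

# Mapping used in the experiment: matching first and last syllables -> right,
# matching first and second syllables -> left.
SIDE = {"ABA": "right", "AAB": "left"}

def predicted_side(word):
    """Where the toy should appear, given the word's structure."""
    return SIDE.get(pattern(word))

print(predicted_side("lo-vu-lo"))  # right
print(predicted_side("lo-lo-vu"))  # left
```

The infants, of course, had to induce both patterns and both mappings purely from examples – which is exactly the kind of dual rule-learning the bilingual babies proved better at.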
We all know them – supremely confident, arrogant people with inflated views of themselves. They strut and swagger, seemingly impervious to critical opinions, threats of failure or the glare of self-awareness. You may be able to tell that I don’t like such people very much, which is why new research from Sander Thomaes from Utrecht University makes me smirk.
Thomaes found that people with unrealistically inflated opinions of themselves, far from proving more resilient in the face of social rebuffs, actually suffer more because of it. Some psychologists hold that “positive illusions” provide a mental shield that buffers its bearers from the threats of rejection or criticism. But according to Thomaes, realistic self-awareness is a much healthier state of mind.
He studied a group of 206 children aged 9-12, a point in life when popularity and acceptance among your peers seems all-important. Every child rated how much they liked each one of their classmates on a scale from zero (not at all) to three (very much). They also predicted the rating that each classmate would give them. The two scores were only moderately related to one another (a correlation of 0.52), and the difference between them provided a measure of each child’s self-awareness. Kids with inflated egos had positive differences, while those with negative differences thought worse of themselves than their peers did.
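The difference measure is simple arithmetic: expected rating minus received rating. A minimal sketch with invented numbers (the ratings below are made up; only the 0-3 scale and the sign convention come from the study):

```python
# Hypothetical per-child averages on the study's 0-3 rating scale.
received  = [2.1, 1.0, 2.5, 1.8]   # mean rating classmates actually gave
predicted = [2.8, 1.2, 2.0, 1.8]   # mean rating the child expected to get

# Positive difference = inflated self-view; negative = overly modest;
# zero = accurate self-awareness.
ego_score = [round(p - r, 2) for p, r in zip(predicted, received)]
print(ego_score)  # [0.7, 0.2, -0.5, 0.0]
```

On this toy data, the first child overestimates their own popularity and the third underestimates it, while the last judges it accurately.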
Two weeks later, Thomaes brought back all the children for an experiment. They were told that they would be taking part in the Survivor Game – an online popularity contest where groups of four players had to complete a personal profile, and a panel of peers would vote out the person they liked the least. The game was a front – in reality, half of the children were randomly told that they were least liked and voted out, while the other half were simply told that this dishonour had befallen someone else.
Discriminating against people who do not speak your language is a big problem. A new study suggests that the preferences that lead to these problems are hard-wired at a very young age. Even five-month-old infants, who can’t speak themselves, have preferences for native speakers and native accents.
The human talent for language is one of our crowning evolutionary achievements, allowing us to easily and accurately communicate with our fellows. But as the Biblical story of the Tower of Babel relates, linguistic differences can serve to drive us apart and act as massive barriers between different social groups.
These barriers can give rise to linguistic discrimination, a far more insidious problem than it seems at first. Language-based prejudices have led to horrific acts of human abuse, and even civil wars. Genocide often finds itself paired with linguicide, since a race can be killed off more thoroughly if their language follows them.
Even today, people in a linguistic minority can find themselves denied access to healthcare, or at a disadvantage when looking for jobs. The issue cuts to the heart of several ongoing debates, from the role of second languages in education to whether immigrants must become fluent in the tongue of their host country.
It should therefore be unsurprising to learn that we have strong preferences for our own language and for those who speak it. But Katherine Kinzler and colleagues from Harvard University have found that we develop these preferences from an incredibly young age, before we can speak ourselves, and well before we can even hope to understand the social issues at stake.
The autism spectrum disorders (ASDs), including autism and its milder cousin Asperger syndrome, affect about 1 in 150 American children. There’s a lot of evidence that these conditions have a strong genetic basis. For example, identical twins who share the same DNA are much more likely to both develop similar autistic disorders than non-identical twins, who only share half their DNA.
But the hunt for mutations that predispose people to autism has been long and fraught. By looking at families with a history of ASDs, geneticists have catalogued hundreds of genetic variants that are linked to the conditions, each differing from the standard sequence by a single ‘letter’. But all of these are rare. Until now, no one has discovered a variant that affects the risk of autism and is common in the general population. And with autistic people being so different from one another, finding such mutations seemed increasingly unlikely. Some studies have come tantalisingly close, narrowing down the search to specific parts of certain chromosomes, but they’ve all stopped short of actually pinning down individual variants.
This week, American scientists from over a dozen institutes have overcome this final hurdle. By searching across the genomes of over 10,000 people, the team narrowed their search further and further until they found not one but six common genetic variants tied to ASDs. This sextet probably affects the activity of genes that connect nerve cells together in the developing human brain.
Learning a new language as an adult is no easy task but infants can readily learn two languages without obvious difficulties. Despite being faced with two different vocabularies and sets of grammar, babies pick up both languages at the same speeds as those who learn just one. Far from becoming confused, it seems that babies actually develop superior mental skills from being raised in a bilingual environment.
By testing 38 infants, each just seven months old, Agnes Melinda Kovacs and Jacques Mehler have found that those who are raised in bilingual households have better “executive functions“. This loose term includes a number of higher mental abilities that allow us to control more basic ones, like attention and motor skills, in order to achieve a goal. They help us to plan for the future, focus our attention, and block out instinctive behaviours that would get in the way. Think of them as a form of mental control.
The role of these abilities in learning multiple languages is obvious – they allow us to focus on one language, while preventing the other from interfering. Indeed, children and adults who learn to use two languages tend to develop better executive functions. Now, Kovacs and Mehler have found that even from a very young age, before they can actually speak, children develop stronger executive functions if they grow up to the sound of two mother tongues. They show a degree of mental control that most people their age would struggle to match.
Kovacs and Mehler worked with 14 babies who heard two languages from birth, and 14 who had experienced just one. The babies saw a computer screen with two white squares and heard a short, made-up word. After that, a puppet appeared in one of the squares. There were nine words in total, and each time the puppet appeared in the same place. As the test went on, all the babies started focusing on the correct square more frequently, showing that they had learned to anticipate the puppet’s appearance. That’s a simple task that doesn’t require much in the way of executive function.
The next nine trials used a different puppet that appeared in the other square. The infants’ job was to learn that the link between word and puppet had changed, but only the bilingual ones were good at this. Unlike their monolingual peers, they learned to switch their attention to the other square. To Kovacs and Mehler, this is a sign of superior mental control – they had to override what they had previously learned in order to pick up something new. The monolingual infants, however, behaved as babies their age usually do, sticking with responses that had previously paid off, even when the situation changed.
For all appearances, this looks like the skull of any human child. But there are two very special things about it. The first is that its owner was clearly deformed; its asymmetrical skull is a sign of a medical condition called craniosynostosis that’s associated with mental retardation. The second is that the skull is about half a million years old. It belonged to a child who lived in the Middle Pleistocene period.
The skull was uncovered in Atapuerca, Spain by Ana Gracia, who has named it Cranium 14. It’s a small specimen but it contains enough evidence to suggest that the deformity was present from birth and that the child was about 5-8 years old. The remains of 28 other humans have been recovered from the same site and none of them had any signs of deformity.
These facts strongly suggest that prehistoric humans cared for children with physical and mental deformities that would almost certainly have prevented them from caring for themselves. Without such assistance, it’s unlikely that the child would have survived that long.