Category: Learning

How acquiring The Knowledge changes the brains of London cab drivers

By Ed Yong | December 8, 2011 12:00 pm

London is not a good place for fans of right angles. People who like the methodical grid system of Manhattan will whimper and cry at the baffling knot of streets of England’s capital. In this bewildering network, it’s entirely possible to take two right turns and end up in the same place. Or in Narnia. Even with a map, some people manage to get lost. And yet, there are thousands of Londoners who have committed the city’s entire layout to memory – cab drivers.

Piloting London’s distinctive black cabs (taxis to everyone else) is no easy feat. To earn the privilege, drivers have to pass an intense intellectual ordeal, known charmingly as The Knowledge. Ever since 1865, they’ve had to memorise the location of every street within six miles of Charing Cross – all 25,000 of the capital’s arteries, veins and capillaries. They also need to know the locations of 20,000 landmarks – museums, police stations, theatres, clubs, and more – and 320 routes that connect everything up.

It can take two to four years to learn everything. To prove their skills, prospective drivers make “appearances” at the licensing office, where they have to recite the best route between any two points. The only map they can use is the one in their head. They even have to narrate the details of their journey, complete with passed landmarks, road names, junctions, turns and maybe even traffic lights. Only after successfully doing this, several times over, can they earn a cab driver’s licence.

Given how hard it is, it shouldn’t be surprising that The Knowledge changes the brains of those who acquire it. And for the last 11 years, Eleanor Maguire from University College London has been studying those changes.

Read More

Brain-training games get a D at brain-training tests

By Ed Yong | April 20, 2010 1:00 pm

You don’t have to look very far to find a multi-million pound industry supported by the scantiest of scientific evidence. Take “brain-training”, for example. This fledgling market purports to improve the brain’s abilities through the medium of number problems, Sudoku, anagrams and the like. The idea seems plausible and it has certainly made bestsellers out of games like Dr Kawashima’s Brain Training and Big Brain Academy. But a new study by Adrian Owen from Cambridge University casts doubt on the claims that these games can boost general mental abilities.

Owen recruited 11,430 volunteers through a popular science programme on the BBC called “Bang Goes the Theory”. He asked them to play several online games intended to improve an individual skill, be it reasoning, memory, planning, attention or spatial awareness. After six weeks, with each player training their brains on the games several times per week, Owen found that the games improved performance in the specific task, but not in any others.

That may seem like a victory but it’s a very shallow one. You would naturally expect people who repeatedly practice the same types of tests to eventually become whizzes at them. Indeed, previous studies have found that such improvements do happen. But becoming the Yoda of Sudoku doesn’t necessarily translate into better all-round mental agility and that’s exactly the sort of boost that the brain-training industry purports to provide. According to Owen’s research, it fails.

All of his recruits sat through a quartet of “benchmarking” tests to assess their overall mental skills before the experiment began. The recruits were then split into three groups who spent the next six weeks doing different brain-training tests on the BBC Lab UK website, for at least 10 minutes a day, three times a week. For any UK readers, the results of this study will be shown on BBC One tomorrow night (21 April) on Can You Train Your Brain?

Read More

Travels with dopamine – the chemical that affects how much pleasure we expect

By Ed Yong | November 12, 2009 12:00 pm

How would you fancy a holiday to Greece or Thailand? Would you like to buy an iPhone or a new pair of shoes? Would you be keen to accept that enticing job offer? Our lives are riddled with choices that force us to imagine our future state of mind. The decisions we make hinge upon this act of time travel and a new study suggests that our mental simulations of our future happiness are strongly affected by the chemical dopamine.

Dopamine is a neurotransmitter, a chemical that carries signals within the brain. Among its many duties is a crucial role in signalling the feelings of enjoyment we get out of life’s pleasures. We need it to learn which experiences are rewarding and to actively seek them out. And it seems that we also depend on it when we imagine the future.

Tali Sharot from University College London found that if volunteers had more dopamine in their brains as they thought about events in their future, they would imagine those events to be more gratifying. It’s the first direct evidence that dopamine influences how happy we expect ourselves to be.

When we learn about new experiences, neurons that secrete dopamine seem to record the difference between the rewards we expect and the ones we actually receive. In encoding the gap between hope and experience, these neurons help us to repeat rewarding actions.
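
To make that idea concrete, here is a hand-rolled sketch of prediction-error learning – not code from the study itself, and the rewards and learning rate below are invented purely for illustration. The “gap between hope and experience” becomes an error signal that gradually drags our expectations towards reality:

    # Illustrative reward prediction-error update (all values are made up).
    expected = 0.0        # how rewarding we currently expect an action to be
    learning_rate = 0.1   # how strongly each surprise nudges that expectation

    for received in [1.0, 1.0, 0.0, 1.0]:           # rewards actually experienced
        prediction_error = received - expected      # the gap between hope and experience
        expected += learning_rate * prediction_error
        print(f"reward={received:.1f}  error={prediction_error:+.2f}  expectation={expected:.2f}")

Actions that keep delivering positive surprises end up with ever-higher expectations, which is one way of capturing why these neurons help us repeat rewarding behaviour.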

This was clearly demonstrated in 2006, when Mathias Pessiglione showed that people’s ability to learn about rewards could be improved by giving them a drug called L-DOPA. It’s a precursor to dopamine, a sort of parent molecule that can increase the concentrations of its offspring. Pessiglione asked volunteers to learn links between different symbols and different financial rewards. He found that under the influence of L-DOPA, they were better at picking the symbols that earned them the most cash.

Pessiglione’s study was important, but his volunteers were forced to make a fairly artificial choice between two virtual symbols in a constrained lab setting. What happens in real life, when choices are complex and our decisions hinge on our ability to think about the future?

To answer that, Sharot recruited 61 volunteers and asked them to say how happy they’d feel if they visited one of 80 holiday destinations, from Greece to Thailand. All of the recruits were given a vitamin C supplement as a placebo and 40 minutes later, they had to imagine themselves on holiday at half of the possible locations. After this bout of fanciful daydreaming, they had to take another pill but this time, half of them were given L-DOPA instead of the placebo. Again, they had to imagine themselves in various holiday spots.

The next day, Sharot brought the volunteers back. By this time, they would have broken down all the L-DOPA in their system. She asked them to choose which of two destinations they’d like to go to, from the set that they had thought about the day before. Finally, they rated each destination again.

By the end of the experiments, the volunteers perceived their imaginary holidays to be more enjoyable if they had previously thought about the locations under the influence of L-DOPA (while vitamin C, as predicted, had no effect). The implication is clear: think about the future with more dopamine in the noggin and you’ll imagine yourself having a better time.

Critically, this wasn’t because they were feeling happier in the moment. All the recruits filled in questionnaires about their emotional state every time they took a pill and these revealed that the dopamine boost didn’t actually affect their present state of mind. All it did was change their predictions of their future state of mind. These happier predictions affected their choices too – more often than not, they chose to travel to destinations that they had envisioned through dopamine-tinted goggles.

How dopamine has its way is unclear. Sharot suggests that it could boost how much we want something when we imagine it. Its effects could also tie into its role in learning. When we imagine the future, this chemical strengthens the link between what we think about and any feelings of enjoyment we might gain from it. This model fits with the fact that some neurons in the striatum become more active the more pleasure we expect from an experience.

Either way, it’s clear that our knowledge of dopamine’s myriad roles is only just beginning to take shape. Broadening that knowledge is important for understanding our own behaviour, which, as Sharot says, “is largely driven by estimations of future pleasure and pain”.

Reference: Current Biology 10.1016/j.cub.2009.10.025

Read More

Guerrilla reading – what former revolutionaries tell us about the neuroscience of literacy

By Ed Yong | October 14, 2009 1:00 pm

In the 1990s, Colombia reintegrated five left-wing guerrilla groups back into mainstream society after decades of conflict. Education was a big priority – many of the guerrillas had spent their entire lives fighting and were more familiar with the grasp of a gun than a pencil. Reintegration offered them the chance to learn to read and write for the first time in their lives, but it also offered Manuel Carreiras a chance to study what happens in the human brain as we become literate.

Of course, millions of people – children – learn to read every year, but this new skill arrives in the context of many others. Their brains grow quickly, they learn at a tremendous pace, and there’s generally so much going on that their developing brains are next to useless for understanding the changes wrought by literacy. Such a quest would be like looking for a snowflake on a glacier. Far better to study what happens when fully grown adults, whose brains have gone past those hectic days of development, learn to read.

To that end, Carreiras scanned the brains of 42 adult ex-guerrillas, 20 of whom had just completed a literacy programme in Spanish. The other 22, who shared similar ages, backgrounds and mental abilities, had yet to start the course. The scans revealed a neural signature of literacy – changes in the brain that are exclusive to reading.

These changes affected both the white matter – the brain’s wiring system, consisting of the long arms of nerve cells – and the grey matter, consisting of the nerve cells’ central bodies. Compared to their illiterate peers, the newly literate guerrillas had more grey matter in five regions towards the back of their brains, such as their angular gyri. Some of these regions are thought to help us process the things we see, others help to recognise words and others process the sounds of language.

The late-literate group also had more white matter in the splenium. This part of the brain is frequently damaged in patients with alexia, who have excellent language skills marred only by a specific inability to read.

All of these areas are connected. Using a technique called diffusion tensor imaging that measures the connections between different parts of the brain, Carreiras showed that the grey matter areas on both sides of the brain (particularly the angular gyri and dorsal occipital gyri) are linked to one another via the splenium.

Learning to read involves strengthening these connections. Carreiras demonstrated this by comparing the brain activity of 20 literate adults as they either read the names of various objects or named the objects from pictures. The study showed that reading, compared to simple object-naming, involved stronger connections between the five grey matter areas identified in the former guerrillas, particularly the dorsal occipital gyri (DOCC, involved in processing images) and the supramarginal gyri (SMG, involved in processing sounds).

 Meanwhile, the angular gyrus, which deals with the meanings of words, exerts a degree of executive control over the other areas. Learning to read also involves more cross-talk between the angular gyri on both sides of the brain, and Carreiras suggests that this crucial area helps us to discriminate between words that look similar (such as chain or chair), based on their context.

These changes are a neural signature of literacy. Carreiras’s evidence is particularly strong because he homed in on the same part of the brain using three different types of brain-scanning techniques, and because he worked with people who learned to read as adults and as children.

The lessons from this study should be a boon to researchers working on dyslexia. Many other studies have shown that dyslexics have less grey matter in key regions at the back of their brain, and less white matter in the splenium connecting these areas. But the insights gained from the Colombians suggest that these deficits aren’t the cause of reading difficulties; they are a result of them.

Reference: Nature 10.1038/nature08461

Image: By Sgiraldoa

Read More

Doctors repress their responses to their patients' pain

By Ed Yong | September 23, 2009 10:00 am

This article is reposted from the old WordPress incarnation of Not Exactly Rocket Science. The blog is on holiday until the start of October, when I’ll return with fresh material.

Many patients would like their doctors to be more sensitive to their needs. That may be a reasonable request but at a neurological level, we should be glad of a certain amount of detachment.

Humans are programmed, quite literally, to feel each other’s pain. The neural circuit in our brains that registers pain also fires when we see someone else getting hurt; it’s why we automatically wince.

This empathy makes evolutionary sense – it teaches us to avoid potential dangers that our peers have helpfully pointed out to us. But it can be a liability for people like doctors, who see pain on a daily basis and are sometimes forced to inflict it in order to help their patients.

Clearly, not all doctors are wincing wrecks, so they must develop some means of keeping this automatic response at bay. That’s exactly what Yawei Cheng from Taipei City Hospital and Jean Decety from the University of Chicago found when they compared the brains of 14 acupuncturists with at least two years of experience to a control group of 14 people with none at all.

They scanned the participants’ brains while they watched videos of people being pricked by needles in their mouths, hands and feet, or being prodded with harmless cotton swabs. Sure enough, the two groups showed very different patterns of brain activity when they watched the needle videos, but not the cotton swab ones.

Read More

Your brain on Oprah and Saddam (and what that says about Halle Berry and your grandmother)

By Ed Yong | July 23, 2009 11:00 am

From the scientists who brought you the infamous ‘Halle Berry neuron’ and the ‘Jennifer Aniston neuron’ come the ‘Oprah Winfrey neuron’ and the ‘Saddam Hussein neuron’.

Four years ago, Rodrigo Quian Quiroga from Leicester University showed that single neurons in the brain react selectively to the faces of specific people, including celebrities like Halle Berry, Jennifer Aniston and Bill Clinton. Now, he’s back, describing single neurons that respond selectively to the concept of Saddam Hussein or Oprah Winfrey. This time, Quiroga has found that these neurons work across different senses, firing to images of Oprah or Saddam as well as their written and spoken names.

In one of his volunteers, Quiroga even found a neuron that selectively responded to photos of Quiroga himself! He had never met the volunteers before the study began, which shows that these representations form very quickly – within a day or so.

In his original experiments, Quiroga used electrodes to study the activity of individual neurons, in the brains of patients undergoing surgery for epilepsy. As the volunteers saw photos of celebrities, animals and other objects, some of their neurons seemed to be unusually selective. One responded to several different photos of Halle Berry (even when she was wearing a Catwoman mask), as well as a drawing of her, or her name in print. Other neurons responded in similarly specific ways to Jennifer Aniston or to landmarks like the Leaning Tower of Pisa.

The results were surprising, not least because they seemed to support the “grandmother cell theory”, a paradox proposed by the neuroscientist Jerry Lettvin. As Jake Young (now at Neurotopia) beautifully explains, Lettvin was trying to argue against oversimplifying the way the brain stores information. He illustrated the pitfalls of doing so with a hypothetical neuron – the grandmother cell – that represents your grandmother and is only active when you see her or think about her. His point was that if such cells existed, not only would the brain run out of neurons, but losing individual cells would be catastrophic (at least for your poor forgotten grandmother).

The grandmother cell concept was espoused by headlines like “One face, one neuron” from Scientific American, but these read too much into Quiroga’s work. It certainly seemed like one particular neuron was responding to the concept of Halle Berry. But there was nothing in Quiroga’s research to show that this cell was the only one to respond to Halle Berry, nor that Halle Berry was the only thing that activated the cell. As Jake Young wrote, “The purpose of the neuron is not to encode Halle Berry.”

Instead, our brains encode objects through patterns of activity, distributed over a group of neurons, which allows our large but finite set of brain cells to cope with significantly more concepts. The solution to Lettvin’s paradox is that the job of encoding specific objects falls not to single neurons, but to groups of them.
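
To see why a distributed code is so much roomier, here is a back-of-the-envelope comparison – the population size and pattern size below are invented for illustration, not figures from the research. Give each concept its own dedicated cell and a thousand neurons can store a thousand concepts; let concepts be patterns of just five co-active cells and the same thousand neurons offer trillions of distinct patterns:

    from math import comb

    neurons = 1000            # a deliberately tiny, made-up population
    cells_per_pattern = 5     # neurons firing together to represent one concept

    grandmother_capacity = neurons                            # one dedicated cell per concept
    distributed_capacity = comb(neurons, cells_per_pattern)   # distinct five-cell patterns

    print(grandmother_capacity)    # 1000 concepts
    print(distributed_capacity)    # about 8.25 trillion possible patterns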

Read More

Why information is its own reward – same neurons signal thirst for water, knowledge

By Ed Yong | July 15, 2009 11:00 am

To me, and I suspect many readers, the quest for information can be an intensely rewarding experience. Discovering a previously elusive fact or soaking up a finely crafted argument can be as pleasurable as eating a fine meal when hungry or quenching a thirst with drink. This isn’t just a fanciful analogy – a new study suggests that the same neurons that process the primitive physical rewards of food and water also signal the more abstract mental rewards of information.

Humans generally don’t like being held in suspense when a big prize is on the horizon. If we get wind of a raise or a new job, we like to get advance information about what’s in store. It turns out that monkeys feel the same way and like us, they find that information about a reward is rewarding in itself.

Ethan Bromberg-Martin and Okihide Hikosaka trained two thirsty rhesus monkeys to choose between two targets on a screen with a flick of their eyes; in return, they randomly received either a large drink or a small one after a few seconds. Their choice of target didn’t affect which drink they received, but it did affect whether they got prior information about the size of their reward. One target brought up another symbol that told them how much water they would get, while the other brought up a random symbol.

After a few days of training, the monkeys almost always looked at the target that would give them advance intel, even though it never actually affected how much water they were given. They wanted knowledge for its own sake. What’s more, even though the gap between picking a target and sipping some water was very small, the monkeys still wanted to know what was in store for them mere seconds later. To them, ignorance is far from bliss.

Read More

Babies' gestures partly explain link between wealth and vocabulary

By Ed Yong | February 17, 2009 8:38 am

Babies can speak volumes without saying a single word. They can wave good-bye, point at things to indicate an interest or shake their heads to mean “No”. These gestures may be very simple, but they are a sign of things to come. Year-old toddlers who use more gestures tend to have more expansive vocabularies several years later. And this link between early gesturing and future linguistic ability may partially explain why children from poorer families tend to have smaller vocabularies than those from richer ones.

Vocabulary size tallies strongly with a child’s academic success, so it’s striking that the lexical gap between rich and poor appears when children are still toddlers and can continue throughout their school life. What is it about a family’s socioeconomic status that so strongly affects their child’s linguistic fate at such an early age?

Obviously, spoken words are a factor. Affluent parents tend to spend more time talking to their kids and use more complicated sentences with a wider range of words. But Meredith Rowe and Susan Goldin-Meadow from the University of Chicago found that actions count too.

Children gesture before they learn to speak and previous studies have shown that even among children with similar spoken skills, those who gesture more frequently during early life tend to know more words later on. Rowe and Goldin-Meadow have shown that differences in gesturing can partly explain the social gradient in vocabulary size.

Read More

CATEGORIZED UNDER: Child development, Language, Learning

Teaching scientific knowledge doesn't improve scientific reasoning

By Ed Yong | January 30, 2009 8:30 am

On Tuesday, I wrote a short essay on the rightful place of science in our society. As part of it, I argued that scientific knowledge is distinct from the scientific method – the latter gives people the tools with which to acquire the former. I also briefly argued that modern science education (at least in the UK) focuses too much on the knowledge and too little on the method. It is so fixated on checklists of facts that it fails to instil the inquisitiveness, scepticism, critical thinking and respect for evidence that good science entails. Simply inhaling pieces of information won’t get the job done.

This assertion is beautifully supported by a simple new study that compared the performance of physics students in the USA and China. It was led by Lei Bao from Ohio State University who wanted to see if a student’s scientific reasoning skills were affected by their degree of scientific knowledge. Does filling young heads with facts and figures lead to a matching growth in their critical faculties?

Fortunately for Bao and his team of international researchers, a ready-made natural experiment had already been set up for them, in the education systems of China and the US. The two countries have very different science curricula, leading to different levels of knowledge, but neither one explicitly teaches scientific reasoning in its schools. If greater knowledge leads to sharper reasoning, students from one country should have the edge in both areas. But that wasn’t the case.

Read More

CATEGORIZED UNDER: Education, Learning

Faulty connections responsible for inherited face-blindness

By Ed Yong | November 24, 2008 8:30 am

Have you ever seen someone that you’re sure you recognise but whose face you just can’t seem to place? It’s a common enough occurrence, but for some people, problems with recognising faces are a part of their daily lives. They have a condition called prosopagnosia, or face blindness, which makes them incredibly bad at recognising faces, despite their normal eyesight, memory, intelligence, and ability to recognise other objects.

Prosopagnosia can be caused by accidents that damage parts of the brain like the fusiform gyrus – the core part of a broad network of regions involved in processing images of faces. That seems straightforward enough, but some people are born with the condition and their background is very different. They lack any obvious brain damage and indeed, brain-scanning studies have found that the core face-processing areas of their brains are of normal size and show normal activity.

But these studies were looking in the wrong place – the core regions aren’t the problem, it’s the connections between them that are faulty. Different parts of the brain are connected by tracts of ‘white matter‘ – bundles of nerve cell stalks that transmit messages between distant regions. They are the equivalent of cables that link a network of computers together and in people born with prosopagnosia, these neural cables are shredded or missing, even though the individual machines work just fine.

Read More
