Every person thinks and acts a little differently than the other 7 billion on the planet. Scientists now say that variations in brain connections account for much of this individuality, and they’ve narrowed it down to a few specific regions of the brain. This might help us better understand the evolution of the human brain as well as its development in individuals.
Each human brain has a unique connectome—the network of neural pathways that ties all of its parts together—as distinctive as a fingerprint. To find out where these individual connectomes differ the most, researchers used an MRI scanning technique to take cross-sectional pictures of 23 people’s brains at rest.
Infants are known for their impressive ability to learn language, which most scientists say kicks in somewhere around the six-month mark. But a new study indicates that language recognition may begin even earlier, while the baby is still in the womb. Using a creative means of measurement, researchers found that babies could already recognize their mother tongue by the time they left their mothers’ bodies.
The researchers tested American and Swedish newborns between seven hours and three days old. Each baby was given a pacifier hooked up to a computer. When the baby sucked on the pacifier, it triggered the computer to produce a vowel sound—sometimes in English and sometimes in Swedish. The vowel sound was repeated until the baby stopped sucking. When the baby resumed sucking, a new vowel sound would start.
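The contingent-sucking procedure described above can be sketched as a simple event loop. This is a hypothetical illustration, not the researchers’ actual software: the function name, the event encoding, and the vowel labels are all assumptions made up for the example.

```python
import itertools

def run_session(events, vowels):
    """Simulate the pacifier experiment.

    events: sequence of "suck" / "pause" readings from the pacifier.
    vowels: sounds to cycle through (e.g. English and Swedish vowels).
    Returns the list of vowel sounds played, one per suck.
    """
    vowel_cycle = itertools.cycle(vowels)
    current = None  # vowel currently being repeated, or None during a pause
    played = []
    for ev in events:
        if ev == "suck":
            if current is None:           # sucking resumed -> start a new vowel
                current = next(vowel_cycle)
            played.append(current)        # vowel repeats while sucking continues
        else:                             # pause: the sound stops
            current = None
    return played

# Example: two sucking bursts separated by a pause trigger two different vowels.
print(run_session(["suck", "suck", "pause", "suck"],
                  ["English /i/", "Swedish /y/"]))
# -> ['English /i/', 'English /i/', 'Swedish /y/']
```

The key design point the sketch captures is contingency: the baby’s own behavior (sucking) controls which sound plays and for how long, which is what lets researchers read longer sucking as greater interest in a sound.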
The Chukchansi Indian tribe runs a 2,000-slot casino in California. The casino has proven so profitable that the tribe has gone beyond providing healthcare and stipends for its members to make a sizable, and somewhat surprising, donation: They’re giving $1 million to linguists at nearby California State University, Fresno, to study their language and, with the help of a few remaining native speakers, teach it to younger generations. The Chukchansi are one of many tribes, Norimitsu Onishi reports at the New York Times, spending casino earnings on efforts to pull their languages back from the brink of extinction:
A robot has learned a handful of simple words in the same general way that infants do: by listening to the speech, and feedback, of human adults.
Human teachers—who ran the gamut in terms of age, occupation, and experience with kids—worked with a humanoid, toddler-sized robot, describing the colors and shapes on a toy block, as seen in the video above and described in a new study in PLoS ONE. The robot babbled back, learning which combinations of sounds are correct based both on what it had heard and on how the human responded, much like babies do when learning to speak. Giving the robot a childlike form, the researchers suggest, let people interact with it more like they would an actual baby, helping it better model language learning than having people talk to a screen or a box.
It’s pretty cool that the robot could pick up words from human-like interactions. But it’s important to keep in mind that we can only build robots to imitate what it looks like when babies learn, because we don’t know exactly what’s going on in babies’ brains when they learn language—and we certainly don’t understand it well enough to build a program that would work just the same way.
[via Wired Science]
Who knew a paper on the history of words could have so many graphs? Enter “culturomics,” an emerging field that drops data-crunching into the laps of humanities professors. Armed with the scanned corpus of Google Books, researchers published the first culturomics paper in 2011, which examined the changing popularity of words over time. The paper hinted at all sorts of possibilities: tracking the evolution of irregular verbs, mapping a politician’s rise to fame, identifying censorship when a name suddenly drops in popularity, and so on.
A group of physicists has taken up culturomics with a new study that models the birth and death of words in three languages: Spanish, Hebrew, and English. Even as they crunch serious math, they keep an eye on history. Here are a few of their findings:
Them’s Fighting Words
Artist’s rendering of an Australopithecus afarensis
When archaeologists hear whispers of humanity’s past, it’s through the painstaking work of piecing together a story from artifacts and fossilized remains: The actual calls, grunts, and other sounds made by our evolutionary ancestors didn’t fossilize. But working backward from clues in ancient skeletons, Dutch researcher Bart de Boer has built plastic models of an early hominin’s vocal tract—and, by running air through the models, recreated the sounds our ancestors may have made millions of years ago.
What’s the News: Most of us need everyone to stop talking when we perform mental math. But for children trained to do math visually with a “mental abacus,” verbal disturbances roll off their backs, prompting psychologists to posit that unlike the rest of us, they aren’t routing their calculations through words.
What’s the News: While most people think of dyslexia as primarily a problem with reading, people with dyslexia seem to have trouble processing spoken language as well. A new study published last week in Science found that people with dyslexia have a harder time recognizing voices than other people do.
Screenshot of Civilization IV, a later version of the game that MIT’s computer played.
What’s the News: Many video gamers scoff at the idea of actually reading the instruction manual for a game. But a manual can not only teach you how to play a game, it can also give you the basics of language—that is, if you’re a machine-learning computer. Researchers at MIT’s Computer Science and Artificial Intelligence Lab have now designed a computer system that can learn the meaning of certain words by playing complex games like Civilization II and comparing on-screen information to the game’s instruction manual.