Julie Sedivy is the lead author of Sold on Language: How Advertisers Talk to You And What This Says About You. She contributes regularly to Psychology Today and Language Log. She is an adjunct professor at the University of Calgary, and can be found at juliesedivy.com and on Twitter at @soldonlanguage.
There’s been a good bit of discussion and hand-wringing lately over whether the American public is becoming more and more politically polarized and what this all means for the future of our democracy. You may have wrung your own hands over the issue. But even if you have, chances are you’re not losing sleep over the fact that Americans are very clearly becoming more polarized linguistically.
It may seem surprising, but in this age where geographic mobility and instant communication have increased our exposure to people outside of our neighborhoods or towns, American regional dialects are pulling further apart from each other, rather than moving closer together. And renowned linguist William Labov thinks there’s a connection between political and linguistic segregation.
Dialect regions as defined by the Atlas of North American English
In the final volume of his seminal book series Principles of Linguistic Change, Labov spends a great deal of time discussing a riveting linguistic change that’s occurring in the northern region of the U.S. clustering around the Great Lakes. This dialect region is called the Inland North, and runs from just west of Albany to Milwaukee, loops down to St. Louis, and traces a line to the south of Chicago, Toledo, and Cleveland.
Thirty-four million speakers in this region are in the midst of a modern-day rearrangement of their vowel system. Labov thinks it all started in the early 1800s, when the linguistic ancestors of this new dialect began to pronounce “a” in a distinct way: the pronunciation of “man” began to lean towards “mee-an”, at least some of the time. But it wasn’t until the 1960s that this sound change began to trigger a real domino effect.
By Deborah Blum, a Pulitzer Prize-winning science writer and professor of journalism at the University of Wisconsin-Madison since 1997. It originally appeared on the Knight Science Journalism Tracker.
What I’ve been trying to figure out, since the processed beef “pink slime” story broke this month, is this: Are we just reacting to what Benjamin Radford at Discovery News calls “the ick factor”? Or does pink slime (which the industry understandably prefers to call “lean finely textured beef”) actually pose a health risk? And does anything in the flurry of recent coverage help us sort that out?
It’s worth looking at the coverage that began in early March with a story in The Daily by David Knowles and an ABC News segment by Jim Avila. As noted in The Huffington Post by Michael Hill, in a matter of days the issue went “from simmer to boil.”
Let’s stipulate that some of this response derives from the very term “pink slime,” which tends to stimulate the “ugh” response. “Pink”—not so bad. But “slime”? Have you ever heard anyone use that word in a positive, how-attractive-your-slime-covered-dinner-is kind of way?
The term was reportedly coined by Gerald Zirnstein, the former USDA scientist who brought the process to the public’s attention. Zirnstein is not—surprise—a fan of the product. He also objected to a USDA decision allowing its use to be concealed from the American public and has made a point of calling it out. You’ll find him in the ABC News story reporting that some 70 percent of ground beef products in grocery stores contain pink slime.
So what is pink slime, or, um, finely textured beef?
It comes from a rather commercially clever use of scraps—fat and meat removed from standard meat cuts. These remnants are spun through a centrifuge to separate the beef bits from the fat. The rather soupy meat mixture is then squeezed through a thin tube and exposed to a puff of ammonia gas. The gas reacts with water in the meat to form a trace amount of ammonium hydroxide. This reduces acidity and kills (fairly reliably) any pathogenic bacteria lurking in the beef.
Charles Q. Choi is a science journalist who has also written for Scientific American, The New York Times, Wired, Science, and Nature. In his spare time, he has ventured to all seven continents.
The Fertile Crescent in the Near East was long known as “the cradle of civilization,” and at its heart lies Mesopotamia, home to the earliest known cities, such as Ur. Now satellite images are helping uncover the history of human settlements in this storied area between the Tigris and Euphrates rivers, the latest example of how two very modern technologies—sophisticated computing and images of Earth taken from space—are helping shed light on long-extinct species and the earliest complex human societies.
In a study published this week in PNAS, the fortuitously named Harvard archaeologist Jason Ur worked with Bjoern Menze at MIT to develop a computer algorithm that could detect types of soil known as anthrosols from satellite images. Anthrosols are created by long-term human activity, and are finer, lighter-colored and richer in organic material than surrounding soil. The algorithm was trained on the reflectance patterns of anthrosols at known sites, enabling the software to spot anthrosols at as-yet-unknown sites.
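The core idea—learn what known anthrosols look like spectrally, then label unknown pixels—can be illustrated with a toy classifier. This is a hypothetical sketch, not Ur and Menze’s actual algorithm: it uses made-up reflectance values in three bands and a simple nearest-centroid rule, whereas the published work uses far more sophisticated classification of real multispectral imagery.

```python
# Toy nearest-centroid classifier: label a satellite pixel "anthrosol" or
# "soil" by comparing its reflectance bands to class averages learned from
# labelled training pixels. All numbers here are invented for illustration.
from math import dist  # Euclidean distance (Python 3.8+)

def train(labelled_pixels):
    """Average each class's reflectance vectors into a centroid."""
    groups = {}
    for label, bands in labelled_pixels:
        groups.setdefault(label, []).append(bands)
    return {label: tuple(sum(vals) / len(vals) for vals in zip(*vectors))
            for label, vectors in groups.items()}

def classify(pixel, centroids):
    """Assign the pixel to the class with the nearest centroid."""
    return min(centroids, key=lambda label: dist(pixel, centroids[label]))

# Anthrosols are described as lighter-colored than surrounding soil,
# so the toy anthrosol pixels get higher reflectance values.
training = [
    ("anthrosol", (0.42, 0.48, 0.51)),
    ("anthrosol", (0.40, 0.46, 0.50)),
    ("soil",      (0.25, 0.30, 0.33)),
    ("soil",      (0.27, 0.31, 0.35)),
]
model = train(training)
print(classify((0.41, 0.47, 0.52), model))  # bright pixel -> "anthrosol"
print(classify((0.26, 0.30, 0.34), model))  # darker pixel -> "soil"
```

In the real study, each classified pixel also carries a probability, which is what the map below visualizes.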
This map shows Ur and Menze’s analysis of anthrosol probability for part of Mesopotamia.
Armed with this method to detect ancient human habitation from space, researchers analyzed a 23,000-square-kilometer area of northeastern Syria and mapped more than 14,000 sites spanning 8,000 years. To find out more about how the sites were used, Ur and Menze compared the satellite images with data on the elevation and volume of these sites previously gathered by the Space Shuttle. The ancient settlements the scientists analyzed were built atop the remains of their mostly mud-brick predecessors, so measuring the height and volume of sites could give an idea of the long-term attractiveness of each locale. Ur and Menze identified more than 9,500 elevated sites that cover 157 square kilometers and contain 700 million cubic meters of collapsed architecture and other settlement debris, more than 250 times the volume of concrete making up Hoover Dam.
“I could do this on the ground, but it would probably take me the rest of my life to survey an area this size,” Ur said. Indeed, field scientists who normally prospect for sites in an educated-guess, trial-and-error manner are increasingly leveraging satellite imagery to their advantage.
Mark Changizi is an evolutionary neurobiologist and director of human cognition at 2AI Labs. He is the author of The Brain from 25,000 Feet, The Vision Revolution, and his newest book, Harnessed: How Language and Music Mimicked Nature and Transformed Ape to Man.
What do ironing and hang-gliding have in common? Not much really, except that we weren’t designed to do either of them. And that goes for a million other modern-civilization things we regularly do but are not “supposed” to do. We’re fish out of water, living in radically unnatural environments and behaving ridiculously for a great ape. So, if one were interested in figuring out which things are fundamentally part of what it is to be human, then those million crazy things we do these days would not be on the list.
But what would be on the list?
At the top of the list of things we do that we’re supposed to be doing, and that are at the core of what it is to be human rather than some other sort of animal, are language and music. Language is the pinnacle of usefulness, and was key to our domination of the Earth (and the Moon). And music is arguably the pinnacle of the arts. Language and music are fantastically complex, and we’re brilliantly capable of absorbing them, even from a young age. That’s how we know we’re meant to be doing them, i.e., how we know we evolved brains for engaging in language and music.
But what if this gets language and music all wrong? What if we’re not, in fact, meant to have language and music? What if our endless yapping and music-filled hours each day are deeply unnatural behaviors for our species? (What if the parents in Footloose were right?!)
I believe that language and music are, indeed, not part of our core—that we never evolved by natural selection to engage in them. The reason we have such a head for language and music is not that we evolved for them, but, rather, that language and music evolved—culturally evolved over millennia—for us. Our brains aren’t shaped for these pinnacles of humankind. Rather, these pinnacles of humankind are shaped to be good for our brains.
By now you may have heard about Oxford Nanopore’s new whole-genome sequencing technology, which has the promise of taking the enterprise of sequencing an individual’s genome out of the basic science laboratory, and out to the consumer mass market. From what I gather, the hype is not just vaporware; it’s a foretaste of what’s to come. But at the end of the day, this particular device is not the important point in any case. Do you know which firm popularized television? Probably not. When technology goes mainstream, it ceases to be buzzworthy. Rather, it becomes seamlessly integrated into our lives and disappears into the fabric of our daily background humdrum. The banality of what was innovation is a testament to its success. We’re on the cusp of the age when genomics becomes banal, and cutting-edge science becomes everyday utility.
Granted, the short-term impact of mass personal genomics is still going to be exceedingly technical. Scientific genealogy nuts will purchase the latest software, and argue over the esoteric aspects of “coverage” (the redundancy of the sequence data, which correlates with accuracy) and the necessity of supplementing the genome with the epigenome. Physicians and other health professionals will add genomic information to the arsenal of their diagnostic toolkit, and an alphabet soup of new genome-related terms will wash over you as you visit a doctor’s office. Your genome is not you, but it certainly informs who you are. Your individual genome will become ever more important to your health care.
The phylogeny of Prozac yogurt.
Christina Agapakis is a synthetic biologist and postdoctoral research fellow at UCLA who blogs about biology, engineering, biological engineering, and biologically inspired engineering at Oscillator.
A few weeks ago, I saw a retweet that claimed “biohacking is easier than you think” with a link to a post on a blog accompanying a book called Massively Networked. The post included video of Tuur van Balen’s presentation at the NextNature power show a few months earlier. Van Balen is a designer whose work I’ve followed for a couple years now, and his most recent project imagines how synthetic biology might produce and deliver medicines in the future. He demonstrates—using homemade tools, equipment purchased on eBay, and online resources for finding and synthesizing DNA sequences—how someone could engineer a strain of bacteria to produce Prozac-laced yogurt. While he’s not actually making Prozac, his demonstration does show pretty accurately how someone could get DNA into a bacterium (without, of course, the frustrating months of troubleshooting that almost any experiment inevitably requires). I posted my own version of the story, writing that art projects like this can ask important questions about biological design.
The next day, my post was syndicated on the Huffington Post with a modified title that emphasized Prozac. Then a version appeared on Gizmodo, and it went on from there, spreading across the Internet. By the time its spread was complete, Van Balen, an artist interested in the implications of emerging biotechnologies, had mutated into a bioengineer at the forefront of synthetic biology research, creating Prozac yogurt in five days with just 860 base pairs of DNA. (If you were to actually make Prozac biologically, it would certainly take the action of many enzymes, each encoded by its own sequence of hundreds or thousands of base pairs.)
How did an art piece, a design fiction that asks us to think critically about the possibilities opened up by synthetic biology, provoke an unskeptical acceptance of what bioengineering has made possible? Perhaps I should have been clearer in my post, or perhaps it’s the fault of sensationalized click-bait headlines. But I think it may be that we’ve become so accustomed to the hype surrounding the science of genes and DNA, so used to hearing about groundbreaking genetics, from the “gene for dry ear wax” to the “gene for Alzheimer’s” to the “gene for [common human behavior]” that we don’t think twice when we hear about mixing bacteria with the “gene for Prozac” to create antidepressant yogurt.