Archive for April, 2012

The Limits to Environmentalism

By Keith Kloor | April 27, 2012 11:58 am

By Keith Kloor, a freelance journalist whose stories have appeared in a range of publications, from Science to Smithsonian. Since 2004, he’s been an adjunct professor of journalism at New York University. This piece is a follow-up to a post on his blog, Collide-a-Scape.

 

In Sleeper, Woody Allen finds that socializing is different after the ’70s.
Environmentalism? Not so much.

If you were cryogenically frozen in the early 1970s, like Woody Allen was in Sleeper, and brought back to life today, you would obviously find much changed about the world.

Except environmentalism and its underlying precepts. That would be a familiar and quaint relic. You would wake up from your Rip Van Winkle period and everything around you would be different, except the green movement. It’s still anti-nuclear, anti-technology, anti-industrial civilization. It still talks in mushy metaphors from the Aquarius age, cooing over Mother Earth and the Balance of Nature. And most of all, environmentalists are still acting like Old Testament prophets, warning of a plague of environmental ills about to rain down on humanity.

For example, you may have heard that a bunch of scientists produced a landmark report that concludes the earth is destined for ecological collapse, unless global population and consumption rates are restrained. No, I’m not talking about the UK’s just-published Royal Society report, which, among other things, recommends that developed countries put a brake on economic growth. I’m talking about that other landmark report from 1972, the one that became a totem of the environmental movement.

I mention the 40-year-old Limits to Growth book in connection with the new Royal Society report not just to point up their Malthusian similarities (which Mark Lynas flags here), but also to demonstrate what a time warp the collective environmental mindset is stuck in. Even some British greens have recoiled in disgust at the outdated assumptions underlying the Royal Society’s report. Chris Goodall, author of Ten Technologies to Save the Planet, told the Guardian: “What an astonishingly weak, cliché ridden report this is…’Consumption’ to blame for all our problems? Growth is evil? A rich economy with technological advances is needed for radical decarbonisation. I do wish scientists would stop using their hatred of capitalism as an argument for cutting consumption.”

Goodall, it turns out, is exactly the kind of greenie (along with Lynas) I had in mind when I argued last week that only forward-thinking modernists could save environmentalism from being consigned to junkshop irrelevance. I juxtaposed today’s green modernist with the backward-thinking “green traditionalist,” who I said remained wedded to environmentalism’s doom and gloom narrative and resistant to the notion that economic growth was good for the planet. Modernists, I wrote, offered the more viable blueprint for sustainability:

Read More

CATEGORIZED UNDER: Environment, Technology, Top Posts

Does Brain Scanning Show Just the Tip of the Iceberg?

By Neuroskeptic | April 25, 2012 10:10 am

By Neuroskeptic, a neuroscientist who takes a skeptical look at his own field, and beyond. A different version of this post appeared on the Neuroskeptic blog.

 

Brain-scanning studies may be giving us a misleading picture of the brain, according to recently published findings from two teams of neuroscientists.

Both studies made use of a much larger set of data than is usual in neuroimaging studies. A typical scanning experiment might include around 20 people, each of whom performs a given task maybe a few dozen times. So when French neuroscientists Benjamin Thyreau and colleagues analysed the data from 1,326 people, they were able to increase the statistical power of their experiment by an order of magnitude. An American team led by Javier Gonzalez-Castillo, on the other hand, only had 3 people, but each one was scanned while performing the same task 500 times over.
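
To see where that order-of-magnitude gain comes from, recall that the standard error of a group-average effect shrinks with the square root of the sample size. Here is a minimal sketch; only the participant counts come from the studies above, the rest is generic statistics:

```python
import math

# Sensitivity gain from a larger sample: the standard error of a
# mean scales as 1/sqrt(n), so the gain is sqrt(n_big / n_small).
n_typical = 20     # participants in a typical fMRI experiment
n_large = 1326     # participants analysed by Thyreau and colleagues

gain = math.sqrt(n_large / n_typical)
print(f"standard error shrinks by a factor of ~{gain:.1f}")  # ~8.1

# Effects roughly 8x weaker become detectable at the same
# statistical threshold -- about an order of magnitude.
```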

In both cases, the researchers found that close to the whole of the brain “lit up”—that is, showed increased metabolic activity—when people were doing simple mental tasks, compared to just resting. In one case, it was seeing videos of people’s faces; in the other, it was deciding whether stimuli on the screen were letters or numbers. Both studies made use of functional magnetic resonance imaging (fMRI), which uses powerful magnetic fields to image the brain and detect the changes in blood oxygen caused by differences in the firing rate of the cells in different areas.

There have been many thousands of fMRI papers published since the technique was developed 20 years ago. The great majority of these have produced the familiar “blob” plots showing that different kinds of mental processes engage localized activity in particular parts of the brain. Thyreau and Gonzalez-Castillo, however, were able to detect effects too small to be noticed in such neuroimaging experiments, and found that rather than isolated blobs, large swathes of the brain were involved. This doesn’t mean that everywhere responded equally to the task: the signal was stronger in some areas of the brain than in others, but there were no clear-cut divisions between “active” and “inactive” areas.

While the new results don’t overturn the localization theory as such, they do show that it’s only part of the picture. The blobs are real enough, as they show us the areas where activation is strongest, but it’s misleading to think of these areas as the only places involved in a particular task. Other activations, smaller or less consistent but no less real, are hidden below the threshold of statistical significance, buried in the noise. fMRI experiments may just be showing us the tip of the iceberg of brain activity.
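
The iceberg effect is easy to reproduce in a toy simulation: give every “voxel” a real but mostly weak task response, and only the strongest responses survive a significance threshold at a typical sample size. The voxel counts and effect sizes below are invented for illustration, not taken from either study:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_voxels = 10_000
# Every voxel responds to the task, but only 2% respond strongly.
true_effect = np.where(rng.random(n_voxels) < 0.02, 1.0, 0.15)

def fraction_significant(n_subjects, alpha=0.001):
    # Each subject's measurement is the true effect plus noise.
    data = true_effect + rng.normal(0, 1, size=(n_subjects, n_voxels))
    _, p = stats.ttest_1samp(data, 0.0)
    return np.mean(p < alpha)

print(f"n=20:   {fraction_significant(20):.1%} of voxels pass")   # blobs only
print(f"n=1300: {fraction_significant(1300):.1%} of voxels pass") # almost all
```

With 20 subjects only the strong 2 percent show up as blobs; with over a thousand, the weak-but-real responses everywhere else cross the threshold too.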

Read More

CATEGORIZED UNDER: Mind & Brain, Top Posts

Steak of the Art: The Fatal Flaws of In Vitro Meat

By Guest Blogger | April 24, 2012 10:00 am


Christina Agapakis is a synthetic biologist and postdoctoral research fellow at UCLA who blogs about biology, engineering, biological engineering, and biologically inspired engineering at Oscillator.

When you factor in the fertilizer needed to grow animal feed and the sheer volume of methane expelled by cows (mostly, though not entirely, from their mouths), a carnivore driving a Prius can contribute more to global warming than a vegan in a Hummer. Given the environmental toll of factory farming, it’s easy to see why people get excited about the idea of meat grown in a lab, without fertilizer, feed corn, or burps.

In this vision of the future, our steaks are grown in vats rather than in cows, with layers of cow cells nurtured on complex machinery to create a cruelty-free, sustainable meat alternative. The technology involved is today used mainly to grow cells for pharmaceutical development, but that hasn’t stopped several groups from experimenting with “in vitro meat,” as it’s called, over the last decade. In fact, a team of tissue engineers led by professor Mark Post at Maastricht University in the Netherlands recently announced their goal to make the world’s first in vitro hamburger by October 2012. The price tag is expected to be €250,000 (over $330,000), but we’re assured that as the technology scales up to industrial levels over the next ten years, the cost will scale down to mass-market prices.

Whenever I hear about industrial scaling as a cure-all, my skeptic alarms start going off, because scaling is the deus ex machina of so many scientific proposals, often minimized by scientists (myself included) as simply an “engineering problem.” But when we’re talking about food and sustainability, that scaling is exactly what feeds a large and growing population. Scaling isn’t just an afterthought, it’s often the key factor that determines if a laboratory-proven technology becomes an environmentally and economically sustainable reality. Looking beyond the hype of “sustainable” and “cruelty-free” meat to the details of how cell culture works exposes just how difficult this scaling would be.
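
Some back-of-the-envelope arithmetic shows why the raw cell count is the easy part of the problem. Every figure here is a loudly hypothetical round number of my own (patty size, cell mass, starter population, doubling time), not anything from Post’s project:

```python
import math

burger_grams = 110        # assumed quarter-pound patty
cell_mass_grams = 1e-9    # assumed ~1 nanogram per muscle cell
starter_cells = 1e6       # assumed cells taken in an initial biopsy
doubling_hours = 24       # assumed (optimistic) doubling time

cells_needed = burger_grams / cell_mass_grams        # ~1.1e11 cells
doublings = math.log2(cells_needed / starter_cells)  # ~17 doublings
days = doublings * doubling_hours / 24
print(f"{cells_needed:.1e} cells = {doublings:.0f} doublings = ~{days:.0f} days")
```

On paper the exponential growth looks quick; the catch, as the rest of the post argues, is that every one of those doublings has to be fed, oxygenated, and kept sterile, and at industrial volume that is where the scaling gets hard.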

Read More

5 Ways to Turn a Liberal Into a Conservative (At Least Until the Hangover Sets In)

By Guest Blogger | April 20, 2012 8:24 am

By Chris Mooney, a science and political journalist, blogger, podcaster, and experienced trainer of scientists in the art of communication. He is the author of four books, including the just-released The Republican Brain: The Science of Why They Deny Science and Reality and the New York Times-bestselling The Republican War on Science. He blogs for Science Progress, a website of the Center for American Progress and Center for American Progress Action Fund, and is a host of the Point of Inquiry podcast.


One of the first questions that usually comes up when people ask me about my book The Republican Brain is: “How do you explain my Uncle Elmer, who grew up a hard core Democrat and was very active in the union, but now has a bumper sticker that reads ‘Don’t Tread on Me’?”

Okay: I’m making this question up, but it’s pretty close to reality. People constantly want to know how to explain political conversions—cases in which individuals have changed political outlooks, sometimes very dramatically, from left to right or right to left.

When I get the standard political conversion question, the one I ask in return may come as a surprise: “Are you talking about permanent political conversions, or temporary ones?”

You see, Uncle Elmer is less interesting to me—and in some ways, less interesting to the emerging science of political ideology—than the committed Democrat who became strongly supportive of George W. Bush right after 9/11, but switched back to hating him a few months later. What caused that to happen? Because it certainly doesn’t seem to have much to do with thinking carefully about the issues.

Indeed, the growing science of politics has uncovered a variety of interventions that can shift liberal people temporarily to the political right. And notably, none of them seem to have anything substantive to do with policy, or with the widely understood political differences between Democrats and Republicans.

Here is a list of five things that can make a liberal change his or her stripes:

Distraction. Several studies have shown that “cognitive load”—in other words, requiring people to do something that consumes most or all of their attention, like listening to a piece of music and noting how many tones come before each change in pitch—produces a conservative political shift.

Read More

CATEGORIZED UNDER: Mind & Brain, Top Posts

What If Music and Language Are Neither Instinct nor Invention?

By Mark Changizi | April 19, 2012 8:42 am

Mark Changizi is an evolutionary neurobiologist and director of human cognition at 2AI Labs. He is the author of The Brain from 25,000 Feet, The Vision Revolution, and his newest book, Harnessed: How Language and Music Mimicked Nature and Transformed Ape to Man.


Earlier this week there was a debate on the origins of music at the Atlantic between two well-known psychologists. Geoffrey Miller (author of The Mating Mind) thinks music is an instinct, one due to sexual selection. On the other side is Gary Marcus (author of Guitar Zero), who believes music is a cultural invention. Given my recent book on the issue, Harnessed, many have asked me where I fall on the question, Is music an instinct or an invention?

My answer is that music is neither instinct nor invention—or, from another perspective, music is both—and this debate provides an opportunity to remind ourselves that there is a third option for the origins of music, an option that I have argued may also underlie our writing and language capabilities.

What if music only has the illusion of instinct? Might there be processes that could lead to music that is exquisitely shaped for our brains, even though music wasn’t something we ever evolved by natural selection to process? Music in this case wouldn’t be merely an invention, one of the countless things we do that we’re not “supposed” to be doing and that we’re not particularly good at—like logic or rock-climbing. Instead, music would fit our brain like a glove, tightly interwoven amongst our instincts…yet not be an instinct itself.

There is such a process that can give the gleaming shine of instinct to capabilities we never evolved to possess. It’s cultural evolution.

Once humans were sufficiently smart and social that cultural evolution could pick up steam, a new blind watchmaker was let loose on the world, one that could muster designs worthy of natural selection, and in a fraction of the time. Cultural selection could shape our artifacts to co-opt our innate capabilities.

Cultural evolution is an old idea, but there has been a resurgence of interest in it thanks to researchers like Stanislas Dehaene and Laurent Cohen, who have studied how writing neuronally recycles parts of our visual object-recognition hardware (see Reading in the Brain). And in my research I have tried to get down to brass tacks on how culture manages to harness our brain hardware.

Read More

CATEGORIZED UNDER: Top Posts

The Triumph of Technodorkiness: Why We’re Gladly Turning Ourselves Into Yesterday’s Losers

By Guest Blogger | April 17, 2012 9:08 am

By David H. Freedman, a journalist who’s contributed to many magazines, including DISCOVER, where he writes the Impatient Futurist column. His latest book, Wrong: Why Experts Keep Failing Us—and How to Know When Not to Trust Them, came out in 2010. Find him on Twitter at @dhfreedman.

 

Computer glasses have arrived, or are about to. Google has released some advance information about its Project Glass, which essentially embeds smartphone-like capabilities, including a video display, into eyeglasses. A video put out by the company suggests we’ll be able to walk down the street—and, we can extrapolate, distractedly walk right into the street, or drive down the street—while watching and listening to video chats, catching up on social networks (including Google+, of course), and getting turn-by-turn directions (though you’ll be on your own in avoiding people, lampposts and buses, unless there’s a radar-equipped version in the works).

Toshiba developed a six-pound surround-sight bubble helmet. It didn’t take off.

The reviews have mostly been cautiously enthusiastic. But they seem to be glossing over what an astounding leap this is for technophiles. I don’t mean in the sense that this is an amazing new technology. I mean I’m surprised that we seem to be seriously discussing wearing computer glasses as if it weren’t the dorkiest thing in the world—a style and coolness and common-sense violation of galactic magnitude. Video glasses are the postmodern version of the propeller beanie cap. These things have been around for 30 years. You could buy them at Brookstone, or via in-flight shopping catalogs. As far as I could tell, pretty much no one was interested in plunking these things down on their nose. What happened?

More interesting, the apparent sudden willingness to consider wearing computers on our faces may be part of a larger trend. Consider computer tablets, 3D movies, and video phone calls—other consumer technologies that have been long talked about, long offered in various forms, and long soundly rejected—only to relatively recently and suddenly gain mass acceptance.

The obvious explanation for the current triumph of technologies that never seemed to catch on is that the technologies have simply improved enough, and dropped in price enough, to make them sufficiently appealing or useful to a large percentage of the population. But I don’t think that’s nearly a full-enough explanation. Yes, the iPad offers a number of major improvements over Microsoft Tablet PC products circa 2000—but not so much that it could account for the complete shunning of the latter and the total adoration of the former. Likewise, the polarized-glasses-based 3D movie experience of the 1990s, as seen in IMAX and Disney park theaters at the time, really was fairly comparable to what you see in state-of-the-art theaters today.

I think three things are going on:

Read More

CATEGORIZED UNDER: Technology, Top Posts

Identical Twins Usually Don’t Die From the Same Thing: The Lost Message About Genes & Disease

By Guest Blogger | April 16, 2012 1:06 pm

By Luke Jostins, a postgraduate student working on the genetic basis of complex autoimmune diseases. Jostins has a strong background in informatics and statistical genetics, and writes about genetic epidemiology and sequencing technology on his blog Genetic Inference. A different version of this post appeared on the group blog Genomes Unzipped.

 

One of the great hopes for genetic medicine is that we will be able to predict which people will develop certain diseases, and then focus preventative measures on those at risk. Scientists have long known that one of the wrinkles in this plan is that we will only rarely be able to say with certainty whether someone will develop a given disease based on their genetics—more often, we can only give an estimate of their disease risk.

This realization came mostly from twin studies, which look at the disease histories of identical and non-identical twins. Twin studies use established models of genetic risk among families and populations, along with the different levels of similarity of identical and non-identical twins, to estimate how much of disease risk comes from genetic factors and how much comes from environmental risk factors. (See this post for more details.) There are some complexities here, and the exact model used can change the results you get, but in general the overall message is the same: genetic risk prediction contains a lot of information, but not enough to give guaranteed predictions of who will and who won’t get certain diseases. This is not only true of genetics either: parallel studies of environmental risk factors usually reveal tendencies and probabilities, not guarantees.
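
For the curious, here is a minimal sketch of the classic calculation such models build on: Falconer’s formula, which turns the correlation gap between identical and non-identical twins into a rough heritability estimate. The correlations below are invented illustrative values, not results from any particular study:

```python
# Identical (MZ) twins share ~100% of segregating genes, fraternal
# (DZ) twins ~50%, so doubling the correlation gap estimates the
# fraction of variance due to additive genetics (Falconer's formula).
r_mz = 0.60   # hypothetical trait correlation, identical twins
r_dz = 0.35   # hypothetical trait correlation, fraternal twins

h2 = 2 * (r_mz - r_dz)   # heritability estimate: 0.50
c2 = r_mz - h2           # shared-environment share: 0.10
e2 = 1.0 - r_mz          # unique environment + chance: 0.40
print(f"genes ~{h2:.0%}, shared env ~{c2:.0%}, everything else ~{e2:.0%}")
```

Even a heritability of 50 percent leaves half the variance to environment and chance, which is why such estimates translate into risks rather than verdicts.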

This means that two people with exactly the same weight, height, sex, race, diet, childhood infection exposures, vaccination history, family history, and environmental toxin levels will usually not get the same disease, but they are far more likely to than two individuals who differ in all those respects. To take an extreme example, identical twins, despite sharing the same DNA, socioeconomic background, childhood environment, and (generally) placenta, usually do not die from the same thing—but they are far more likely to than two random individuals. This is a perfect analogy for how well (and badly) risk prediction can work: you will never have a better prediction than knowing the health outcomes of a genetic copy of you. The health outcomes of another version of you will be invaluable, and will help guide you, your doctor, and the health-care establishment, if they use this information properly. But it won’t let them know exactly what will happen to you, because identical twins usually do not die from the same thing.

There is no health destiny: There is always a strong random component in anything that happens to your body. This does not mean that none of these things are important; being aware of your disease risks is one of the most important things you can do for your own future health. But risk is not destiny. And this central fact has been well known to scientists for a while now.

This was the context in which a recent paper in Science Translational Medicine by Bert Vogelstein and colleagues was published, which also used twin study data to ask how well genetics could predict disease. The take-home message from the study (or at least the message that many media outlets have taken home) is that DNA does not perfectly determine which disease or diseases you may get in the future. The paper was generally pretty flawed: many geneticists expressed annoyance at the paper, and Erika Check Hayden carried out a thorough investigation into the paper for the Nature News blog. In short, the study used a non-standard and arbitrary model of genetic risk, and failed to properly model the twin data, handling neither the many environmental confounders nor the large degree of uncertainty associated with studies of twins.

Many geneticists were annoyed that the authors seemed to be unaware of the existing literature on the subject, and that they presented their approach and their results as if they were novel and controversial at a well-attended press conference at the American Association for Cancer Research annual meeting. However, what came as more of a shock was how surprised the media as a whole seemed to be at the results, with headlines such as “DNA Testing Not So Potent for Prevention” and “Your DNA blueprint may disappoint.” No reporter (other than Erika) even mentioned the information that we already had about the limits of genetic risk prediction. As Joe Pickrell pointed out on Twitter, we can’t really know whether this was genuine surprise or merely newspapers hyping the message to make it seem more like news, but having talked to a few journalists and members of the public, the surprise appears to be at least in part genuine. The gap between the public perception and the established consensus on genetic risk prediction seemed to us to be unexpected and worrying.

Read More

Cheap Soul Teleportation, Coming Soon to a Theater Near You?

By Mark Changizi | April 10, 2012 12:39 pm

Mark Changizi is an evolutionary neurobiologist and director of human cognition at 2AI Labs. He is the author of The Brain from 25,000 Feet, The Vision Revolution, and his newest book, Harnessed: How Language and Music Mimicked Nature and Transformed Ape to Man.

Also check out his related commentary on a promotional video for Project Glass, Google’s augmented-reality project.

 

Experience happens here—from my point of view. It could happen over there, or from a viewpoint of an objective nowhere. But instead it happens from the confines of my own body. In fact, it happens from my eyes (or from a viewpoint right between the eyes). That’s where I am. That’s consciousness central—my “soul.” In fact, a recent study by Christina Starmans at Yale showed that children and adults presume that this “soul” lies in the eyes (even when the eyes are positioned, in cartoon characters, in unusual spots like the chest).

The question I wish to raise here is whether we can teleport our soul, and, specifically, how best we might do it. I’ll suggest that we may be able to get near-complete soul teleportation into the movie (or video game) experience, and we can do so with some fairly simple upgrades to the 3D glasses we already wear in movies.

Consider for starters a simple sort of teleportation, the “rubber arm illusion.” If you place your arm under a table out of your view, and have a fake rubber arm on the table where your arm usually would be, an experimenter who strokes the rubber arm while simultaneously stroking your real arm on the same spot will trick your brain into believing that the rubber arm is your arm. Your arm—or your arm’s “soul”—has “teleported” from under the table and within your real body into a rubber arm sitting well outside of your body.

It’s the same basic trick to get the rest of the body to transport. If you wear a virtual-reality suit able to touch you in a variety of spots with actuators, you can be presented with a virtual, movie-like experience wherein you see your virtual body being touched while the bodysuit simultaneously touches your real body in those same spots. Pretty soon your entire body has teleported itself into the virtual body.
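
The whole trick rests on timing: the visual touch and the physical touch have to land as near-simultaneously as the rubber-arm experimenter’s two strokes. A hypothetical sketch of what the synchronization logic might look like, where the suit API and the latency budget are my inventions, not any real product’s:

```python
import time

MAX_LAG_S = 0.02  # assumed perceptual tolerance; the real limit varies

def on_virtual_touch(location, render_time, suit):
    """Fire the suit's actuator the instant a touch is drawn on screen."""
    suit.actuate(location)           # hypothetical bodysuit API call
    lag = time.monotonic() - render_time
    if lag > MAX_LAG_S:
        # Desynchronized touches break the ownership illusion,
        # just as asynchronous stroking breaks the rubber-arm trick.
        print(f"warning: haptic lag {lag * 1000:.0f} ms")
```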

And… Yawn, we all know this. We saw James Cameron’s Avatar, after all, which uses this as the premise.

My question here is not whether such self-teleportation is possible, but whether it may be possible to actually do this in theaters and video games. Soon.

Read More

CATEGORIZED UNDER: Mind & Brain, Technology, Top Posts

Chocolate & Red Meat Can Be Bad for Your Science: Why Many Nutrition Studies Are All Wrong

By Guest Blogger | April 5, 2012 10:07 am

By Gary Taubes, author of Nobel Dreams (1987), Bad Science (1993), Good Calories, Bad Calories (2007), and Why We Get Fat (2011). Taubes is a former staff member at DISCOVER. He has won the Science in Society Award of the National Association of Science Writers three times and was awarded an MIT Knight Science Journalism Fellowship for 1996-97. A modified version of this post appeared on Taubes’ blog.

 

The last couple of weeks have witnessed a slightly-greater-than-usual outbreak of extremely newsworthy nutrition stories that could be described as bad journalism feasting on bad science. The first was a report out of the Harvard School of Public Health that meat-eating apparently causes premature death and disease (here’s how the New York Times covered it), and the second out of UC San Diego suggesting that chocolate is a food we should all be eating to lose weight (the Times again).

Both of these studies were classic examples of what is known technically as observational epidemiology, a field of research I discussed at great length back in 2007 in a cover article for the New York Times Magazine. The article was called “Do We Really Know What Makes Us Healthy?” and I made the argument that this particular pursuit is closer to a pseudoscience than a real science.

As a case study, I used a collaboration of researchers from the Harvard School of Public Health, led by Walter Willett, who runs the Nurses’ Health Study. And I pointed out that every time these Harvard researchers had claimed that an association observed in their observational studies was a causal relationship—that food or drug X caused disease or health benefit Y—and that this supposed causal relationship had then been tested in an experiment, the experiment had failed to confirm the causal interpretation—i.e., the folks from Harvard got it wrong. Not most times, but every time.

Now it’s these very same Harvard researchers—Walter Willett and his colleagues—who have authored the article from two weeks ago claiming that red meat and processed meat consumption is deadly; that eating it regularly raises our risk of dying prematurely and contracting a host of chronic diseases. Zoe Harcombe has done a wonderful job dissecting the paper at her site. I want to talk about the bigger picture (in a less concise way).

This is an issue about science itself and the quality of research done in nutrition. Science is ultimately about establishing cause and effect. It’s not about guessing. You come up with a hypothesis—force X causes observation Y—and then you do your best to prove that it’s wrong. If you can’t, you tentatively accept the possibility that your hypothesis might be right. In the words of Karl Popper, a leading philosopher of science, “The method of science is the method of bold conjectures and ingenious and severe attempts to refute them.” The bold conjectures, the hypotheses, making the observations that lead to your conjectures… that’s the easy part. The ingenious and severe attempts to refute your conjectures are the hard part. Anyone can make a bold conjecture. (Here’s one: space aliens cause heart disease.) Testing hypotheses ingeniously and severely is the single most important part of doing science.

The problem with observational studies like the ones from Harvard and UCSD that gave us the bad news about meat and the good news about chocolate is that the researchers do little of this. The hard part of science is left out, and they skip straight to the endpoint, insisting that their causal interpretation of the association is the correct one and we should probably all change our diets accordingly.
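
A toy simulation makes the trap concrete: give a hidden confounder (say, general health-consciousness) an effect on both chocolate eating and weight, with no causal link between the two, and an observational correlation appears anyway. All variable names and effect sizes here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
health_conscious = rng.normal(0, 1, n)  # hidden confounder

# Observational world: the confounder drives both habit and outcome.
chocolate = 0.5 * health_conscious + rng.normal(0, 1, n)
weight = -0.5 * health_conscious + rng.normal(0, 1, n)
print("observed:  ", np.corrcoef(chocolate, weight)[0, 1])   # ~ -0.20

# Randomized world: chocolate is assigned by coin flip, so the
# confounder can no longer create a spurious association.
chocolate_rct = rng.normal(0, 1, n)
weight_rct = -0.5 * health_conscious + rng.normal(0, 1, n)
print("randomized:", np.corrcoef(chocolate_rct, weight_rct)[0, 1])  # ~ 0
```

In the observational data, chocolate eaters really are thinner, yet an experiment would correctly find no effect; that gap between association and causation is exactly the step these studies skip.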

Read More

CATEGORIZED UNDER: Health & Medicine, Top Posts

Santorum’s Slipping Tongue: What Do Speech Errors Really Reveal About Inner Thoughts?

By Julie Sedivy | April 2, 2012 2:13 pm

Julie Sedivy is the lead author of Sold on Language: How Advertisers Talk to You And What This Says About You. She contributes regularly to Psychology Today and Language Log. She is an adjunct professor at the University of Calgary, and can be found at juliesedivy.com and on Twitter at @soldonlanguage.

Last week, a verbal stumble by Republican candidate Rick Santorum led to a fresh batch of accusations that he harbors racist sentiments. Here is a video clip and transcript, from a speech delivered on March 27th, 2012, in Janesville, Wisconsin:

“We know, we know the candidate Barack Obama, what he was like. The anti-war government nig- uh, the uh America was a source for division around the world.”

Almost immediately, this video clip began to zip around the internet, with many people arguing that Santorum had caught himself in the middle of uttering a racial slur against Barack Obama, inadvertently revealing his true attitude. The presumption behind these arguments is that “Freudian slips” reflect a layer of thoughts and attitudes that sometimes slip past the mental guards of consciousness and bubble to the surface. That they’re the window to what someone was really thinking, despite his best efforts to conceal it.

But decades of research in psycholinguistics reveal that speech errors are rarely this incriminating. The vast majority of them come about simply because of the sheer mechanical complexity of the act of speaking. They’re less like Rorschach blot tests and more like mundane assembly-line mistakes that didn’t get caught by the mind’s inner quality control.

Speech errors occur because when it comes to talking, the mind cares much more about speed than it does about accuracy. We literally speak before we’re done thinking about what we’re going to say, and this is true not just for the more impetuous amongst us, but for all speakers, all of the time. Speech production really is like an assembly line, but an astoundingly frenzied one in which an incomplete set of blueprints is snatched out of the hands of the designers by workers eager to begin assembling the product before it’s fully sketched out.

Read More

CATEGORIZED UNDER: Mind & Brain, Top Posts