Archive for December, 2011

Information Wants to Be Free. What About Killer Information?

By Malcolm MacIver | December 27, 2011 1:52 pm

Malcolm MacIver is a bioengineer at Northwestern University who studies the neural and biomechanical basis of animal intelligence. He also consults for sci-fi films (e.g., Tron: Legacy) and was the science advisor for the TV show Caprica.

A few years ago, the world was aflame with fears about the virulent H5N1 avian flu, which infected several hundred people around the world and killed about 300 of them. The virus never acquired the ability to move between people, so it never became the pandemic we feared it might be. But recently virologists have discovered a way to mutate the bird flu virus that makes it more easily transmitted. The results were about to be published in Science and Nature when the U.S. government requested that the scientists and the journals withhold details of the method used to make the virus. The journals have agreed to this request. Because the information being withheld is useful to many other scientists, access to the redacted paragraphs will be provided to researchers who pass a vetting process currently being established.

As a scientist, I find that the idea of having any scientific work withheld does not sit well. But then, I work mostly on “basic science,” which is science-speak for “unlikely to matter to anyone in the foreseeable future.” In one area of work, though, my lab is developing new propulsion techniques for high-agility underwater robots and sensors that use weak electric fields to “see” in complete darkness or muddy water. This work, like a lot of engineering research, has the potential to be used in machines that harm people. I reassure myself of the morality of my efforts by the length of the chain of causation from my lab to such a device, which doesn’t seem much shorter than the chain for colleagues making better steels or more powerful engines. But having ruminated about my possible involvement with an Empire of Dark Knowledge, here’s my two cents about how to balance the right of free speech and academic freedom with dangerous consequences.

Consider the following thought experiment: suppose there really is a Big Red Button to launch the nukes, one in the U.S., and one in Russia, each currently restricted to their respective heads of government. Launching the nukes will surely result in the devastation of humanity. I’m running for president, and as part of my techno-libertarian ideology, I believe that “technology wants to be free” and I decide to put my money where my slogan is by providing every household in the U.S. with their very own Big Red Button (any resemblance to a real presidential candidate is purely accidental).

If you think this is a good idea, the rest of this post is unlikely to be of interest. But, if you agree that this is an extraordinarily bad idea, then let’s continue.

Read More

Making Sense of CERN’s Higgs Circus

By Amir Aczel | December 21, 2011 4:29 pm

Amir D. Aczel has been closely associated with CERN and particle physics for a number of years and often consults on statistical issues relating to physics. He is also the author of 18 popular books on mathematics and science.

By now you’ve heard the news-non-news about the Higgs: there are hints of a Higgs—even “strong hints”—but no cigar (and no Nobel Prizes) yet. So what is the story about the missing particle that everyone is so anxiously waiting for?

Back in the summer, there was a particle physics conference in Mumbai, India, at which results of the search for the Higgs in the high-energy part of the spectrum, from 145 GeV (giga-electron-volts) to 466 GeV, were reported, and nothing was found. At the low end of the energy spectrum, at around 120 GeV (a region of energy that attracted less attention because it had been well within the reach of Fermilab’s now-defunct Tevatron accelerator), there was a slight “bump” in the data, barely breaching the two-sigma (two standard deviations) bounds—something that happens by chance alone about once in twenty times (two-sigma bounds go with 95% probability, so a one-in-twenty event is allowable as a fluke in the data). But since the summer, the data have doubled: twice as many collision events have now been recorded as had been by the time of the Mumbai conference. And, lo and behold: the bump still remained!

This suggested to the CERN physicists that the original bump was perhaps not a one-in-twenty fluke after all, but something far more significant. Two additional factors came into play as well: the new anomaly in the data at roughly 120 GeV was found by both competing experiments at CERN, the CMS detector and the ATLAS detector; and—equally important—when the range of energy is pre-specified, the statistical significance of the finding jumps from two sigma to three and a half sigma!
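To put those sigma levels in concrete terms, here is a minimal sketch in Python (using scipy, purely to illustrate the arithmetic of fluke probabilities, not CERN’s actual analysis pipeline):

from scipy.stats import norm

# How often would chance alone breach a given sigma level?
# Two-sided tail probability, matching the post's "about once in
# twenty times" figure for two sigma.
for sigma in (2.0, 3.5):
    p = 2 * norm.sf(sigma)  # P(|Z| > sigma) for a standard normal
    print(f"{sigma} sigma: p = {p:.5f} (about 1 in {round(1 / p)})")

# Prints roughly: 2.0 sigma -> p ~ 0.0455 (about 1 in 22)
#                 3.5 sigma -> p ~ 0.00047 (about 1 in 2,150)

In other words, a pre-specified 3.5-sigma bump is far harder to dismiss as a statistical fluke than the original 2-sigma one.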

Read More

CATEGORIZED UNDER: Space & Physics, Top Posts

The Future: Where Sexual Orientations Get Kind of Confusing

By Kyle Munkittrick | December 19, 2011 9:21 am

Sex, a biological function of reproduction, should be simple. We need to perpetuate the species, we have sex, babies are born, we raise them, they have sex, repeat. Simple, however, is one thing sex most certainly is not. And it’s only getting more complex by the day.

For those who are fans of human exceptionalism, it might be worth considering that the trait which differentiates us from all other animals is that we over-complicate everything. Sex, and its various accoutrements of sexual orientation, gender identity, gender expression, libido, and even how many partners one may have, contains a multitude.

Recently some psychologists have said that pedophilia is a sexual orientation, the erotic predilection that drives people like former Penn State football coach Jerry Sandusky to do what he allegedly did. This idea made its way to Twitter and incited a minor firestorm over whether “sexual orientation” should really be applied to pedophilia. Nature editor Noah Gray used the term in a neutral sense, as in “an attraction to a specific category of individuals”; io9’s Charlie Jane Anders and Boing Boing blogger Xeni Jardin pointed out the queer community’s long campaign to define sexual orientation only as an ethically acceptable preference for a category of consenting adults. Given that willful troglodytes like Rick Santorum regularly conflate homosexuality with pedophilia and zoophilia, you can see where the frustration around loose use of the term might arise.

Santorum aside, how should we classify pedophilia if not as a “sexual orientation”? Why should that term include one unchosen, inborn form of sexual attraction, but exclude another unchosen, inborn form of sexual attraction?

While we may have ready answers for these questions now, technological and social changes on the horizon will once again challenge our definitions and beliefs about sex. We can imagine a time when we have artificial intelligence (to at least some degree), or super-intelligent animals, or maybe we’ll even become a spacefaring species and encounter other alien intelligences. Without a doubt, people will start discovering that they are primarily attracted to something that isn’t the good ol’ Homo sapiens. Sex and sexuality will increase in complexity by powers of ten. If some person is attracted to a sexy cyborg, or a genetically enhanced dolphin, how will we know if it is ethical to act upon those desires?

Read More

If You Can’t Notice a Gorilla in Plain Sight, How Can You Testify as a Witness?

By Daniel Simons | December 14, 2011 8:48 am

by Daniel Simons, as told to Discover’s Valerie Ross. Simons is a professor of psychology at the University of Illinois, where he studies attention, perception, and memory—and how much worse people are with those skills than they think. He is the co-author, with fellow psychologist Chris Chabris, of The Invisible Gorilla.

Late one January night in 1995, Boston police officer Kenny Conley ran right past the site of a brutal beating without doing a thing about it. The case received extensive media coverage because the victim was an undercover police officer and the aggressors were other cops. Conley steadfastly refused to admit having seen anything, and he was tried and convicted of perjury and obstruction of justice. Prosecutors, jurors, and judges took Conley’s denial to reflect an unwillingness to testify against other cops, a lie by omission. How could you run right past something as dramatic as a violent attack without seeing it? Chris Chabris and I used this example to open our book because it illustrates two fundamental aspects of how our minds work. First, we experience inattentional blindness, a failure to notice unexpected events that fall outside the focus of our attention. Second, we are largely oblivious to the limits of perception, attention, and awareness; we think that we are far more likely to notice unexpected events than we actually are.

Chabris and I have studied this phenomenon of inattentional blindness for many years. Our best-known study was based on earlier work by Ulric Neisser: We asked subjects to count how many times three players wearing white shirts passed a basketball while ignoring players wearing black who passed their own ball. We found that about 50 percent of subjects failed to notice when a person in a gorilla suit unexpectedly walked through the scene.

The mismatch between what we see and what we think we see has profound implications for our court system. As our research has shown, we can fail to notice something obvious if we are focused on something else. Yet, most jurors likely hold the mistaken belief that we should see anything that happens right before our eyes. Kenny Conley was convicted on the strength of that intuitive belief. Many others likely languish in jail due to similarly mistaken beliefs about the accuracy of memory. By studying these limits of attention and memory and our beliefs about them, we identify cases in which our beliefs diverge from reality. Ideally, we can then reveal these “invisible gorillas” in the court system.

Read More

CATEGORIZED UNDER: Mind & Brain, Top Posts

Occupy Federal Science: “Transformative” Research Can’t Come From Milquetoast

By John Hawks | December 13, 2011 12:17 pm

by John Hawks, an anthropologist at the University of Wisconsin—Madison who studies the genetic and environmental aspects of humanity’s 6-million-year evolution. This post ran in slightly different form on his own blog.

Philip Ball writes in The Guardian about another new initiative from NSF to fund “potentially transformative” research. He begins his essay with this:

The kind of idle pastime that might amuse physicists is to imagine drafting Einstein’s grant applications in 1905. “I propose to investigate the idea that light travels in little bits,” one might say. “I will explore the possibility that time slows down as things speed up,” goes another. Imagine what comments these would have elicited from reviewers for the German Science Funding Agency, had such a thing existed. Instead, Einstein just did the work anyway while drawing his wages as a technical expert third-class at the Bern patent office. And that is how he invented quantum physics and relativity.

The moral seems to be that really innovative ideas don’t get funded—that the system is set up to exclude them.

The system is set up to exclude really innovative ideas. But Einstein is a really misleading example. For one thing, Einstein didn’t need much grant funding for his research. Yes, if somebody had given the poor guy a postdoc, he might have had an easier time being productive in physics. But his theoretical work didn’t need expensive lab equipment, RA and postdoc salaries, and institutional overhead to fund secretarial support, building maintenance, and research opportunities for undergraduates.

A better question is whether we would have wanted Einstein to spend 1905 applying for grants instead of publishing. But even this is terribly misleading. Most scientists who are denied grants are not Einstein. Most ideas that appear to be transformative in the end turn out to be bunk. Someone who compares himself to Einstein is overwhelmingly likely to be a charlatan. There should probably be a “No Einsteins need apply” clause in every federal grant program.

Setting aside the misleading Einstein comparison, our current grant system still has some severe problems. Is it selecting against “transformative” research—the big breakthroughs? I would put the problem differently. “Transformative” is in the eye of the beholder. Our grant system does what it has been designed for: it picks winners and losers, with a minimum of accountability for the people who set funding priorities.

Read More

CATEGORIZED UNDER: Top Posts

Why Calorie Counts Are Wrong: Cooked Food Provides a Lot More Energy

By Richard Wrangham | December 8, 2011 9:10 am

by Richard Wrangham, as told to Discover’s Veronique Greenwood. Wrangham is the chair of biological anthropology at Harvard University, where he studies the cultural similarities between humans and chimpanzees—including our unique tendencies to form murderous alliances and engage in recreational sexual activity. He is the author of Catching Fire: How Cooking Made Us Human.

When I was studying the feeding behavior of wild chimpanzees in the early 1970s, I tried surviving on chimpanzee foods for a day at a time. I learned that nothing that chimpanzees ate (at Gombe, in Tanzania, at least) was so poisonous that it would make you ill, but nothing was so palatable that one could easily fill one’s stomach. Having eaten nothing but chimpanzee foods all day, I fell upon regular cooked food in the evenings with relief and delight.

About 25 years later, it occurred to me that my experience in Gombe of being unable to thrive on wild foods likely reflected a general problem for humans that was somehow overcome at some point, possibly through the development of cooking. (Various of our ancestors would have eaten more roots and meat than chimpanzees do, but I had plenty of experience of seeing chimpanzees working very hard to chew their way through tough raw meat—and had even myself tried chewing monkeys killed and discarded by chimpanzees.) In 1999, I published a paper [pdf] with colleagues that argued that the advent of cooking would have marked a turning point in how much energy our ancestors were able to reap from food.

To my surprise, some of the peer commentaries were dismissive of the idea that cooked food provides more energy than raw. The amazing fact is that no experiments had been published directly testing the effects of cooking on net energy gained. It was remarkable, given the abiding interest in calories, that there was a pronounced lack of studies of the effects of cooking on energy gain, even though there were thousands of studies on the effects of cooking on vitamin concentration, and a fair number on its effects on the physical properties of food such as tenderness. But more than a decade later, thanks particularly to the work of Rachel Carmody, a grad student in my lab, we now have a series of experiments that provide a solid base of evidence showing that the skeptics were wrong.

Whether we are talking about plants or meat, eating cooked food provides more calories than eating the same food raw. And that means that the calorie counts we’ve grown so used to consulting are routinely wrong.

Read More

Bursting the Bubble of Human Intelligence

By Mark Changizi | December 7, 2011 10:01 am

Mark Changizi is an evolutionary neurobiologist and director of human cognition at 2AI Labs. He is the author of The Brain from 25,000 Feet, The Vision Revolution, and his newest book, Harnessed: How Language and Music Mimicked Nature and Transformed Ape to Man.

I’m king of the world! You are too. We humans—all of us—get props for being the smartest Earthlings. And we’re not merely the smartest. No, we’re the only species worth writing home about; we’re the only ones truly worth building artificial intelligence to mimic. We’re the smart ones. The rest of the diversity of life may be rich in clever design, like well-engineered tools and gadgets, but they’re not designed to be intelligent. That’s for humans. Rationality and intelligence are something natural selection granted us.

But…what if our Homo sapiens intelligence is radically overrated? What if we’re smarter, but only quantitatively so, not qualitatively? What if many of our Earthly cousins are respectably intelligent after all? More intriguingly, what if there are systematic barriers that lead us to overestimate our true level of intelligence relative to that of others? And, although I won’t get into this here, what are the implications for the rights of chimpanzees, if the chasm between us and them is, instead, a slender fault line? That question has led to a recent movement to ban invasive research on chimpanzees in the U.S., a measure that the EU has already adopted.

Here I’ll discuss just two barriers, a little one and a big one, that conceal how smart we really are—or are not.

Individuality: The Little Bubble

Read More

CATEGORIZED UNDER: Mind & Brain, Top Posts

The Ultimate Measure of a Planet—Habitability Isn’t a Yes/No Question

By Seth Shostak | December 6, 2011 1:57 pm

Seth Shostak is Senior Astronomer at the SETI Institute in California, and the host of the weekly radio show and podcast, “Big Picture Science.”

Back in the early days of “Star Trek,” whenever the Enterprise would chance upon a novel planet, we’d hear a quick analysis from Science Officer Spock. Frequently he would opine, “It’s an M-class planet, Captain.” That was the tip-off that this world was not only suited for life, but undoubtedly housed some intelligent beings eager for a meet-and-greet with the Enterprise crew.

But what is an “M-class planet” (also referred to as “class M”)? Clearly, it referred to a world on which intelligent life could thrive, and made it easy for the crew (and viewers) to see where the episode was headed. A recent paper by Washington State University astrobiologist Dirk Schulze-Makuch and his colleagues has suggested a somewhat similar way to categorize real-world orbs that might be home to cosmic confreres. Rather than giving planets a Spockian alphabetic designation, Schulze-Makuch prefers a less obscure, and more precise, numerical specification: a value between 0 and 1. A world that scores a 1 is identical to Earth in those attributes thought necessary for life. A score of 0 means that it’s a planet only an astronomer could love—likely to be as sterile as an autoclaved mule.

Schulze-Makuch computes this index—which he calls an Earth Similarity Index, or ESI—by considering both the composition of a planet (is it rocky and roughly the size of Earth?) and some crude measures of how salubrious the surface might be (does it have a thick atmosphere, and are temperatures above freezing and below boiling?). He combines parameters that define these characteristics in a series of multiplicative terms that are reminiscent of the well-known Drake equation, used to estimate the number of technologically adept civilizations in the Milky Way.
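As a rough illustration of how a multiplicative index like this behaves, here is a minimal sketch in Python. The properties, reference values, and weights below are illustrative assumptions, not the values published by Schulze-Makuch and colleagues; the point is simply that multiplying several similarity terms, each between 0 and 1, yields an overall score that equals 1 for Earth and drops quickly when any single property strays far from Earth’s.

# Toy Earth-similarity score: a product of per-property similarity terms.
# The chosen properties, Earth reference values, and weights are invented
# for illustration only.
def similarity(value, earth_value, weight=1.0):
    # 1.0 when the planet matches Earth exactly, falling toward 0 as it diverges
    s = 1.0 - abs(value - earth_value) / (abs(value) + abs(earth_value))
    return s ** weight

def toy_esi(radius_re, density_rel, surface_temp_k):
    # Combine a few similarity terms multiplicatively; Earth scores 1.0
    return (similarity(radius_re, 1.0)          # radius in Earth radii
            * similarity(density_rel, 1.0)      # bulk density relative to Earth
            * similarity(surface_temp_k, 288))  # mean surface temperature, kelvin

print(toy_esi(1.0, 1.0, 288))   # Earth itself -> 1.0
print(toy_esi(2.4, 0.6, 295))   # a rough Kepler-22b-like guess -> noticeably lower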

At present the number of worlds thought to have an ESI of 0.8 or greater—near-cousins of Earth—is only one: Gliese 581g (though that planet’s existence is disputed). But as additional data from NASA’s Kepler mission continue to stream in, we can expect that more such “habitable” planets will turn up. In particular, Kepler scientists reported this week on a newsworthy object called Kepler-22b. This planet has 2.4 times Earth’s diameter and orbits a Sun-like star at a distance that places it securely in the habitable zone—where temperatures might be similar to a summer day in San Francisco.

Read More

CATEGORIZED UNDER: Top Posts

The Driver of Human Evolution Isn’t the Climate Around You, It’s the Worms Inside You

By Razib Khan | December 2, 2011 11:58 am


One of the strangest aspects of our understanding of evolutionary biology is the tendency to collapse a sprawling, protean dynamic into a sliver of a phenomenon. Most prominently, evolution is often reduced to a process driven by natural selection, with an emphasis on the natural. When people think of populations evolving, they imagine them being buffeted by inclement weather, meteors, or smooth geological shifts. These are all natural, physical phenomena, and they all apply potential selection pressures. But this is not the same as evolution; it’s just one part. A more subtle aspect of evolution is that much of the selection is due to competition between living organisms, not their relationship to exterior environmental conditions.

The question of what drives evolution is a longstanding one. Stephen Jay Gould famously emphasized the role of randomness, while Richard Dawkins and others prioritize the shaping power of natural selection. More finely still, there is the distinction between models that emphasize competition between species and those that emphasize competition within species. And then there are the physical, non-biological forces.

Evolution as selection. Evolution as drift. Evolution as selection due to competition between individuals of the same species. Evolution as selection due to competition between individuals of different species. And so forth. There are numerous models, theories, and conjectures about what’s the prime engine of evolution. The evolutionary biologist Richard Lewontin famously observed that in the 20th century population geneticists constructed massively powerful analytic machines, but had very little data which they could throw into those machines. And so it is with theories of evolution. Until now.

Over the past 10 years in the domain of human genetics and evolution there has been a swell of information due to genomics. In many ways humans are now the “trial run” for our understanding of evolutionary process. With only theoretical models and vague inferences from difficult-to-interpret signals to go on, our confidence in assertions about the importance of any given dynamic has always been shaky at best. But now, with genomics, researchers are testing the data against the models.

A recent paper is a case in point of the methodology. Using 500,000 markers, ~50 populations, and ~1,500 people, the authors tested a range of factors against their genomic data. The method is conceptually simple, though the technical details are rather abstruse. The ~1,500 individuals are from all around the globe, so the authors could construct a model where the markers varied as a function of space. As expected, most of the genetic variation across populations was predicted by the variation across space, which correlates with population demographic history; populations adjacent to each other are likely to have common recent ancestors. But the authors also had some other variables in their system which varied as a function of space in a less gradual fashion: climate, diet, and pathogen loads. The key is to look for those genetic markers and populations where the expectation that differences are driven as a function of geography does not hold. Neighbors should be genetically alike, but what if they’re not? Once you find a particular variant, you can then see how it varies with the factors listed above.
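A toy version of that logic might look like the following sketch in Python. All of the data, the single-marker framing, and the simple linear fit are simplified assumptions for illustration; the actual study’s statistics are considerably more involved.

import numpy as np

rng = np.random.default_rng(0)

# Invented data: allele frequency of one marker in 50 populations, plus
# each population's coordinates and a pathogen-load score.
n_pops = 50
lat = rng.uniform(-60, 60, n_pops)
lon = rng.uniform(-180, 180, n_pops)
pathogen_load = rng.uniform(0, 1, n_pops)
allele_freq = 0.3 + 0.001 * lat + 0.25 * pathogen_load + rng.normal(0, 0.02, n_pops)

# Step 1: explain as much of the variation as possible with geography alone.
geo = np.column_stack([np.ones(n_pops), lat, lon])
coef, *_ = np.linalg.lstsq(geo, allele_freq, rcond=None)
residual = allele_freq - geo @ coef

# Step 2: does what geography cannot explain line up with pathogen load?
r = np.corrcoef(residual, pathogen_load)[0, 1]
print(f"correlation of geography-adjusted frequency with pathogen load: {r:.2f}")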

Read More

CATEGORIZED UNDER: Living World, Top Posts

Is Our Universe a Big Schrödinger’s Cat—Where It’s Alive Is Where We Live?

By Amir Aczel | December 1, 2011 2:11 pm

In the early 1960s, Princeton physicist Robert Dicke invoked the anthropic principle to explain the age of the universe. He argued that this age must be compatible with the evolution of life, and, for that matter, with sentient, conscious beings who wonder about the age of the universe. In a universe that is too young for life to have evolved, there are no such beings. Over the decades, this argument has been extended to other parameters of the universe we observe around us, and thus to questions such as: Why is the mass of the electron 1,836.153 times smaller than that of the proton? Why are the electric charges of the up and down quarks exactly 2/3 and -1/3, respectively, on a scale in which the electron’s charge is -1? Why is Newton’s gravitational constant, G, equal to 6.67384 × 10^-11 (in SI units)? And, the question that has deeply puzzled so many physicists for nearly a century (since its discovery in 1916): Why is the fine structure constant, which measures the strength of electromagnetic interactions, so tantalizingly close to 1/137—the inverse of a prime number? (We now know it to far greater accuracy: about 1/137.035999.) Richard Feynman wrote: “It’s one of the greatest damn mysteries of physics: a magic number that comes to us with no understanding by man. You might say the ‘hand of God’ wrote that number, and ‘we don’t know how he pushed his pencil’” (QED: The Strange Theory of Light and Matter, page 131, Princeton, 1985). The great British astronomer Arthur Eddington (who in 1919 confirmed Einstein’s claim that spacetime curves around massive objects by observing starlight grazing the Sun during a total solar eclipse) built entire numerological theories around this number; and there is even a joke that the Austrian physicist and quantum pioneer Wolfgang Pauli, who throughout his life was equally obsessed with the number 137, asked God about it when he died (in fact, in hospital room number 137) and went up to heaven; God handed him a thick packet and said: “Read my preprint, I explain it all here.” But if constants of nature are simply what they are, nothing more can be said about them, right?
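For reference, the fine structure constant is a dimensionless combination of the electron charge, Planck’s constant, and the speed of light, written here in its standard SI form:

\alpha = \frac{e^2}{4\pi\varepsilon_0 \hbar c} \approx \frac{1}{137.035999}

Because it is a pure number with no units, there is nothing to “explain it away” by a choice of measurement system—which is exactly why its value seems to cry out for a deeper reason.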

Well, our viewpoint may suddenly change if a startling new finding should be confirmed through independent research by other scientists. Recently, astrophysicist John Webb of the University of New South Wales in Sydney, Australia, and colleagues published new findings that indicate that the fine structure constant may not be a constant after all—it may vary through space or time. By comparing observations of galaxies lying roughly 12 billion light-years away to the north with those at the same distance lying to the south, the team discovered variations in the fine structure constant amounting to about 1 part in 100,000. It is not clear whether quantum effects would drastically change when a fundamental constant such as the fine structure constant varies by such minute amounts. But if they do, and the change in the constant is significant, it could mean that there are universes—or distant parts of our own universe—where matter as we know it, and hence life, could not exist. Such a conclusion would greatly amplify the weight of the anthropic principle as a powerful argument for why we observe and measure the physical parameters we do. It is important to note that there is still skepticism about the finding, expressed for example in this post from Sean Carroll last year. But the possibility that this result is real cannot be discounted.

Read More

CATEGORIZED UNDER: Space & Physics, Top Posts