Mark Changizi is an evolutionary neurobiologist and director of human cognition at 2AI Labs. He is the author of The Brain from 25,000 Feet, The Vision Revolution, and his newest book, Harnessed: How Language and Music Mimicked Nature and Transformed Ape to Man.
There are few things more romantic than being a discoverer, whether it be Captain James Cook’s Sandwich Isles or Alvin Roth and Lloyd Shapley’s recent Nobel-winning work on stable allocations. And the excitement exists even among us regular-folk scientists—our discoveries may not be of the magnitude of Sir Alexander Fleming’s penicillin or Einstein’s special relativity, but we bask away unheeded. “Dear world, here is my beautiful solution to the puzzle.” Not only is the solution typically beautiful—that’s often what makes a good discovery “good”—but it is packaged into elegantly written journal articles or glossy books. On the basis of the splendor of our discoveries, laymen might wonder whether our minds are beautiful as well.
Far be it from me to debunk the mythical, magician-like qualities sometimes attributed to us scientists, but the dirtiest little secret in science is that our science minds are just as dirty and unbeautiful as everyone else’s… and this has important implications, both for aspiring students and for how science is funded. I’ll get to these later.
Now, it’s not that the entire scientific process behind discovery is ugly. Much of it is elegant. Good experimental design, valid statistics, analyses of hypotheses—there are sound principles guiding us, the same ones we teach our students.
But where we see the everyday-ness of our science minds is in the discovery process itself, that is, in the efforts to find the new idea (hypothesis, theory, whatever) in the first place. Discoveries can be dressed up well, but the way we go about finding our ideas is almost always an embarrassing display of buffoonery.
Amir D. Aczel writes often about physics and cosmology. His book about the discovery of the Higgs boson, Present at the Creation: Discovering the Higgs Boson, was published in paperback by Broadway Books in November 2012.
If somebody told you that there are angels floating in space, observing our world and forming their impressions of our everyday reality, you would think that this person is nuts—a religious fanatic with an active imagination, and certainly not a scientist. Scientists, as we all know, are rational beings who believe only in what nature reveals to us through experimentation and observation, coupled with theory that is never divorced from the physical measurements we make. The link between the two remains tightly regulated through the strict rules of the scientific method.
So how do you explain the bizarre fact that, for about five years now, some of the world’s most prominent physicists have been describing a scenario—which they seem to truly believe may be real—in which, instead of the Biblical angels, space is permeated by disembodied brains?
These compact, conscious observers, called “Boltzmann brains,” cruise the vastness of intergalactic space and, beyond it, the infinite “multiverse” that some scientists believe exists outside the reaches of the universe we observe through our telescopes and satellites. In their consciousness, the Boltzmann brains recreate our reality: they imagine lives like the one you and I believe we are experiencing here on Earth, to the point that these brains in space may think that they are living on a planet like ours, that they may even be us. Some recent physics papers and commentaries have even explored the possible limits on the number of Boltzmann brains in the universe as compared with “real” brains, in an effort to estimate the probability that we are real rather than Boltzmann entities.
Neuroskeptic is a neuroscientist who takes a skeptical look at his own field and beyond at the Neuroskeptic blog.
Fraud is one of the most serious concerns in science today. Every case of fraud undermines confidence amongst researchers and the public, threatens the careers of collaborators and students of the fraudster (who are usually entirely innocent), and can represent millions of dollars in wasted funds. And although it remains rare, there is concern that the problem may be getting worse.
But now some scientists are fighting back against fraud—using the methods of science itself. The basic idea is very simple. Real data collected by scientists in experiments and observations is noisy; there’s always random variation and measurement error, whether what’s being measured is the response of a cell to a particular gene, or the death rate in cancer patients on a new drug.
When fraudsters decide to make up data, or to modify real data in a fraudulent way, they often create data which is just “too good”—with less variation than would be seen in reality. Using statistical methods, a number of researchers have successfully caught data fabrication by detecting data which is less random than real results.
Most recently, Uri Simonsohn applied this approach to his own field, social psychology. He has two “hits” to his name, and more may be on the way.
Simonsohn used a number of statistical methods, but in essence they were all based on spotting too-good-to-be-true data. In the case of the Belgian marketing psychologist Dirk Smeesters, Simonsohn noticed that the results of one experiment conducted by Smeesters were suspiciously “good”: They matched his predictions almost perfectly.
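To get a rough feel for how such a test can work, here is a minimal sketch of the general idea (not Simonsohn’s actual analysis; all numbers and function names below are invented for illustration). Take the means, standard deviations, and sample sizes reported for several experimental conditions, simulate many experiments with those same parameters, and ask how often chance alone produces condition means as tightly clustered as the reported ones:

```python
import numpy as np

def too_good_p_value(means, sds, n, n_sims=10_000, seed=0):
    """Estimate how often ordinary sampling noise would yield
    condition means at least as tightly clustered as those reported.

    means, sds -- reported mean and standard deviation per condition
    n          -- participants per condition
    Returns the fraction of simulated experiments whose spread of
    sample means is <= the spread of the reported means; a value
    near zero flags data that look "too good to be true".
    """
    rng = np.random.default_rng(seed)
    means = np.asarray(means, dtype=float)
    sds = np.asarray(sds, dtype=float)
    observed_spread = means.std()  # how tightly the reported means cluster

    hits = 0
    for _ in range(n_sims):
        # Redraw each condition's sample from the reported parameters,
        # then measure how much the simulated sample means spread out.
        sim_means = np.array([rng.normal(m, s, n).mean()
                              for m, s in zip(means, sds)])
        if sim_means.std() <= observed_spread:
            hits += 1
    return hits / n_sims

# Hypothetical example: three conditions whose reported means are
# near-identical despite sizable standard deviations and small samples.
p = too_good_p_value(means=[5.01, 5.00, 5.02], sds=[1.2, 1.1, 1.3], n=20)
print(f"Chance of means clustering this tightly: ~{p:.5f}")
```

With means that close and standard deviations that large, essentially no simulated experiment reproduces so small a spread; real sampling noise is simply too big. That is the sort of “less random than real results” signature described above.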
“About what one cannot speak, one must remain silent.” The last line of Ludwig Wittgenstein’s Tractatus tends to resonate with scientists, sceptics, atheists, and other fans of rationality. If your thought cannot be articulated sensibly in plain language, then you had better keep it to yourself. Written amid the slaughter of World War I, the book became central to the Vienna Circle, a group of philosophers who sat around the Café Central in the 1920s discussing which statements could be boiled down to verifiable empirical claims and which could not. The latter, which included all of metaphysics and theology, they dismissed as meaningless nonsense. When the group finally convinced a reluctant Wittgenstein to visit them, he was so exasperated with their philosophy, logical positivism, that he took to turning his chair to the wall and reading Rabindranath Tagore poetry out loud during their meetings. They had misunderstood him, Wittgenstein explained. The ethical convictions, values, and metaphysical ideas they had busily classified as “nonsense” were not worthless. In fact, they were the most important concerns in life.
I was reminded of Wittgenstein recently, when I read the firestorm of online criticism that followed the publication of a column in Nature magazine by Daniel Sarewitz, co-director of the Consortium for Science, Policy and Outcomes at Arizona State University.
In the piece, inspired by a visit to the Angkor temples in Cambodia and gamely entitled “Sometimes science must give way to religion,” Sarewitz drew some parallels between science and religion. (Note, however, that he did not support the misguided idea that science and religion were the same, or that science was nothing more than a belief system.) Worse still, in many people’s eyes, he went further and argued that science alone is not enough—humanity will always need other ways of understanding the world. Citing the recent discovery of the Higgs boson, Sarewitz says:
“For those who cannot follow the mathematics, belief in the Higgs is an act of faith, not of rationality…in practical terms, the Higgs is an incomprehensible abstraction, a partial solution to an extraordinarily rarified and perhaps always-incomplete intellectual puzzle. By contrast, the Angkor temples demonstrate how religion can offer an authentic personal encounter with the unknown.”
I have my own problems with the piece. But the vehemence of the attack on Sarewitz would have made anyone think he had advocated teaching creationism in science classes while smacking Richard Dawkins around the head with a copy of the Holy Bible.
Keith Kloor is a freelance journalist whose stories have appeared in a range of publications, from Science to Smithsonian. Since 2004, he’s been an adjunct professor of journalism at New York University. You can find him on Twitter here.
Myths about the Hero Twins, one of whom is shown holding a bow here, are an important part of Navajo identity.
In certain circles, there is a violent allergic reaction whenever someone suggests that religion and science are compatible. A particular type of atheist is especially vulnerable to this immune disorder. For example, P.Z. Myers, the evolutionary biologist and pugnacious blogger, became famously symptomatic at a 2010 gathering of atheists. After one participant suggested that non-religious people could still be spiritual, Myers nearly retched:
Whenever we start talking about spirituality, I just want to puke.
I hope Myers didn’t have too much to eat before reading the headline from this week’s commentary in Nature: “Sometimes Science Must Give Way to Religion.” The column, by Arizona State University’s Daniel Sarewitz, suggests that rational explanation of the universe’s existence, as advanced recently by the discovery of the Higgs boson, can’t match the feelings evoked by spectacular religious symbolism, such as that found in Cambodia’s ancient Hindu temples, which Sarewitz explored this summer. He writes:
The overwhelming scale of the temples, their architectural complexity, intricate and evocative ornamentation and natural setting combine to form a powerful sense of mystery and transcendence, of the fertility of the human imagination and ambition in a Universe whose enormity and logic evade comprehension.
Science is supposed to challenge this type of quasi-mystical subjective experience, to provide an antidote to it.
But in the words of Time magazine’s Jeffrey Kluger, “our brains and bodies contain an awful lot of spiritual wiring.” Religion is the antidote our evolutionary history created. And even if you don’t buy that particular theory, you can’t simply dismiss the psychological and cultural importance of religion. For much of our history, religion has deeply influenced all aspects of life, from how we cope with death and random disaster to what moral codes we abide by. That science should (or could) eliminate all this with a rationalist cleansing of civilization, as a vocal group of orthodox atheists have suggested, is highly improbable.
Joanne Manaster shares cutting-edge biology with teachers working on master’s degrees at the University of Illinois. In addition to videos and articles at her website, Joanne Loves Science, her work can be found at Scientific American. She always has time for science on Twitter @sciencegoddess.
Luann Lee is a National Board Certified high school science teacher in Oregon. She can be found on Twitter as @Stardiverr and, now that her dissertation is finished, blogs about science and education here.
On Wednesday, President Obama proposed the STEM Master Teacher Corps, a new program to incentivize teachers who display excellence in teaching science, technology, engineering, and mathematics (or “STEM”). The idea is that 2,500 such teachers would be chosen and positioned in 50 different locations around the country in the inaugural year of the project. According to the White House, these Master Teachers “will receive additional resources to mentor math and science teachers, inspire students, and help their communities grow.” The Master Teacher proposal is a follow-up to his 2010 STEM teacher-training initiative, “Educate to Innovate,” and part of a broader effort to combat the fact that students in the world’s only superpower don’t do so super in science and math, subjects that figure to be so important for our economy in a tech-driven future.
Everyone supports the idea of improving STEM education, but there are some important questions about the program. Most importantly, the criteria for choosing the teachers (and the panel that will determine those criteria) remain unknown, though early hints indicate that student test scores will be a factor in judging teachers’ worthiness for this honor, according to information obtained during a White House Twitter chat on July 18, 2012 (the entire chat is here).
Because the specifics of the program are not yet fully laid out, there’s still an opportunity for scientists, engineers, educators, and parents to speak up and insist that the science taught in schools be meaningful, authentic scientific inquiry as opposed to memorization, drill, and lecture. Ideally, teachers chosen for this honor (and the substantial stipend that accompanies it) will be able to guide students to become masters of inquiry-based, hands-on science. What would a learning environment at the hands of such a master teacher look like?
Derek Lowe is a medicinal chemist who has worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer’s, diabetes, osteoporosis, and other diseases. He has been writing about drug discovery at In the Pipeline, where this post originally appeared, for more than ten years.
Slate recently published one of those assume-the-conclusion articles on science and technology education in the U.S. It’s right there in the title: “America Needs More Scientists and Engineers.”
Now, I can generally agree that America (and the world) needs more science and engineering. I’d personally like to have researchers who could realize room-temperature superconductors, a commercially feasible way to turn carbon dioxide from the atmosphere into industrial products, and both economically viable fusion power and high-efficiency solar power beamed down from orbit—for starters. We most definitely need better technology and more scientific understanding to develop these things, since none of them (as far as we know) are at all impossible, and we sure don’t have any of them yet.
But to automatically assume that we need lots more scientists and engineers to do that is a tempting, but illogical, conclusion. And it’s one that my currently unemployed readers who are scientists and engineers probably don’t enjoy hearing about very much. I think that the initial fallacies are (1) lumping together all science education into a common substance, and (2) assuming that if you just put more of that into the hopper, more good stuff will come out the other end.
If I had to pick one line from the article that I disagree with the most, it would be this one:
America needs Thomas Edisons and Craig Venters, but it really needs a lot more good scientists, more competent scientists, even more mediocre scientists.
No. I hate to be the one to say it, but mediocre scientists are, in fact, in long supply. Access to them is not a rate-limiting step. (That’s the chemist’s way of saying it’s not the main bottleneck.) Not all the unemployed science and technology folks out there are mediocre—not by a long shot (I’ve seen the CVs that come in)—but a lot of the mediocre ones are finding themselves unemployed, and they’re searching an awful long time for new positions when that happens. Who, exactly, would be clamoring to hire a fresh horde of I-guess-they’ll-do science graduates? Is that what we really need to put things over the top, technologically—more foot soldiers?
The phylogeny of Prozac yogurt.
Christina Agapakis is a synthetic biologist and postdoctoral research fellow at UCLA who blogs about biology, engineering, biological engineering, and biologically inspired engineering at Oscillator.
A few weeks ago, I saw a retweet that claimed “biohacking is easier than you think” with a link to a post on a blog accompanying a book called Massively Networked. The post included video of Tuur van Balen’s presentation at the NextNature power show a few months earlier. Van Balen is a designer whose work I’ve followed for a couple years now, and his most recent project imagines how synthetic biology might produce and deliver medicines in the future. He demonstrates—using homemade tools, equipment purchased on eBay, and online resources for finding and synthesizing DNA sequences—how someone could engineer a strain of bacteria to produce Prozac-laced yogurt. While he’s not actually making Prozac, his demonstration does show pretty accurately how someone could get DNA into a bacterium (without, of course, the frustrating months of troubleshooting that almost any experiment inevitably requires). I posted my own version of the story, writing that art projects like this can ask important questions about biological design.
The next day, my post was syndicated on the Huffington Post with a modified title that emphasized Prozac. Then a version appeared on Gizmodo, and it went on from there, spreading across the Internet. By the time its spread was complete, Van Balen, an artist interested in the implications of emerging biotechnologies, had mutated into a bioengineer at the forefront of synthetic biology research, creating Prozac yogurt in five days with just 860 base pairs of DNA. (If you were to actually make Prozac biologically, it would certainly take the action of many enzymes, each encoded by its own sequence of hundreds or thousands of base pairs.)
How did an art piece, a design fiction that asks us to think critically about the possibilities opened up by synthetic biology, provoke an unskeptical acceptance of what bioengineering has made possible? Perhaps I should have been clearer in my post, or perhaps it’s the fault of sensationalized click-bait headlines. But I think it may be that we’ve become so accustomed to the hype surrounding the science of genes and DNA, so used to hearing about groundbreaking genetics, from the “gene for dry ear wax” to the “gene for Alzheimer’s” to the “gene for [common human behavior]” that we don’t think twice when we hear about mixing bacteria with the “gene for Prozac” to create antidepressant yogurt.
Mike Taylor is a computer programmer with Index Data and a dinosaur palaeobiologist with the University of Bristol, UK. He blogs about palaeontology and open access at http://svpow.wordpress.com/ and tweets as @SauropodMike.
Everyone involved in academic publishing knows that it’s in a horrible mess. Authors increasingly see publishers as enemies rather than co-workers. And while publishers’ press releases talk about partnership with authors, unguarded comments on blogs tell a different story, revealing that the hostility is mutual. The Cost of Knowledge boycott is the most obvious illustration of the fractious situation—more than 6,000 researchers have declared that they will not write, edit, or review for Elsevier journals. But how did we get into this unhealthy situation? And how can we get out?
The problems all stem from the arrival of the Internet. Or, rather, the Internet has removed problems that used to exist, and this has caused problems for organisations that existed to solve those problems. Which is a problem for them.
Back in the day, it was hard to distribute the results of research. Authors would submit typewritten manuscripts, and publishers took it from there. Editors would fix errors and hone language. Typesetting was an art, especially when it involved equations or graphs. Making multiple copies was costly and time-consuming. And distributing them around the world needed enormous resources. So the researchers of 20 years ago saw publishers as necessary to their work. It’s no wonder that publishers were generally liked and respected.
But just as long-distance telephone networks made telegrams obsolete, so computers mean that most of what publishers do isn’t needed any more. By submitting machine-readable manuscripts and figures, we eliminate nearly all typesetting work. (In maths and physics, authors submit “camera-ready” copy that requires no further typesetting at all.) Printing is no longer needed. Copying is quick, free, and perfect. And worldwide distribution is also free and instantaneous.
You might think that publishers’ response would be to emphasise and increase their editorial role. Instead, surprisingly, they have shed most editorial work. Copyediting is rare, and when it does exist it has a reputation for adding more errors than it removes. Most journals have stringent formatting guidelines that authors must follow in submitted manuscripts. (A colleague of mine recently gave up trying to submit his manuscript to a particular journal after it was rejected without review three times for trivial formatting and punctuation errors, such as using the wrong kind of dash. Seriously.)*
John Hawks is an anthropologist at the University of Wisconsin–Madison who studies the genetic and environmental aspects of humanity’s 6-million-year evolution. This post ran in slightly different form on his own blog.
Philip Ball writes in The Guardian about another new initiative from NSF to fund “potentially transformative” research. He begins his essay with this:
The kind of idle pastime that might amuse physicists is to imagine drafting Einstein’s grant applications in 1905. “I propose to investigate the idea that light travels in little bits,” one might say. “I will explore the possibility that time slows down as things speed up,” goes another. Imagine what comments these would have elicited from reviewers for the German Science Funding Agency, had such a thing existed. Instead, Einstein just did the work anyway while drawing his wages as a technical expert third-class at the Bern patent office. And that is how he invented quantum physics and relativity.
The moral seems to be that really innovative ideas don’t get funded—that the system is set up to exclude them.
The system is set up to exclude really innovative ideas. But Einstein is a really misleading example. For one thing, Einstein didn’t need much grant funding for his research. Yes, if somebody had given the poor guy a postdoc, he might have had an easier time being productive in physics. But his theoretical work didn’t need expensive lab equipment, RA and postdoc salaries, and institutional overhead to fund secretarial support, building maintenance, and research opportunities for undergraduates.
A better question is whether we would have wanted Einstein to spend 1905 applying for grants instead of publishing. But even this is terribly misleading. Most scientists who are denied grants are not Einstein. Most ideas that appear transformative turn out, in the end, to be bunk. Someone who compares himself to Einstein is overwhelmingly likely to be a charlatan. There should probably be a “No Einsteins need apply” clause in every federal grant program.
Setting aside the misleading Einstein comparison, our current grant system still has some severe problems. Is it selecting against “transformative” research—the big breakthroughs? I would put the problem differently. “Transformative” is in the eye of the beholder. Our grant system does what it has been designed for: it picks winners and losers, with a minimum of accountability for the people who set funding priorities.