Carl Zimmer writes about science regularly for The New York Times and magazines such as DISCOVER, which also hosts his blog, The Loom. He is the author of 12 books, the most recent of which is Science Ink: Tattoos of the Science Obsessed.
It’s been nearly 87 years since F. Scott Fitzgerald published his brief masterpiece, The Great Gatsby. Charles Scribner’s Sons issued the first hardback edition in April 1925, adorning its cover with a painting of a pair of eyes and lips floating on a blue field above a cityscape. Ten days after the book came out, Fitzgerald’s editor, Maxwell Perkins, sent him one of those heart-breaking notes a writer never wants to get: “SALES SITUATION DOUBTFUL EXCELLENT REVIEWS.”
The first printing of 20,870 copies sold sluggishly through the spring. Four months later, Scribner’s printed another 3,000 copies and then left it at that. After his earlier commercial successes, Fitzgerald was bitterly disappointed by The Great Gatsby. To Perkins and others, he offered various theories for the bad sales. He didn’t like how he had left the description of the relationship between Gatsby and Daisy. The title, he wrote to Perkins, was “only fair.”
Today I decided to go shopping for that 1925 edition on the antiquarian site Abebooks. If you want a copy of it, be ready to pay. Or perhaps get a mortgage. A shop in Woodstock, New York, called By Books Alone, has one copy for sale. The years have not been kind to it. The spine is faded, the front inner hinge is cracked, the iconic dust jacket is gone. And for this mediocre copy, you’ll pay a thousand dollars.
The price goes up from there. For a copy with a torn dust jacket, you’ll pay $17,150. Between the Covers in Gloucester, New Jersey, has the least expensive copy that’s in really good shape. And it’s yours for just $200,000.
By the time Fitzgerald died in 1940, his reputation—and that of The Great Gatsby—had petered out. “The promise of his brilliant career was never fulfilled,” The New York Times declared in its obituary. Only after his death did the novel begin to rise to the highest ranks of American literature. And its ascent was driven in large part by a new form of media: paperback books.
Phil Plait, the creator of the Discover blog Bad Astronomy, is an astronomer, lecturer, and author. He’s written two books, dozens of magazine articles, and 12 bazillion blog articles.
On Wednesday, January 25th, Republican presidential hopeful Newt Gingrich spoke to a crowd of supporters in Florida. In a short speech guaranteed to create a buzz—online, as well as among space enthusiasts—he declared that if elected president, “… by the end of my second term we will have the first permanent base on the moon and it will be American.”
That’s a pretty bold statement. Unfortunately, it’s also impossible.
I’ll note he followed that up with something that is far more likely:
We will have commercial near-Earth activities that include science, tourism, and manufacturing, and are designed to create a robust industry precisely on the model of the development of the airlines in the 1930s, because it is in our interest to acquire so much experience in space that we clearly have a capacity that the Chinese and the Russians will never come anywhere close to matching.
That’s a lovely thought, but while that’s a more realistic goal, it’s likely to happen whether or not Gingrich makes it to the White House.
His second statement is the easiest to discuss, and to dismiss. I agree with the sentiment, but what he’s saying is already well on its way to being reality. We have several private companies vying to create commercial activities in orbit, including tourism and science. SpaceX has successfully launched rockets to orbit several times, and it plans to rendezvous with the space station in the coming months to demonstrate that it can deliver supplies there. Virgin Galactic has shown it can do sub-orbital flights, and several other companies are on their way to space. Manufacturing is a far more difficult goal, but once a more reliable and cheaper method of getting to orbit is established, it’s an inevitable outcome.
With or without any possible future President Gingrich, private enterprise in space is already happening.
Seth Shostak is Senior Astronomer at the SETI Institute in California, and the host of the weekly radio show and podcast, “Big Picture Science.”
The Moon is a ball of left-over debris from a cosmic collision that took place more than four billion years ago. A Mars-sized asteroid—one of the countless planetesimals that were frantically churning our solar system into existence—hit the infant Earth, bequeathing it a very large, natural satellite.
OK, that’s a bit of modestly engaging astrophysics. But some scientists think there’s a biological angle here. Namely, that elaborate terrestrial life might never have appeared if that asteroid had arrived a few hours earlier, and sailed silently by. Put another way, if every night were moonless, you wouldn’t be around to notice the lack of a moon.
But is that true? Did our cratered companion really make our existence possible?
Julie Sedivy is the lead author of Sold on Language: How Advertisers Talk to You And What This Says About You. She contributes regularly to Psychology Today and Language Log. She is an adjunct professor at the University of Calgary, and can be found at juliesedivy.com and on Twitter/soldonlanguage.
Due to a migratory childhood (born in Czechoslovakia, and eventually landing in Montreal via Austria and Italy), English was the fifth language I had to grapple with in my tender years. On my first day of kindergarten, I spoke only a few words of English. I could see that my teacher had some concerns as to how well I would integrate linguistically; my stumbling English was met with pursed lips.
The pursed-lips reaction of my teacher is shared by many who advocate English-only legislation for the U.S., seeking to ban the use of other languages in schools, government documents, and even radio stations and signs on private businesses. The common worry is that making it easier for immigrants to function in their native language is a form of enabling—it prevents them from learning English, hobbling their full entry into American society. Over the past few decades, the waves of Latin American immigrants have only increased such concerns. For example, the U.S. Census Bureau reports that in 1980, less than 11% of the population spoke a language other than English at home. By 2007, that number had grown to almost 20%. If you looked no further, you might see this as evidence of a potential threat to the English-speaking identity of the U.S.
But these fears are misplaced. Just as I did, most young immigrants from any country eventually master English. It’s true that the number of Spanish-only speakers in the U.S. has increased dramatically, and that these immigrants often cluster in Spanish-speaking neighborhoods. But a more telling statistic is what happens to such families a few generations after they’ve arrived. As Robert Lane Greene reports in his book You Are What You Speak, it’s the same thing that’s happened to all immigrant groups in the U.S.: within the space of a few generations, they not only function perfectly in English, but in the process lose their heritage language. Even among Mexican immigrants, currently the slowest group in the U.S. to shed their ancestral language, fewer than 10% of fourth-generation immigrants speak Spanish very well. As Greene points out, who needs disincentives to speak the heritage language when the economic and cultural imperatives to speak English are already so great?
Erika Check Hayden is a journalist at Nature and educator in San Francisco. Her work has taken her to wild and beautiful places, but focuses most of the time on the inner terrain of the human body. You can find her online at erikacheck.com and twitter.com/Erika_Check.
This piece was originally published at The Last Word on Nothing.
A few years ago, Eric Klavins found himself staring at the ceiling of his room in the Athenaeum, a private lodging on the grounds of the California Institute of Technology, in the middle of the night. Unable to sleep, Klavins was instead pondering a question that had been posed to him earlier that day at a meeting.
Klavins, a robotics researcher, was funded by grants from the U.S. Air Force and the Defense Advanced Research Projects Agency (DARPA) on robot self-organization: making many simple robots work together to assemble themselves into a shape or structure. While working on the grants, Klavins would routinely be called into meetings to discuss his work with various defense officials, and it was at one of these meetings that a Defense Department researcher had posed his question. “He said, ‘Do you think you could figure out how something that has been broken up into lots of little pieces could be reassembled so we could figure out what it was?’” he recalls.
Klavins spent hours thinking about how one could actually do it. Then, he realized, he had no idea why one would even want to—and hadn’t asked that question at all during all the years he worked with Defense Department funding. He suddenly felt uncomfortable about that. “It bothered me that someone would spend their time studying how things get blown up and working to make things get blown up better,” Klavins says. Not long after, he decided to steer away from defense funding and towards applications in biology and medical research that are part of the realm of synthetic biology, the field of science that tries to turn biology into more of an engineering discipline.
But if Klavins thought that the change would help him escape the moral dilemmas that used to keep him up in the middle of the night, he was wrong. The U.S. Department of Defense has emerged as one of the major funders of synthetic biology; last fall, for instance, DARPA accepted proposals for a highly coveted set of grants in a new program, Living Foundries, that aims to “enable the rapid development of previously unattainable technologies and products, leveraging biology to solve challenges associated with production of new materials, novel capabilities, fuel and medicines.”
Earlier this week, food columnist Ari LeVaux set off a storm of media reaction with a piece with this premise: tiny plant RNAs, recently discovered to survive digestion and alter host gene expression, are a major reason why genetically modified foods should be considered dangerous. For anyone familiar with the paper he referred to, or with molecular biology in general, the article was full of conflation and sloppy logic, and even as it became the most-emailed story on TheAtlantic.com, where it was published, biology bloggers and science writers were pointing out its significant flaws. To his credit, LeVaux revised the article to fix many (though not all) of the errors concerning genetics; the new version appeared yesterday at AlterNet and today replaced his original piece at The Atlantic.
So what did LeVaux get so wrong, and, once all of the wheat was sorted from the chaff, was there anything to what he was trying to say?
At the heart of the fracas is LeVaux’s claim that a class of molecules called miRNAs is a reason to fear GMOs specifically, more than any other food plant or animal. miRNAs, short for microRNAs, perform various tasks in plants and animals. They were first discovered about twenty years ago, in nematode worms, and they regulate gene expression by binding the messenger RNA involved in translating a gene into a protein. The messenger RNA carries the “message” of the DNA’s sequence to a group of enzymes that translate it into the amino acid sequence of a protein. But if a miRNA binds to a messenger RNA, the message is destroyed, and the protein is never made. Thus, miRNAs can be a powerful tool for preventing the expression of genes. In fact, that is what has made them such an important lab tool in recent years: they allow researchers to knock down the expression of genes without physically removing them from an organism’s genome.
In the paper that LeVaux pegged his article on, Nanjing University researchers found that miRNAs usually seen in rice were circulating in the blood of humans, and that mice fed rice had the miRNA in their blood as well. That particular miRNA, in its native context, regulates plant development. When the researchers added it to human cells, it appeared to bind to the messenger RNA of a gene involved in removing cholesterol from the blood. Previous papers had found that plants have plenty of miRNA floating around in them (as does just about everything we eat, since plants and animals make them by the thousands), but having them show up whole and unmolested in blood, apparently after digestion, was a new and very intriguing discovery.
Vincent Racaniello is Higgins Professor of Microbiology & Immunology at Columbia University, where he oversees research on viruses that cause common colds and poliomyelitis. He teaches virology to undergraduate, graduate, medical, dental, and nursing students, and writes about viruses at virology.ws.
The detection of a new virus called XMRV in the blood of patients with chronic fatigue syndrome (CFS) in 2009 raised hope that a long-sought cause of the disease, whose central characteristic is extreme tiredness that lasts for at least six months, had been finally found. But that hypothesis has dramatically fallen apart in recent months. Its public demise brings to mind an instance when a virus *was* successfully determined to be behind a mysterious scourge: the case of HIV and AIDS. How are these two diseases different—how was it that stringent lab tests and epidemiology ruled one of these viruses out, and one of them in?
David Ropeik is an international consultant in risk perception and risk communication, and an Instructor in the Environmental Management Program at the Harvard University Extension School. He is the author of How Risky Is It, Really? Why Our Fears Don’t Always Match the Facts and principal co-author of Risk: A Practical Guide for Deciding What’s Really Safe and What’s Really Dangerous in the World Around You. He writes the blog Risk: Reason and Reality at BigThink.com and also writes for Huffington Post, Psychology Today, and Scientific American.
He founded the program “Improving Media Coverage of Risk,” was an award-winning journalist in Boston for 22 years and a Knight Science Journalism Fellow at MIT.
This post originally appeared on Soapbox Science, a guest blog hosted by the nature.com Communities team.
If you were to be diagnosed with cancer, how do you think you would feel? It would depend on the type of cancer of course, but there’s a good chance that no matter the details, the word “cancer” would make the diagnosis much more frightening. Frightening enough, in fact, to do you as much harm as, or more than, the disease itself. There is no question that in many cases, we are cancer-phobic, more afraid of the disease than the medical evidence says we need to be, and that fear alone can be bad for our health. As much as we need to understand cancer itself, we need to recognize and understand this risk, the risk of cancer phobia, in order to avoid the full toll this awful disease can take on us.
In a recent report to the U.S. National Institutes of Health (NIH), a panel of leading experts on prostate cancer, the second most common cancer in men (after skin), said:
“Although most prostate cancers are slow growing and unlikely to spread, most men receive immediate treatment with surgery or radiation. These therapeutic strategies are associated with short- and long-term complications including impotence and urinary incontinence.”
“Approximately 10 percent of men who are eligible for observational strategies (keep an eye on it but no immediate need for surgery or radiation) choose this approach.”
“Early results demonstrate disease-free and survival rates that compare favorably (between observation and) curative therapy.”
“Because of the very favorable prognosis of low-risk prostate cancer, strong consideration should be given to removing the anxiety-provoking term ‘cancer’ for this condition.”
Let me sum that up. Many prostate cancers grow so slowly they don’t need to be treated right away…the unnecessary treatment causes significant harm…and one of the reasons nine out of ten men diagnosed with slow-growing prostate cancer accept, indeed choose, these unnecessary harms is because “cancer” sounds scary.
Death is good. Death clears away old people to make way for new people and ideas. Death makes sure there aren’t too many of us on the planet at once. Mortality is our condition, and as meaning-makers, we cannot but live through the lens of knowing we must die. Death is just too important to kill.
So efforts to postpone death are misguided and unethical. People who try to fend off death are being selfish, are in denial, and are pouring money down the drain for cockamamie schemes to preserve their frozen heads for some fingers-crossed future, which will never arrive. At the same time, we shouldn’t let people die, particularly (and ironically) if they really want to. Choosing death is untenable. It’s against nature. No, death is good only when death decides it’s ready for you.
Or so go the arguments of many who oppose anti-aging technology.
But just because we accept death as good and necessary, that doesn’t necessarily mean we have to say the same about aging. Can we argue for anti-aging technology, for 2,000-year lifespans of perpetual youth, and admit death can be good at the same time? Not only can we; we must.
We can accept death yet also seek to live vastly longer, healthier, and happier. Death is good, but so too is a long, long, long life. We can attain long lives of quality by rejecting extreme “life-saving measures,” embracing euthanasia, and accepting that there are just some things we cannot cure. Death has got to be our closest kept enemy if we want to be ageless. Baffling as it may seem, wanting to live to be a thousand years old is inextricably connected to the ability to decide when it’s time to give up the ghost.
Vaughan Bell is a clinical and research psychologist based at the Institute of Psychiatry, King’s College London and currently working in Colombia. He’s also working on a book about hallucinations due to be out in 2013.
During surgery, a patient awakes but is unable to move. She sees people dressed in green who talk in strange slowed-down voices. There seem to be tombstones nearby and she assumes she is at her own funeral. Slipping back into oblivion, she awakes later in her hospital bed, troubled by her frightening experiences.
These are genuine memories from a patient who regained awareness during an operation. Her experiences are clearly a distorted version of reality but crucially, none of the medical team was able to tell she was conscious.
This is because medical tests for consciousness are based on your behavior. Essentially, someone talks to you or prods you, and if you don’t respond, you’re assumed to be out cold. Consciousness, however, is not defined as a behavioral response but as a mental experience. If I were completely paralyzed, I could still be conscious and I could still experience the world, even if I were unable to communicate this to anyone else.
This is obviously a pressing medical problem. Doctors don’t want people to regain awareness during surgery because the experiences may be frightening and even traumatic. But on a purely scientific level, these fine-grained alterations in our awareness may help us understand the neural basis of consciousness. If we could understand how these drugs alter the brain and could see when people flicker into consciousness, we could perhaps understand what circuits are important for consciousness itself. Unfortunately, surgical anesthesia is not an ideal way of testing this, because several drugs are often used at once and some can affect memory. A patient could become conscious during surgery but not remember it afterwards, making it difficult to do reliable retrospective comparisons between brain function and awareness.
An attempt to solve this problem was behind an attention-grabbing new study, led by Valdas Noreika from the University of Turku in Finland, that investigated the extent to which common surgical anesthetics can leave us behaviorally unresponsive but subjectively conscious.