Carl Zimmer writes about science regularly for The New York Times and magazines such as DISCOVER, which also hosts his blog, The Loom. He is the author of 12 books, the most recent of which is Science Ink: Tattoos of the Science Obsessed.
It’s been nearly 87 years since F. Scott Fitzgerald published his brief masterpiece, The Great Gatsby. Charles Scribner’s Sons issued the first hardback edition in April 1925, adorning its cover with a painting of a pair of eyes and lips floating on a blue field above a cityscape. Ten days after the book came out, Fitzgerald’s editor, Maxwell Perkins, sent him one of those heartbreaking notes a writer never wants to get: “SALES SITUATION DOUBTFUL EXCELLENT REVIEWS.”
The first printing of 20,870 copies sold sluggishly through the spring. Four months later, Scribner’s printed another 3,000 copies and then left it at that. After his earlier commercial successes, Fitzgerald was bitterly disappointed by The Great Gatsby. To Perkins and others, he offered various theories for the bad sales. He didn’t like how he had left the description of the relationship between Gatsby and Daisy. The title, he wrote to Perkins, was “only fair.”
Today I decided to go shopping for that 1925 edition on the antiquarian site Abebooks. If you want a copy of it, be ready to pay. Or perhaps get a mortgage. A shop in Woodstock, New York, called By Books Alone, has one copy for sale. The years have not been kind to it. The spine is faded, the front inner hinge is cracked, the iconic dust jacket is gone. And for this mediocre copy, you’ll pay a thousand dollars.
The price goes up from there. For a copy with a torn dustjacket, you’ll pay $17,150. Between the Covers in Gloucester, New Jersey, has the least expensive copy that’s in really good shape. And it’s yours for just $200,000.
By the time Fitzgerald died in 1940, his reputation—and that of The Great Gatsby—had petered out. “The promise of his brilliant career was never fulfilled,” The New York Times declared in its obituary. Only after his death did the novel begin to rise to the highest ranks of American literature. And its ascent was driven in large part by a new form of media: paperback books.
Sex, a biological function of reproduction, should be simple. We need to perpetuate the species, we have sex, babies are born, we raise them, they have sex, repeat. Simple, however, is one thing sex most certainly is not. And it’s only getting more complex by the day.
For those who are fans of human exceptionalism, it might be worth considering that the trait which differentiates us from all other animals is that we over-complicate everything. Sex, and its various accoutrements of sexual orientation, gender identity, gender expression, libido, and even how many partners one may have, contains a multitude.
Recently some psychologists have said that pedophilia is a sexual orientation, the erotic predilection that drives people like former Penn State football coach Jerry Sandusky to do what he allegedly did. This idea came to Twitter and incited a minor firestorm over whether “sexual orientation” should really be applied to pedophilia. Nature editor Noah Gray used the term in a neutral sense, as in, “an attraction to a specific category of individuals”; io9’s Charlie Jane Anders and Boing Boing blogger Xeni Jardin pointed out the queer community’s long campaign to define sexual orientation only as an ethically acceptable preference for any category of consenting adults. Given that willful troglodytes like Rick Santorum regularly conflate homosexuality with pedophilia and zoophilia, you can see where the frustration around loose use of the term might arise.
Santorum aside, how should we classify pedophilia if not as a “sexual orientation”? Why should that term include one unchosen, inborn form of sexual attraction, but exclude another unchosen, inborn form of sexual attraction?
While we may have ready answers for these questions now, technological and social changes on the horizon will once again challenge our definitions and beliefs about sex. We can imagine a time when we have artificial intelligence (to at least some degree), or super-intelligent animals, or maybe we’ll even become a spacefaring species and encounter other alien intelligences. Without a doubt, people will start discovering that they are primarily attracted to something that isn’t the good ol’ Homo sapiens. Sex and sexuality will increase in complexity by powers of ten. If some person is attracted to a sexy cyborg, or a genetically enhanced dolphin, how will we know if it is ethical to act upon those desires?
Bret Victor has a solid grip on interface design. And he has a beef with touchscreens as the archetype of the Interface of the Future. He argues that poking at and sliding around pictures under glass is not really the greatest way to do things. Why? Because that just uses a finger! Victor is a fan of hands. They can grab, twist, flick, feel, manipulate, and hold things. Hands get two thumbs up from Victor.
As a result, Victor argues that any interface that neglects hands neglects human beings. Tools of the future need to be hand-friendly and take advantage of the wonderful functions hands can perform. His entire article, “A Brief Rant on the Future of Interfaces,” is a glorious read and deserves your attention. One of the best parts is his simple but profound explanation of what a tool does: “A tool addresses human needs by amplifying human capabilities.”
There is, as I see it, one tiny problem with Victor’s vision: hands are tools themselves. They translate brain signals into physical action. Hands are, as Victor shows, super good at that translation. His argument is based on the idea that we should take as much advantage as possible of the amazing tools that hands already are. I disagree.
The “stabilization wedge” idea is a modular way of reducing carbon emissions.
The world is now home to 7 billion people, each of whom contributes to the carbon emissions that are slowly cooking the globe. To find out how growing population affects our plans to deal with climate change, we talked with Princeton’s Robert Socolow, co-creator of one of the best models for thinking about how to prevent climate change.
Many of my students are “green” consumers. They are proud of riding bicycles, they turn off lights when they leave the room, and they eat little or no meat. But they are usually surprised when I tell them that the most important decision they will make, as far as its impact on natural resources is concerned, is how many children to have.
Most sources of carbon emissions—heating and lighting homes and stores, making steel, providing food—grow in proportion to population. We’ve just hit 7 billion people, and there’s no way any single approach, or just two or three approaches, can effectively deal with the environmental pressures that this many people exert.
To foster a way of thinking about the problem of climate change that involves using many different approaches in tandem, Steve Pacala and I introduced the “stabilization wedge” in 2004. A wedge is a campaign or strategy that reduces carbon dioxide emissions to the atmosphere over the next 50 years by a specific amount, relative to some baseline future where nothing is done to slow down climate change. Examples of wedge strategies are driving more efficient cars, driving cars less far because cities are laid out differently, building lots of wind power, and growing many more trees.
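The arithmetic behind a wedge can be sketched in a few lines. The excerpt above doesn’t give the numbers, so this sketch assumes the figures from the published 2004 Pacala–Socolow paper: each wedge ramps linearly from zero to 1 gigatonne of carbon (GtC) per year of avoided emissions over 50 years, so one wedge’s cumulative savings is the area of that triangle.

```python
# Sketch of one "stabilization wedge", using the canonical Pacala-Socolow
# (2004) numbers, which are an assumption here: a wedge's avoided emissions
# grow linearly from 0 to 1 GtC/year over a 50-year ramp.

YEARS = 50        # length of the ramp, in years
FINAL_RATE = 1.0  # GtC/year avoided at the end of the ramp

def avoided_rate(t: float) -> float:
    """Avoided emissions rate (GtC/year) at time t years into the ramp."""
    return FINAL_RATE * min(t, YEARS) / YEARS

# Cumulative carbon avoided by one wedge = area of the triangle:
total_gtc = 0.5 * YEARS * FINAL_RATE
print(total_gtc)  # 25.0 GtC per wedge over 50 years
```

Stacking several such wedges—each a different strategy, like efficient cars or wind power—is what, in the model, adds up to a stabilized emissions trajectory.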
I can feel it in the air, so thick I can taste it. Can you? It’s the we’re-going-to-build-an-artificial-brain-at-any-moment feeling. It’s exuded into the atmosphere from news media plumes (“IBM Aims to Build Artificial Human Brain Within 10 Years”) and science-fiction movie fountains…and also from science research itself, including projects like Blue Brain and IBM’s SyNAPSE. For example, here’s a recent press release about the latter:
Today, IBM (NYSE: IBM) researchers unveiled a new generation of experimental computer chips designed to emulate the brain’s abilities for perception, action and cognition.
Now, I’m as romantic as the next scientist (as evidence, see my earlier post on science monk Carl Sagan), but even I carry around a jug of cold water for cases like this. Here are four flavors of chilled water to help clear the palate.
The Worm in the Pass
In the story about the Spartans at the Battle of Thermopylae, 300 soldiers prevent a million-man army from making its way through a narrow mountain pass. In neuroscience it is the roughly 300 neurons (302, to be exact) of the roundworm C. elegans that stand in the way of our understanding the huge collections of neurons found in our or any mammal’s brain.
We don’t often realize it, but all fashion is predicated upon human beings’ predilection for prostheses and augmentations. All clothing, bags, and shoes are augmentation to our body, skin, and feet allowing us to deal with non-tropical climates, to carry large amounts of stuff, and to deal with harsh or unforgiving terrain. If humans hadn’t already modified ourselves, the only fashion we’d have is hairstyle.
Eyeglasses and contact lenses are one of the most prolific forms of medical augmentation on the planet. In many industrialized modern cultures, eyeglasses and contacts are also a major element of fashion. Thin, small glasses are out of fashion; big, chunky frames with large lenses are in. Tomorrow it might be different. But in every case, you have glasses because you have a medical problem that needs fixing.
But what about other medical devices? Canes and even artificial legs are occasionally not merely built to work but are designed and crafted to be fashionable. Could exoskeletons, robotic limbs, and cybernetic augmentations reach a point where they are beautiful? Furthermore, could they ever become so prolific as to be fashionable? More and more, the answer looks to be yes.
The Future has stalled. Sure, some snazzy new gadgets came out this year, but all the Next Big Things are still just over the horizon. Neal Stephenson and Peter Thiel both have depressing articles trying to pin down the culprit for our technological stagnation. They both take some shots at government, at education, and at how technological progress has become self-defeating. One passage from Stephenson crystallized the argument:
Innovation can’t happen without accepting the risk that it might fail. The vast and radical innovations of the mid-20th century took place in a world that, in retrospect, looks insanely dangerous and unstable. Possible outcomes that the modern mind identifies as serious risks might not have been taken seriously—supposing they were noticed at all—by people habituated to the Depression, the World Wars, and the Cold War, in times when seat belts, antibiotics, and many vaccines did not exist. Competition between the Western democracies and the communist powers obliged the former to push their scientists and engineers to the limits of what they could imagine and supplied a sort of safety net in the event that their initial efforts did not pay off. A grizzled NASA veteran once told me that the Apollo moon landings were communism’s greatest achievement.
We’ve stopped innovating in big, world-changing ways. Clean energy, health care, and space travel are all roughly where they were in the ’70s. So what’s missing? Why has not just the West, but seemingly the entire world, suddenly slowed to a crawl in terms of technological progress?
Oddly enough, the beginnings of an answer to our innovation woes can be found in another cultural throwback: the massive protests that are Occupy Wall Street.