Amy Shira Teitel is a freelance space writer whose work appears regularly on Discovery News Space and Motherboard among many others. She blogs, mainly about the history of spaceflight, at Vintage Space, and tweets at @astVintageSpace.
Last week, NASA announced its next planetary mission. In 2016 the agency is going back to the surface of Mars with a spacecraft called InSight. The mission’s selection irked some who had hoped to see approval for one of the other, more ambitious missions up for funding: either a hopping probe sent to a comet or a sailing probe sent to the methane seas of Saturn’s moon Titan. Others balked at NASA’s ambiguity over the mission’s cost during the press announcement.
An artist’s rendition of InSight deploying its seismometer and heat-flow experiments on Mars.
InSight is part of NASA’s Discovery program, a series of low-cost missions each designed to answer one specific question. For InSight, that question is why Mars evolved into such a different terrestrial planet than the Earth, a mystery it will investigate by probing a few meters into the Martian surface. The agency says InSight’s selection was based on its low cost—currently capped at $425 million excluding launch costs—and relatively low risk. It has, in short, fewer known unknowns than the other proposals.
But while InSight itself costs less than half a billion dollars, the total value of the mission by the time it launches will be closer to $2 billion. How can NASA get that much zoom for so few bucks? By harnessing technologies developed for and proven on previous missions. The research, development, and testing that went into every previous lander take a lot of the guesswork out of this mission, helping it fly (relatively) cheap.
Aside from the Moon, Mars is the only body in the solar system that NASA has landed on more than once. With every mission, the agency learns a little more, and by recycling the technology and methods that work, it’s able to limit expensive test programs. This has played no small part in NASA’s success on the Red Planet thus far. When it comes to the vital task of getting landers safely to the surface, NASA has been reusing the same method for decades. It has its roots way back in the Apollo days.
Maggie Koerth-Baker is the author of Before the Lights Go Out: Conquering the Energy Crisis Before It Conquers Us. She is also the science editor at BoingBoing.net, where this post first appeared.
It began with a few small mistakes.
Around 12:15 on the afternoon of August 14, 2003, a software program that helps monitor how well the electric grid is working in the American Midwest shut itself down after it started getting incorrect input data. The problem was quickly fixed. But nobody turned the program back on again.
A little over an hour later, one of the six coal-fired generators at the Eastlake Power Plant in Ohio shut down. An hour after that, the alarm and monitoring system in the control room of one of the nation’s largest electric conglomerates failed. It, too, was left turned off.
Those three unrelated things—two faulty monitoring programs and one generator outage—weren’t catastrophic, in and of themselves. But they would eventually help create one of the most widespread blackouts in history. By 4:15 pm, 256 power plants were offline and 55 million people in eight states and Canada were in the dark. The Northeast Blackout of 2003 ended up costing us between $4 billion and $10 billion. That’s “billion”, with a “B”.
But this is about more than mere bad luck. The real causes of the 2003 blackout were fixable problems, and the good news is that, since then, we’ve made great strides in fixing them. The bad news, say some grid experts, is that we’re still not doing a great job of preparing our electric infrastructure for the future.
Debbie Chachra is an Associate Professor of Materials Science at the Franklin W. Olin College of Engineering, with research interests in biological materials, education, and design. You can follow her on Twitter: @debcha.
In 1956, M. King Hubbert laid out a prediction for how oil production in a nation increases, peaks, and then quickly falls off. Since then, many analysts have extended this logic and argued that global oil production will soon max out—a point called “peak oil”—which could throw the world economy into turmoil.
I’m a materials scientist by training, and one aspect of peak oil I’ve been thinking about recently is peak plastic.
The use of oil for fuel is dominant, and there’s a reason for that. Oil is remarkable—not only does it have an insanely high energy density (energy stored per unit mass), but it also allows for a high energy flux. In about 90 seconds, I can fill the tank of my car—and that’s enough energy to move it at highway speeds for five hours—but my phone, which uses a tiny fraction of the energy, needs to be charged overnight. So we’ll need to replace what oil alone can do in two different ways: new sources of renewable energy, and better batteries to store it in. And there’s no Moore’s law for batteries. Getting something that’s even close to the energy density and flux of oil will require new materials chemistry, and researchers are working hard to create better batteries. Still, this combination of energy density and flux is valuable enough that we’ll likely still extract every drop of oil that we can, to use as fuel.
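The tank-versus-phone comparison can be put in rough numbers. This is a back-of-the-envelope sketch with assumed figures (gasoline at roughly 34 MJ/L, a 50-liter tank, a 12 Wh phone battery charged over 8 hours); only the 90-second fill and the overnight charge come from the text.

```python
# Rough comparison of energy flux (power) for refueling a car versus
# charging a phone. All numeric values below are assumptions for
# illustration, not measurements.

GASOLINE_MJ_PER_L = 34.0   # assumed energy density of gasoline
TANK_LITERS = 50.0         # assumed tank size
FILL_SECONDS = 90.0        # fill time cited in the text

PHONE_WH = 12.0            # assumed phone battery capacity
CHARGE_HOURS = 8.0         # "charged overnight"

# Power = energy delivered / time taken.
pump_watts = TANK_LITERS * GASOLINE_MJ_PER_L * 1e6 / FILL_SECONDS
phone_watts = PHONE_WH * 3600.0 / (CHARGE_HOURS * 3600.0)

print(f"Pump delivers ~{pump_watts / 1e6:.1f} MW")   # tens of megawatts
print(f"Charger delivers ~{phone_watts:.1f} W")      # a watt or two
print(f"Ratio: ~{pump_watts / phone_watts:,.0f}x")
```

Under these assumptions the gas pump transfers energy roughly ten million times faster than the phone charger, which is the "flux" gap that batteries have to close.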
But if we’re running out of oil, that also means that we’re running out of plastic. Compared to fuel and agriculture, plastic is small potatoes. Even though plastics are made on a massive industrial scale, they still account for only about 2% of the world’s oil consumption. So recycling plastic saves plastic and reduces its impact on the environment, but it certainly isn’t going to save us from the end of oil. Peak oil means peak plastic. And that means that much of the physical world around us will have to change.
Asteroid mining brings up some tricky legal questions.
By Frans von der Dunk, as told to Veronique Greenwood.
Frans von der Dunk is the Harvey and Susan Perlman Alumni and Othmer Professor of Space Law at the University of Nebraska College of Law. In addition, he is the director of a space law and policy consultancy, Black Holes, based in the Netherlands.
Within weeks of the launch of Sputnik I in 1957, after the U.S. made no protest against the satellite flying over its territory, space effectively became recognized as a global commons, free for all. The UN Committee on the Peaceful Uses of Outer Space, charged with codifying existing law and developing it further to apply to space, was brought into being, with all major nations involved. The fundamental rule of space law they adopted is that no single nation can exercise territorial sovereignty over any part of outer space. American astronauts planting the flag on the moon did not, and never could, thereby turn the moon into U.S. territory.
Now that private companies are making forays into space, though—with SpaceX’s Dragon capsule mission last week only the first of many, and plans to mine asteroids for private profit seeming more and more plausible—we’re facing a sudden need to update the applicable laws. How will we deal with property ownership in space? Who is responsible for safety when private companies begin to ferry public employees, like NASA astronauts, to the International Space Station?
By Keith Kloor, a freelance journalist whose stories have appeared in a range of publications, from Science to Smithsonian. Since 2004, he’s been an adjunct professor of journalism at New York University. You can find him on Twitter here.
Greens are often mocked these days as self-righteous, hybrid-driving, politically correct foodies (see this episode of South Park and this scene from Portlandia). But it wasn’t that long ago—when Earth First! and the Earth Liberation Front were in the headlines—that greens were perceived as militant activists. They camped out in trees to stop clear-cutting and intercepted whaling ships and oil and gas rigs on the high seas.
In recent years, a new forceful brand of green activism has come back into vogue. One action (carried out with Monkey Wrenching flair) became a touchstone for the nascent climate movement. In 2011, climate activists engaged in a multi-day civil disobedience event that has since turned a proposed oil pipeline into a rallying cause for American environmental groups.
This, combined with grassroots opposition to gas fracking, has energized the sagging global green movement. But though activist greens have frequently claimed to stand behind science, their recent actions, especially in regard to genetically modified organisms, or GMOs, say otherwise.
For instance, whether all the claims of fracking’s environmental contamination are true remains to be determined. (There are legitimate ecological and health issues—but also overstated ones. See this excellent Popular Mechanics deconstruction of all the “bold claims made about hydraulic fracturing.”) Meanwhile, an ancillary debate over natural gas and climate change has broken out, further inflaming an already combustible issue. Whatever the outcome, it’s likely that science will matter less than the politics, as is often the case in such debates.
That’s certainly the case when it comes to GMOs, which have been increasingly targeted by green-minded activists in Europe. The big story on this front of late has been the planned act of vandalism on the government-funded Rothamsted research station in the UK. Scientists there are testing an insect-resistant strain of genetically modified wheat that is objectionable to an anti-GMO group called Take the Flour Back. The attack on the experimental wheat plot is slated for May 27. The group explains that it intends to destroy the plot because “this open air trial poses a real, serious and imminent contamination threat to the local environment and the UK wheat industry.”
By Keith Kloor, a freelance journalist whose stories have appeared in a range of publications, from Science to Smithsonian. Since 2004, he’s been an adjunct professor of journalism at New York University. This piece is a follow-up from a post on his blog, Collide-a-Scape.
In Sleeper, Woody Allen finds that socializing is different after the ’70s.
Environmentalism? Not so much.
If you were cryogenically frozen in the early 1970s, like Woody Allen was in Sleeper, and brought back to life today, you would obviously find much changed about the world.
Except environmentalism and its underlying precepts. That would be a familiar and quaint relic. You would wake up from your Rip Van Winkle period and everything around you would be different, except the green movement. It’s still anti-nuclear, anti-technology, anti-industrial civilization. It still talks in mushy metaphors from the Age of Aquarius, cooing over Mother Earth and the Balance of Nature. And most of all, environmentalists still act like Old Testament prophets, warning of a plague of environmental ills about to rain down on humanity.
For example, you may have heard that a bunch of scientists produced a landmark report that concludes the earth is destined for ecological collapse, unless global population and consumption rates are restrained. No, I’m not talking about the UK’s just-published Royal Society report, which, among other things, recommends that developed countries put a brake on economic growth. I’m talking about that other landmark report from 1972, the one that became a totem of the environmental movement.
I mention the 40-year-old Limits to Growth book in connection with the new Royal Society report not just to point up their Malthusian similarities (which Mark Lynas flags here), but also to demonstrate what a time warp the collective environmental mindset is stuck in. Even some British greens have recoiled in disgust at the outdated assumptions underlying the Royal Society’s report. Chris Goodall, author of Ten Technologies to Save the Planet, told the Guardian: “What an astonishingly weak, cliché ridden report this is…’Consumption’ to blame for all our problems? Growth is evil? A rich economy with technological advances is needed for radical decarbonisation. I do wish scientists would stop using their hatred of capitalism as an argument for cutting consumption.”
Goodall, it turns out, is exactly the kind of greenie (along with Lynas) I had in mind when I argued last week that only forward-thinking modernists could save environmentalism from being consigned to junkshop irrelevance. I juxtaposed today’s green modernist with the backward-thinking “green traditionalist,” who I said remained wedded to environmentalism’s doom and gloom narrative and resistant to the notion that economic growth was good for the planet. Modernists, I wrote, offered the more viable blueprint for sustainability:
By David H. Freedman, a journalist who’s contributed to many magazines, including DISCOVER, where he writes the Impatient Futurist column. His latest book, Wrong: Why Experts Keep Failing Us—and How to Know When Not to Trust Them, came out in 2010. Find him on Twitter at @dhfreedman.
Computer glasses have arrived, or are about to. Google has released some advance information about its Project Glass, which essentially embeds smartphone-like capabilities, including a video display, into eyeglasses. A video put out by the company suggests we’ll be able to walk down the street—and, we can extrapolate, distractedly walk right into the street, or drive down the street—while watching and listening to video chats, catching up on social networks (including Google+, of course), and getting turn-by-turn directions (though you’ll be on your own in avoiding people, lampposts and buses, unless there’s a radar-equipped version in the works).
Toshiba developed a six-pound surround-sight bubble helmet. It didn’t take off.
The reviews have mostly been cautiously enthusiastic. But they seem to be glossing over what an astounding leap this is for technophiles. I don’t mean in the sense that this is an amazing new technology. I mean I’m surprised that we seem to be seriously discussing wearing computer glasses as if it weren’t the dorkiest thing in the world—a style and coolness and common-sense violation of galactic magnitude. Video glasses are the postmodern version of the propeller beanie cap. These things have been around for 30 years. You could buy them at Brookstone, or via in-flight shopping catalogs. As far as I could tell, pretty much no one was interested in plunking these things down on their nose. What happened?
More interestingly, the apparent sudden willingness to consider wearing computers on our faces may be part of a larger trend. Consider computer tablets, 3D movies, and video phone calls—other consumer technologies that have been long talked about, long offered in various forms, and long soundly rejected—only to gain mass acceptance relatively recently and suddenly.
The obvious explanation for the current triumph of technologies that never seemed to catch on is that the technologies have simply improved enough, and dropped in price enough, to make them sufficiently appealing or useful to a large percentage of the population. But I don’t think that’s nearly a full explanation. Yes, the iPad offers a number of major improvements over Microsoft Tablet PC products circa 2000—but not so much that it could account for the complete shunning of the latter and the total adoration of the former. Likewise, the polarized-glasses-based 3D movie experience of the 1990s, as seen in IMAX and Disney park theaters at the time, really was fairly comparable to what you see in state-of-the-art theaters today.
I think three things are going on:
Mark Changizi is an evolutionary neurobiologist and director of human cognition at 2AI Labs. He is the author of The Brain from 25,000 Feet, The Vision Revolution, and his newest book, Harnessed: How Language and Music Mimicked Nature and Transformed Ape to Man.
Also check out his related commentary on a promotional video for Project Glass, Google’s augmented-reality project.
Experience happens here—from my point of view. It could happen over there, or from a viewpoint of an objective nowhere. But instead it happens from the confines of my own body. In fact, it happens from my eyes (or from a viewpoint right between the eyes). That’s where I am. That’s consciousness central—my “soul.” In fact, a recent study by Christina Starmans at Yale showed that children and adults presume that this “soul” lies in the eyes (even when the eyes are positioned, in cartoon characters, in unusual spots like the chest).
The question I wish to raise here is whether we can teleport our soul, and, specifically, how best we might do it. I’ll suggest that we may be able to get near-complete soul teleportation into the movie (or video game) experience, and we can do so with some fairly simple upgrades to the 3D glasses we already wear in movies.
Consider for starters a simple sort of teleportation, the “rubber arm illusion.” If you place your arm under a table out of your view, and have a fake rubber arm on the table where your arm usually would be, an experimenter who strokes the rubber arm while simultaneously stroking your real arm in the same spot will trick your brain into believing that the rubber arm is your arm. Your arm—or your arm’s “soul”—has “teleported” out of your real body under the table and into a rubber arm sitting well outside your body.
It’s the same basic trick to get the rest of the body to transport. If you wear a virtual reality suit that can touch you in a variety of spots with actuators, you can be presented with a virtual, movie-like experience in which you see your virtual body being touched while the bodysuit simultaneously touches your real body in those same spots. Pretty soon your entire body has teleported itself into the virtual body.
And… Yawn, we all know this. We saw James Cameron’s Avatar, after all, which uses this as the premise.
My question here is not whether such self-teleportation is possible, but whether it may be possible to actually do this in theaters and video games. Soon.
Charles Q. Choi is a science journalist who has also written for Scientific American, The New York Times, Wired, Science, and Nature. In his spare time, he has ventured to all seven continents.
The Fertile Crescent in the Near East was long known as “the cradle of civilization,” and at its heart lies Mesopotamia, home to the earliest known cities, such as Ur. Now satellite images are helping uncover the history of human settlements in this storied area between the Tigris and Euphrates rivers, the latest example of how two very modern technologies—sophisticated computing and images of Earth taken from space—are helping shed light on long-extinct species and the earliest complex human societies.
In a study published this week in PNAS, the fortuitously named Harvard archaeologist Jason Ur worked with Bjoern Menze at MIT to develop a computer algorithm that could detect types of soil known as anthrosols in satellite images. Anthrosols are created by long-term human activity, and are finer, lighter-colored, and richer in organic material than surrounding soil. The algorithm was trained on the patterns of light reflected by anthrosols at known sites, enabling the software to spot anthrosols at as-yet-unknown sites.
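To give a flavor of this kind of supervised pixel classification, here is a minimal sketch under stated assumptions; it is purely illustrative and does not reproduce Ur and Menze’s actual method. Each “pixel” is a made-up vector of reflectance values in a few spectral bands, anthrosols are assumed lighter (higher reflectance), and new pixels are labeled by distance to class centroids learned from synthetic training data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic labeled training pixels from "known sites": 3 spectral bands.
# Anthrosols assumed lighter-colored, i.e. higher reflectance values.
anthrosol = rng.normal(loc=[0.45, 0.50, 0.55], scale=0.05, size=(200, 3))
plain_soil = rng.normal(loc=[0.25, 0.30, 0.35], scale=0.05, size=(200, 3))

# "Training" here is just computing the mean spectrum of each class.
centroids = np.array([anthrosol.mean(axis=0), plain_soil.mean(axis=0)])

def classify(pixels):
    """Return True where a pixel is closer to the anthrosol centroid."""
    d = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
    return d[:, 0] < d[:, 1]

# Two unlabeled pixels, one near each class center.
test_pixels = np.array([[0.46, 0.51, 0.54],
                        [0.24, 0.31, 0.36]])
print(classify(test_pixels))  # first flagged as probable anthrosol
```

A real system would use many more bands, far richer training data, and a probabilistic classifier (the published map reports anthrosol probability, not a hard label), but the train-on-known-sites, score-everywhere-else workflow is the same.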
This map shows Ur and Menze’s analysis of anthrosol probability for part of Mesopotamia.
Armed with this method to detect ancient human habitation from space, researchers analyzed a 23,000-square-kilometer area of northeastern Syria and mapped more than 14,000 sites spanning 8,000 years. To find out more about how the sites were used, Ur and Menze compared the satellite images with data on the elevation and volume of these sites previously gathered by the Space Shuttle. The ancient settlements the scientists analyzed were built atop the remains of their mostly mud-brick predecessors, so measuring the height and volume of sites could give an idea of the long-term attractiveness of each locale. Ur and Menze identified more than 9,500 elevated sites that cover 157 square kilometers and contain 700 million cubic meters of collapsed architecture and other settlement debris, more than 250 times the volume of concrete making up Hoover Dam.
“I could do this on the ground, but it would probably take me the rest of my life to survey an area this size,” Ur said. Indeed, field scientists who normally prospect for sites in an educated-guess, trial-and-error manner are increasingly leveraging satellite imagery to their advantage.
By now you may have heard about Oxford Nanopore’s new whole-genome sequencing technology, which promises to take the enterprise of sequencing an individual’s genome out of the basic science laboratory and into the consumer mass market. From what I gather, the hype is not just vaporware; it’s a foretaste of what’s to come. But at the end of the day, this particular device is not the important point in any case. Do you know which firm popularized television? Probably not. When technology goes mainstream, it ceases to be buzzworthy. Rather, it becomes seamlessly integrated into our lives and disappears into the fabric of our daily background humdrum. The banality of what was once innovation is a testament to its success. We’re on the cusp of the age when genomics becomes banal, and cutting-edge science becomes everyday utility.
Granted, the short-term impact of mass personal genomics is still going to be exceedingly technical. Scientific genealogy nuts will purchase the latest software and argue over the esoteric aspects of “coverage” (the redundancy of the sequence data, which correlates with accuracy) and the necessity of supplementing the genome with the epigenome. Physicians and other health professionals will add genomic information to their diagnostic arsenal, and an alphabet soup of new genome-related terms will wash over you as you visit a doctor’s office. Your genome is not you, but it certainly informs who you are. Your individual genome will become ever more important to your health care.
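To make “coverage” concrete, here is a small sketch using the classic Lander-Waterman estimate. All figures (genome size, read length, read count) are illustrative assumptions, not specifications of any real instrument.

```python
import math

# Assumed, illustrative figures for a sequencing run.
genome_size = 3_200_000_000   # roughly a human genome, in base pairs
read_length = 10_000          # assumed long-read length, in base pairs
num_reads = 9_600_000         # assumed number of reads in the run

# Coverage: average number of reads overlapping any given base.
coverage = num_reads * read_length / genome_size

# Lander-Waterman estimate: the fraction of bases covered by at least
# one read, assuming reads land uniformly at random on the genome.
fraction_covered = 1 - math.exp(-coverage)

print(f"{coverage:.0f}x coverage; "
      f"~{fraction_covered:.1%} of bases covered at least once")
```

The point of the redundancy is error correction: each base is read many times, so random per-read errors can be voted away, which is why higher coverage correlates with accuracy.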