Category: Technology

Droopy Lines & Overloaded Grids: Inside the Worst-Ever Blackouts in India and the U.S.

By Guest Blogger | August 7, 2012 12:30 pm

Maggie Koerth-Baker is the author of Before the Lights Go Out: Conquering the Energy Crisis Before It Conquers Us. She is also the science editor at BoingBoing.net, where this post first appeared.


It began with a few small mistakes.

Around 12:15 on the afternoon of August 14, 2003, a software program that helps monitor how well the electric grid is working in the American Midwest shut itself down after it started receiving incorrect input data. The problem was quickly fixed. But nobody turned the program back on again.

A little over an hour later, one of the six coal-fired generators at the Eastlake Power Plant in Ohio shut down. An hour after that, the alarm and monitoring system in the control room of one of the nation’s largest electric conglomerates failed. It, too, was left turned off.

Those three unrelated things—two faulty monitoring programs and one generator outage—weren’t catastrophic, in and of themselves. But they would eventually help create one of the most widespread blackouts in history. By 4:15 pm, 256 power plants were offline and 55 million people in eight states and Canada were in the dark. The Northeast Blackout of 2003 ended up costing us between $4 billion and $10 billion. That’s “billion”, with a “B”.

But this is about more than mere bad luck. The real causes of the 2003 blackout were fixable problems, and the good news is that, since then, we’ve made great strides in fixing them. The bad news, say some grid experts, is that we’re still not doing a great job of preparing our electric infrastructure for the future.

Read More

CATEGORIZED UNDER: Environment, Technology, Top Posts

Peak Plastic: One Generation’s Trash Is Another Generation’s Treasure

By Guest Blogger | July 2, 2012 10:23 am

Debbie Chachra is an Associate Professor of Materials Science at the Franklin W. Olin College of Engineering, with research interests in biological materials, education, and design. You can follow her on Twitter: @debcha.

In 1956, M. King Hubbert laid out a prediction for how oil production in a nation increases, peaks, and then rapidly declines. Since then, many analysts have extended this logic to argue that global oil production will soon max out—a point called “peak oil”—which could throw the world economy into turmoil.

I’m a materials scientist by training, and one aspect of peak oil I’ve been thinking about recently is peak plastic.

The use of oil for fuel is dominant, and there’s a reason for that. Oil is remarkable—not only does it have an insanely high energy density (energy stored per unit mass), but it also allows for a high energy flux. In about 90 seconds, I can fill the tank of my car—and that’s enough energy to move it at highway speeds for five hours—but my phone, which uses a tiny fraction of the energy, needs to be charged overnight. So we’ll need to replace what oil alone can do in two different ways: new sources of renewable energy, and also better batteries to store it in. And there’s no Moore’s law for batteries. Getting something that’s even close to the energy density and flux of oil will require new materials chemistry, and researchers are working hard to create better batteries. Still, this combination of energy density and flux is valuable enough that we’ll likely still extract every drop of oil that we can, to use as fuel.
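To make the flux point concrete, here is a rough back-of-the-envelope comparison in Python. Every figure in it (gasoline energy density, tank size, phone battery capacity, charge time) is an assumed round number for illustration, not something taken from this post.

# Rough comparison of energy flux (power) during refueling vs. phone charging.
# All figures below are assumed, round illustrative values.

GASOLINE_MJ_PER_LITER = 34.0   # approximate energy density of gasoline
TANK_LITERS = 50.0             # assumed tank size
FILL_TIME_S = 90.0             # the ~90-second fill-up mentioned above

PHONE_BATTERY_WH = 10.0        # assumed phone battery capacity
CHARGE_TIME_S = 8 * 3600.0     # assumed overnight charge (~8 hours)

tank_energy_j = GASOLINE_MJ_PER_LITER * 1e6 * TANK_LITERS
refuel_flux_w = tank_energy_j / FILL_TIME_S        # roughly 19 MW

phone_energy_j = PHONE_BATTERY_WH * 3600.0
charge_flux_w = phone_energy_j / CHARGE_TIME_S     # roughly 1.25 W

print(f"Refueling delivers energy at about {refuel_flux_w / 1e6:.0f} MW")
print(f"Phone charging delivers energy at about {charge_flux_w:.2f} W")
print(f"That is a ratio of roughly {refuel_flux_w / charge_flux_w:,.0f} to 1")

Even if the assumed numbers are off by a factor of a few, the gap is about seven orders of magnitude, which is what the "no Moore's law for batteries" problem is up against.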

But if we’re running out of oil, that also means that we’re running out of plastic. Compared to fuel and agriculture, plastic is small potatoes. Even though plastics are made on a massive industrial scale, they still account for only about 2% of the world’s oil consumption. So recycling plastic saves plastic and reduces its impact on the environment, but it certainly isn’t going to save us from the end of oil. Peak oil means peak plastic. And that means that much of the physical world around us will have to change.

Read More

CATEGORIZED UNDER: Technology, Top Posts

Lawyers in Space! The New Era of Spaceflight Needs Some New Rules

By Veronique Greenwood | June 7, 2012 12:13 pm

Asteroid mining brings up some tricky legal questions.

By Frans von der Dunk, as told to Veronique Greenwood.

Frans von der Dunk is the Harvey and Susan Perlman Alumni and Othmer Professor of Space Law at the University of Nebraska College of Law. In addition, he is the director of a space law and policy consultancy, Black Holes, based in the Netherlands.

Within weeks of the launch of Sputnik I in 1957, after the U.S. made no protest against the satellite flying over its territory, space effectively became recognized as a global commons, free for all. The UN Committee on the Peaceful Uses of Outer Space, charged with codifying existing law and developing it further to apply to space, was brought into being, with all major nations involved. The fundamental rule of space law they adopted is that no single nation can exercise territorial sovereignty over any part of outer space. American astronauts planting the flag on the moon did not, and never could, thereby turn the moon into U.S. territory.

Now that private companies are making forays into space, though—with SpaceX’s Dragon capsule mission last week only the first of many, and plans to mine asteroids for private profit seeming more and more plausible—we’re facing a sudden need to update the applicable laws. How will we deal with property ownership in space? Who is responsible for safety when private companies begin to ferry public employees, like NASA astronauts, to the International Space Station?

Read More

CATEGORIZED UNDER: Technology, Top Posts

Is Environmentalism Anti-Science?

By Keith Kloor | May 24, 2012 12:10 pm

By Keith Kloor, a freelance journalist whose stories have appeared in a range of publications, from Science to Smithsonian. Since 2004, he’s been an adjunct professor of journalism at New York University. You can find him on Twitter here.

 

Greens are often mocked as self-righteous, hybrid-driving, politically correct foodies these days (see this episode of South Park and this scene from Portlandia). But it wasn’t that long ago—when Earth First! and the Earth Liberation Front were in the headlines—that greens were perceived as militant activists. They camped out in trees to stop clear-cutting and intercepted whaling ships and oil and gas rigs on the high seas.

In recent years, a new, forceful brand of green activism has come back into vogue. One action (carried out with monkey-wrenching flair) became a touchstone for the nascent climate movement. In 2011, climate activists engaged in a multi-day civil disobedience event that has since turned a proposed oil pipeline into a rallying cause for American environmental groups.

This, combined with grassroots opposition to gas fracking, has energized the sagging global green movement. But though activist greens have frequently claimed to stand behind science, their recent actions, especially in regard to genetically modified organisms, or GMOs, say otherwise.

For instance, whether all the claims that fracking contaminates the environment are true remains to be seen. (There are legitimate ecological and health issues—but also overstated ones. See this excellent Popular Mechanics deconstruction of all the “bold claims made about hydraulic fracturing.”) Meanwhile, an ancillary debate over natural gas and climate change has broken out, further inflaming an already combustible issue. Whatever the outcome, it’s likely that science will matter less than the politics, as is often the case in such debates.

That’s certainly the case when it comes to GMOs, which have been increasingly targeted by green-minded activists in Europe. The big story on this front of late has been a planned act of vandalism against the government-funded Rothamsted research station in the UK. Scientists there are testing an insect-resistant strain of genetically modified wheat that an anti-GMO group called Take the Flour Back finds objectionable. The attack on the experimental wheat plot is slated for May 27. The group explains that it intends to destroy the plot because “this open air trial poses a real, serious and imminent contamination threat to the local environment and the UK wheat industry.”

http://youtu.be/JYEN_tvqQaw

Read More

CATEGORIZED UNDER: Environment, Technology, Top Posts

The Limits to Environmentalism

By Keith Kloor | April 27, 2012 11:58 am

By Keith Kloor, a freelance journalist whose stories have appeared in a range of publications, from Science to Smithsonian. Since 2004, he’s been an adjunct professor of journalism at New York University. This piece is a follow-up from a post on his blog, Collide-a-Scape.

 

In Sleeper, Woody Allen finds that socializing is different after the ’70s.
Environmentalism? Not so much.

If you were cryogenically frozen in the early 1970s, like Woody Allen was in Sleeper, and brought back to life today, you would obviously find much changed about the world.

Except environmentalism and its underlying precepts. That would be a familiar and quaint relic. You would wake up from your Rip Van Winkle period and everything around you would be different, except the green movement. It’s still anti-nuclear, anti-technology, anti-industrial civilization. It still talks in mushy metaphors from the Age of Aquarius, cooing over Mother Earth and the Balance of Nature. And most of all, environmentalists are still acting like Old Testament prophets, warning of a plague of environmental ills about to rain down on humanity.

For example, you may have heard that a bunch of scientists produced a landmark report that concludes the earth is destined for ecological collapse, unless global population and consumption rates are restrained. No, I’m not talking about the UK’s just-published Royal Society report, which, among other things, recommends that developed countries put a brake on economic growth. I’m talking about that other landmark report from 1972, the one that became a totem of the environmental movement.

I mention the 40-year-old Limits to Growth book in connection with the new Royal Society report not just to point up their Malthusian similarities (which Mark Lynas flags here), but also to demonstrate what a time warp the collective environmental mindset is stuck in. Even some British greens have recoiled in disgust at the outdated assumptions underlying the Royal Society’s report. Chris Goodall, author of Ten Technologies to Save the Planet, told the Guardian: “What an astonishingly weak, cliché ridden report this is…’Consumption’ to blame for all our problems? Growth is evil? A rich economy with technological advances is needed for radical decarbonisation. I do wish scientists would stop using their hatred of capitalism as an argument for cutting consumption.”

Goodall, it turns out, is exactly the kind of greenie (along with Lynas) I had in mind when I argued last week that only forward-thinking modernists could save environmentalism from being consigned to junkshop irrelevance. I juxtaposed today’s green modernist with the backward-thinking “green traditionalist,” who I said remained wedded to environmentalism’s doom-and-gloom narrative and resistant to the notion that economic growth was good for the planet. Modernists, I wrote, offered the more viable blueprint for sustainability:

Read More

CATEGORIZED UNDER: Environment, Technology, Top Posts

The Triumph of Technodorkiness: Why We’re Gladly Turning Ourselves Into Yesterday’s Losers

By Guest Blogger | April 17, 2012 9:08 am

By David H. Freedman, a journalist who’s contributed to many magazines, including DISCOVER, where he writes the Impatient Futurist column. His latest book, Wrong: Why Experts Keep Failing Us—and How to Know When Not to Trust Them, came out in 2010. Find him on Twitter at @dhfreedman.

 

Computer glasses have arrived, or are about to. Google has released some advance information about its Project Glass, which essentially embeds smartphone-like capabilities, including a video display, into eyeglasses. A video put out by the company suggests we’ll be able to walk down the street—and, we can extrapolate, distractedly walk right into the street, or drive down the street—while watching and listening to video chats, catching up on social networks (including Google+, of course), and getting turn-by-turn directions (though you’ll be on your own in avoiding people, lampposts and buses, unless there’s a radar-equipped version in the works).

Toshiba developed a six-pound surround-sight bubble helmet. It didn’t take off.

The reviews have mostly been cautiously enthusiastic. But they seem to be glossing over what an astounding leap this is for technophiles. I don’t mean in the sense that this is an amazing new technology. I mean I’m surprised that we seem to be seriously discussing wearing computer glasses as if it weren’t the dorkiest thing in the world—a style and coolness and common-sense violation of galactic magnitude. Video glasses are the postmodern version of the propeller beanie cap. These things have been around for 30 years. You could buy them at Brookstone, or via in-flight shopping catalogs. As far as I could tell, pretty much no one was interested in plunking these things down on their nose. What happened?

More interesting, the apparent sudden willingness to consider wearing computers on our faces may be part of a larger trend. Consider computer tablets, 3D movies, and video phone calls—other consumer technologies that were long talked about, long offered in various forms, and long soundly rejected—only to gain sudden mass acceptance relatively recently.

The obvious explanation for the current triumph of technologies that never seemed to catch on is that the technologies have simply improved enough, and dropped in price enough, to make them sufficiently appealing or useful to a large percentage of the population. But I don’t think that’s nearly a full enough explanation. Yes, the iPad offers a number of major improvements over Microsoft Tablet PC products circa 2000—but not so much that it could account for the complete shunning of the latter and the total adoration of the former. Likewise, the polarized-glasses-based 3D movie experience of the 1990s, as seen in IMAX and Disney park theaters at the time, really was fairly comparable to what you see in state-of-the-art theaters today.

I think three things are going on:

Read More

CATEGORIZED UNDER: Technology, Top Posts

Cheap Soul Teleportation, Coming Soon to a Theater Near You?

By Mark Changizi | April 10, 2012 12:39 pm

Mark Changizi is an evolutionary neurobiologist and director of human cognition at 2AI Labs. He is the author of The Brain from 25,000 Feet, The Vision Revolution, and his newest book, Harnessed: How Language and Music Mimicked Nature and Transformed Ape to Man.

Also check out his related commentary on a promotional video for Project Glass, Google’s augmented-reality project.

 

Experience happens here—from my point of view. It could happen over there, or from a viewpoint of an objective nowhere. But instead it happens from the confines of my own body. In fact, it happens from my eyes (or from a viewpoint right between the eyes). That’s where I am. That’s consciousness central—my “soul.” In fact, a recent study by Christina Starmans at Yale showed that children and adults presume that this “soul” lies in the eyes (even when the eyes are positioned, in cartoon characters, in unusual spots like the chest).

The question I wish to raise here is whether we can teleport our soul, and, specifically, how best we might do it. I’ll suggest that we may be able to get near-complete soul teleportation into the movie (or video game) experience, and we can do so with some fairly simple upgrades to the 3D glasses we already wear in movies.

Consider for starters a simple sort of teleportation, the “rubber arm illusion.” If you place your arm under a table, out of your view, and put a fake rubber arm on the table where your arm usually would be, an experimenter who strokes the rubber arm while simultaneously stroking your real arm in the same spot will trick your brain into believing that the rubber arm is your arm. Your arm—or your arm’s “soul”—has “teleported” from under the table and within your real body into a rubber arm sitting well outside of your body.

It’s the same basic trick to get the rest of the body to transport. If you wear a virtual-reality suit able to touch you in a variety of spots with actuators, you can be presented with a virtual experience, a movie-like experience, in which you see your virtual body being touched while the bodysuit simultaneously touches your real body in those same spots. Pretty soon your entire body has teleported itself into the virtual body.
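The mechanism hinges on tight timing between what you see and what you feel. Here is a minimal, self-contained sketch of that synchrony loop in Python; the body locations, the fake "suit," and the 50-millisecond tolerance are hypothetical stand-ins for illustration, not any real VR hardware or API.

import time
import random

# Simulated visual-tactile synchrony loop: whenever the virtual body is
# touched, tap the wearer's real body in the same spot, as fast as possible.
BODY_LOCATIONS = ["left_forearm", "right_forearm", "chest", "left_shin"]
MAX_LAG_S = 0.05  # asynchrony beyond roughly tens of milliseconds weakens the illusion

def virtual_touch_events(n=5):
    """Stand-in for the renderer reporting touches on the virtual body."""
    for _ in range(n):
        time.sleep(random.uniform(0.1, 0.4))
        yield random.choice(BODY_LOCATIONS), time.monotonic()

def fire_actuator(location):
    """Stand-in for the bodysuit tapping the wearer at the same spot."""
    print(f"suit: tap {location}")

for location, t_visual in virtual_touch_events():
    lag = time.monotonic() - t_visual
    if lag <= MAX_LAG_S:
        fire_actuator(location)  # touch the real body where the avatar was touched
    else:
        # Better to drop a late tap than deliver it out of sync:
        # asynchronous visual and tactile input breaks the ownership illusion.
        print(f"suit: skip {location} (lag {lag * 1000:.0f} ms)")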

And… Yawn, we all know this. We saw James Cameron’s Avatar, after all, which uses this as the premise.

My question here is not whether such self-teleportation is possible, but whether it may be possible to actually do this in theaters and video games. Soon.

Read More

CATEGORIZED UNDER: Mind & Brain, Technology, Top Posts

Eyes in the Sky Look Back in Time

By Charles Choi | March 22, 2012 11:31 am

Charles Q. Choi is a science journalist who has also written for Scientific American, The New York Times, Wired, Science, and Nature. In his spare time, he has ventured to all seven continents.

The Fertile Crescent in the Near East was long known as “the cradle of civilization,” and at its heart lies Mesopotamia, home to the earliest known cities, such as Ur. Now satellite images are helping uncover the history of human settlements in this storied area between the Tigris and Euphrates rivers, the latest example of how two very modern technologies—sophisticated computing and images of Earth taken from space—are helping shed light on long-extinct species and the earliest complex human societies.

In a study published this week in PNAS, the fortuitously named Harvard archaeologist Jason Ur worked with Bjoern Menze at MIT to develop a computer algorithm that could detect types of soil known as anthrosols in satellite images. Anthrosols are created by long-term human activity, and are finer, lighter-colored, and richer in organic material than surrounding soil. The algorithm was trained on the patterns of light that anthrosols at known sites reflect, giving the software the ability to spot anthrosols at as-yet-unknown sites.
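To give a flavor of what this kind of supervised pixel classification involves, here is a generic sketch in Python using scikit-learn. The band count, reflectance values, and choice of model are assumptions made purely for illustration; they are not taken from Ur and Menze's paper.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Illustrative sketch: train a classifier on multispectral reflectance values
# for labeled pixels (1 = known anthrosol, 0 = surrounding soil), then score
# unlabeled pixels. Band count and values are made up for this example.
rng = np.random.default_rng(0)
n_bands = 6

anthrosol_pixels = rng.normal(loc=0.35, scale=0.05, size=(200, n_bands))
background_pixels = rng.normal(loc=0.25, scale=0.05, size=(800, n_bands))

X = np.vstack([anthrosol_pixels, background_pixels])
y = np.array([1] * len(anthrosol_pixels) + [0] * len(background_pixels))

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Applying the trained model to new imagery yields an anthrosol probability
# per pixel, which is what a probability map like the one below visualizes.
new_pixels = rng.normal(loc=0.30, scale=0.07, size=(5, n_bands))
print(clf.predict_proba(new_pixels)[:, 1])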

This map shows Ur and Menze’s analysis of anthrosol probability for part of Mesopotamia.

Armed with this method to detect ancient human habitation from space, researchers analyzed a 23,000-square-kilometer area of northeastern Syria and mapped more than 14,000 sites spanning 8,000 years. To find out more about how the sites were used, Ur and Menze compared the satellite images with data on the elevation and volume of these sites previously gathered by the Space Shuttle. The ancient settlements the scientists analyzed were built atop the remains of their mostly mud-brick predecessors, so measuring the height and volume of sites could give an idea of the long-term attractiveness of each locale. Ur and Menze identified more than 9,500 elevated sites that cover 157 square kilometers and contain 700 million cubic meters of collapsed architecture and other settlement debris, more than 250 times the volume of concrete making up Hoover Dam.

“I could do this on the ground, but it would probably take me the rest of my life to survey an area this size,” Ur said. Indeed, field scientists who normally prospect for sites by educated guess and trial and error are increasingly leveraging satellite imagery to their advantage.

Read More

Bio-Info-Tech: The Cyborg Baby of Cheap Genomes and Cloud Data

By Razib Khan | March 8, 2012 9:00 am

By now you may have heard about Oxford Nanopore’s new whole-genome sequencing technology, which has the promise of taking the enterprise of sequencing an individual’s genome out of the basic science laboratory, and out to the consumer mass market. From what I gather the hype is not just vaporware; it’s a foretaste of what’s to come. But at the end of the day, this particular device is not the important point in any case. Do you know which firm popularized television? Probably not. When technology goes mainstream, it ceases to be buzzworthy. Rather, it becomes seamlessly integrated into our lives and disappears into the fabric of our daily background humdrum. The banality of what was innovation is a testament to its success. We’re on the cusp of the age when genomics becomes banal, and cutting-edge science becomes everyday utility.

Granted, the short-term impact of mass personal genomics is still going to be exceedingly technical. Scientific genealogy nuts will purchase the latest software, and argue over the esoteric aspects of “coverage” (the redundancy of the sequence data, which correlates with accuracy) and the necessity of supplementing the genome with the epigenome. Physicians and other health professionals will add genomic information to the arsenal of their diagnostic toolkit, and an alphabet soup of new genome-related terms will wash over you as you visit a doctor’s office. Your genome is not you, but it certainly informs who you are. Your individual genome will become ever more important to your health care.
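For readers new to the jargon, “coverage” has a simple arithmetic definition: the expected number of reads overlapping any given base of the genome. A minimal sketch, with assumed round numbers rather than the specs of any particular sequencing platform:

# Expected sequencing coverage: how many reads, on average, overlap each base.
# Read length, read count, and genome size are assumed round numbers for
# illustration, not figures from any particular sequencing platform.

genome_length_bp = 3.1e9      # approximate human genome size
read_length_bp = 10_000       # assumed long-read length
n_reads = 9_300_000           # assumed number of reads in a run

coverage = n_reads * read_length_bp / genome_length_bp
print(f"Expected coverage: {coverage:.0f}x")   # ~30x

# Higher coverage means each base is sequenced more times, so random
# per-read errors can be averaged out, which is why coverage correlates
# with accuracy, as noted above.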

Read More

CATEGORIZED UNDER: Technology, Top Posts

I, Robopsychologist, Part 2: Where Human Brains Far Surpass Computers

By Andrea Kuszewski | February 9, 2012 10:08 am

Andrea Kuszewski is a behavior therapist and consultant, science writer, and robopsychologist at Syntience in San Francisco. She is interested in creativity, intelligence, and learning, in both humans and machines. Find her on Twitter at @AndreaKuszewski.

Before you read this post, please see “I, Robopsychologist, Part 1: Why Robots Need Psychologists.”

A current trend in AI research involves attempts to replicate a human learning system at the neuronal level—beginning with a single functioning synapse, then an entire neuron, the ultimate goal being a complete replication of the human brain. This is basically the traditional reductionist perspective: break the problem down into small pieces, analyze them, and then build a model of the whole as a combination of many small pieces. There are neuroscientists working on these AI problems—replicating and studying one neuron under one condition—and that is useful for some things. But replicating a single neuron and its function at one snapshot in time does little to help us understand or replicate human learning on a broad scale, as it plays out in the natural environment.
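For readers who haven't seen what "replicating a single neuron" amounts to in computational terms, the standard building block is little more than a weighted sum passed through a nonlinearity. A minimal, generic sketch in Python (not the model of any particular lab or project):

import math

# A minimal artificial neuron: weighted sum of inputs plus a bias, passed
# through a nonlinearity. This is the generic textbook abstraction, not a
# model from any specific neuroscience or AGI project.
def neuron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))   # sigmoid "firing rate"

# Three synaptic inputs with fixed weights: a single snapshot in time,
# which is exactly the limitation the paragraph above points out.
print(neuron([0.5, 0.1, 0.9], weights=[0.8, -0.4, 0.3], bias=-0.2))

Everything the rest of this post argues matters (context, development, error, serendipity) lives outside this single-snapshot abstraction.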

We are quite some ways off from reaching the goal of building something structurally similar to the human brain, and even further from having one that actually thinks like a brain. Which leads me to the obvious question: What’s the purpose of pouring all that effort into replicating a human-like brain in a machine, if it doesn’t ultimately function like a real brain?

If we’re trying to create AI that mimics humans, both in behavior and learning, then we need to consider how humans actually learn—and specifically, how they learn best—when we teach these systems. Therefore, it would make sense that you’d want people on your team who are experts in human behavior and learning. So in this way, the field of psychology is pretty important to the successful development of strong AI, or AGI (artificial general intelligence): intelligence systems that think and act the way humans do. (I will be using the term AI, but I am generally referring to strong AI.)

Basing an AI system on the function of a single neuron is like designing an entire highway system based on the function of a car engine, rather than the behavior of a population of cars and their drivers in the context of a city. Psychologists are experts at the context. They study how the brain works in practice—in multiple environments, over variable conditions, and how it develops and changes over a lifespan.

The brain is actually not like a computer; it doesn’t always follow the rules. Sometimes not following the rules is the best course of action, given a specific context. The brain can act in unpredictable, yet ultimately serendipitous, ways. Sometimes the brain develops “mental shortcuts,” or automated patterns of behavior, or makes intuitive leaps of reason. Human brain processes often involve error, which also happens to be a very necessary element of creativity, innovation, and human learning in general. Take away the errors, remove serendipitous learning, discount intuition, and you remove any chance of true creative cognition. In essence, when it gets too rule-driven and perfect, it ceases to function like a real human brain.

To get a computer that thinks like a person, we have to consider some of the key strengths of human thinking and use psychology to figure out how to foster similar thinking in computers.

Read More

CATEGORIZED UNDER: Mind & Brain, Technology, Top Posts