Category: Technology

Lawyers in Space! The New Era of Spaceflight Needs Some New Rules

By Veronique Greenwood | June 7, 2012 12:13 pm

Asteroid mining brings up some tricky legal questions.

By Frans von der Dunk, as told to Veronique Greenwood.

Frans von der Dunk is the Harvey and Susan Perlman Alumni and Othmer Professor of Space Law at the University of Nebraska College of Law. In addition, he is the director of a space law and policy consultancy, Black Holes, based in the Netherlands.

Within weeks of the launch of Sputnik I in 1957, after the U.S. made no protest against the satellite flying over its territory, space effectively became recognized as a global commons, free for all. The UN Committee on the Peaceful Uses of Outer Space, charged with codifying existing law and developing it further to apply to space, was brought into being, with all the major nations involved. The fundamental rule of space law they adopted is that no single nation can exercise territorial sovereignty over any part of outer space. American astronauts planting the flag on the moon did not, and never could, thereby turn the moon into U.S. territory.

Now that private companies are making forays into space, though—with SpaceX’s Dragon capsule mission last week only the first of many, and plans to mine asteroids for private profit seeming more and more plausible—we’re facing a sudden need to update the applicable laws. How will we deal with property ownership in space? Who is responsible for safety when private companies begin to ferry public employees, like NASA astronauts, to the International Space Station?

Read More

CATEGORIZED UNDER: Technology, Top Posts

Is Environmentalism Anti-Science?

By Keith Kloor | May 24, 2012 12:10 pm

By Keith Kloor, a freelance journalist whose stories have appeared in a range of publications, from Science to Smithsonian. Since 2004, he’s been an adjunct professor of journalism at New York University. You can find him on Twitter here.

 

Greens are often mocked as self-righteous, hybrid-driving, politically correct foodies these days (see this episode of South Park and this scene from Portlandia). But it wasn’t that long ago—when Earth First! and the Earth Liberation Front were in the headlines—that greens were perceived as militant activists. They camped out in trees to stop clear-cutting and intercepted whaling ships and oil and gas rigs on the high seas.


In recent years, a new, forceful brand of green activism has come back into vogue. One action (carried out with monkey-wrenching flair) became a touchstone for the nascent climate movement. In 2011, climate activists engaged in a multi-day civil disobedience event that has since turned a proposed oil pipeline into a rallying cause for American environmental groups.

This, combined with grassroots opposition to gas fracking, has energized the sagging global green movement. But though activist greens have frequently claimed to stand behind science, their recent actions, especially in regard to genetically modified organisms, or GMOs, say otherwise.

For instance, whether all the claims about environmental contamination from fracking hold up remains to be seen. (There are legitimate ecological and health issues—but also overstated ones. See this excellent Popular Mechanics deconstruction of the “bold claims made about hydraulic fracturing.”) Meanwhile, an ancillary debate over natural gas and climate change has broken out, further inflaming an already combustible issue. Whatever the outcome, it’s likely that science will matter less than the politics, as is often the case in such debates.

That’s certainly the case when it comes to GMOs, which have been increasingly targeted by green-minded activists in Europe. The big story on this front of late has been the planned act of vandalism at the government-funded Rothamsted research station in the UK. Scientists there are testing an insect-resistant strain of genetically modified wheat to which an anti-GMO group called Take the Flour Back objects. The attack on the experimental wheat plot is slated for May 27. The group explains that it intends to destroy the plot because “this open air trial poses a real, serious and imminent contamination threat to the local environment and the UK wheat industry.”

http://youtu.be/JYEN_tvqQaw

Read More

CATEGORIZED UNDER: Environment, Technology, Top Posts

The Limits to Environmentalism

By Keith Kloor | April 27, 2012 11:58 am

By Keith Kloor, a freelance journalist whose stories have appeared in a range of publications, from Science to Smithsonian. Since 2004, he’s been an adjunct professor of journalism at New York University. This piece is a follow-up from a post on his blog, Collide-a-Scape.

 

In Sleeper, Woody Allen finds that socializing is different after the ’70s.
Environmentalism? Not so much.

If you were cryogenically frozen in the early 1970s, like Woody Allen was in Sleeper, and brought back to life today, you would obviously find much changed about the world.

Except environmentalism and its underlying precepts. That would be a familiar and quaint relic. You would wake up from your Rip Van Winkle period and everything around you would be different, except the green movement. It’s still anti-nuclear, anti-technology, anti-industrial civilization. It still talks in mushy metaphors from the Aquarius age, cooing over Mother Earth and the Balance of Nature. And most of all, environmentalists are still acting like Old Testament prophets, warning of a plague of environmental ills about to rain down on humanity.

For example, you may have heard that a bunch of scientists produced a landmark report that concludes the earth is destined for ecological collapse, unless global population and consumption rates are restrained. No, I’m not talking about the UK’s just-published Royal Society report, which, among other things, recommends that developed countries put a brake on economic growth. I’m talking about that other landmark report from 1972, the one that became a totem of the environmental movement.

I mention the 40-year-old Limits to Growth book in connection with the new Royal Society report not just to point up their Malthusian similarities (which Mark Lynas flags here), but also to demonstrate what a time warp the collective environmental mindset is stuck in. Even some British greens have recoiled in disgust at the outdated assumptions underlying the Royal Society’s report. Chris Goodall, author of Ten Technologies to Save the Planet, told the Guardian: “What an astonishingly weak, cliché ridden report this is…’Consumption’ to blame for all our problems? Growth is evil?  A rich economy with technological advances is needed for radical decarbonisation. I do wish scientists would stop using their hatred of capitalism as an argument for cutting consumption.”

Goodall, it turns out, is exactly the kind of greenie (along with Lynas) I had in mind when I argued last week that only forward-thinking modernists could save environmentalism from being consigned to junkshop irrelevance. I juxtaposed today’s green modernist with the backward-thinking “green traditionalist,” who I said remained wedded to environmentalism’s doom-and-gloom narrative and resistant to the notion that economic growth was good for the planet. Modernists, I wrote, offered the more viable blueprint for sustainability:

Read More

CATEGORIZED UNDER: Environment, Technology, Top Posts

The Triumph of Technodorkiness: Why We’re Gladly Turning Ourselves Into Yesterday’s Losers

By Guest Blogger | April 17, 2012 9:08 am

By David H. Freedman, a journalist who’s contributed to many magazines, including DISCOVER, where he writes the Impatient Futurist column. His latest book, Wrong: Why Experts Keep Failing Us—and How to Know When Not to Trust Them, came out in 2010. Find him on Twitter at @dhfreedman.

 

Computer glasses have arrived, or are about to. Google has released some advance information about its Project Glass, which essentially embeds smartphone-like capabilities, including a video display, into eyeglasses. A video put out by the company suggests we’ll be able to walk down the street—and, we can extrapolate, distractedly walk right into the street, or drive down the street—while watching and listening to video chats, catching up on social networks (including Google+, of course), and getting turn-by-turn directions (though you’ll be on your own in avoiding people, lampposts and buses, unless there’s a radar-equipped version in the works).

Toshiba developed a six-pound surround-sight bubble helmet. It didn’t take off.

The reviews have mostly been cautiously enthusiastic. But they seem to be glossing over what an astounding leap this is for technophiles. I don’t mean in the sense that this is an amazing new technology. I mean I’m surprised that we seem to be seriously discussing wearing computer glasses as if it weren’t the dorkiest thing in the world—a style and coolness and common-sense violation of galactic magnitude. Video glasses are the postmodern version of the propeller beanie cap. These things have been around for 30 years. You could buy them at Brookstone, or via in-flight shopping catalogs. As far as I could tell, pretty much no one was interested in plunking these things down on their nose. What happened?

More interesting, the apparent sudden willingness to consider wearing computers on our faces may be part of a larger trend. Consider computer tablets, 3D movies, and video phone calls—other consumer technologies that have been long talked about, long offered in various forms, and long soundly rejected—only to relatively recently and suddenly gain mass acceptance.

The obvious explanation for the current triumph of technologies that never seemed to catch on is that the technologies have simply improved enough, and dropped in price enough, to make them sufficiently appealing or useful to a large percentage of the population. But I don’t think that’s nearly a full enough explanation. Yes, the iPad offers a number of major improvements over Microsoft Tablet PC products circa 2000—but not so much that it could account for the complete shunning of the latter and the total adoration of the former. Likewise, the polarized-glasses-based 3D movie experience of the 1990s, as seen in IMAX and Disney park theaters at the time, really was fairly comparable to what you see in state-of-the-art theaters today.

I think three things are going on:

Read More

CATEGORIZED UNDER: Technology, Top Posts

Cheap Soul Teleportation, Coming Soon to a Theater Near You?

By Mark Changizi | April 10, 2012 12:39 pm

Mark Changizi is an evolutionary neurobiologist and director of human cognition at 2AI Labs. He is the author of The Brain from 25,000 Feet, The Vision Revolution, and his newest book, Harnessed: How Language and Music Mimicked Nature and Transformed Ape to Man.

Also check out his related commentary on a promotional video for Project Glass, Google’s augmented-reality project.

 

Experience happens here—from my point of view. It could happen over there, or from a viewpoint of an objective nowhere. But instead it happens from the confines of my own body. In fact, it happens from my eyes (or from a viewpoint right between the eyes). That’s where I am. That’s consciousness central—my “soul.” In fact, a recent study by Christina Starmans at Yale showed that children and adults presume that this “soul” lies in the eyes (even when the eyes are positioned, in cartoon characters, in unusual spots like the chest).

The question I wish to raise here is whether we can teleport our soul, and, specifically, how best we might do it. I’ll suggest that we may be able to get near-complete soul teleportation into the movie (or video game) experience, and we can do so with some fairly simple upgrades to the 3D glasses we already wear in movies.

Consider for starters a simple sort of teleportation, the “rubber arm illusion.” If you place your arm under a table, out of your view, and put a fake rubber arm on the table where your arm would usually be, an experimenter who strokes the rubber arm while simultaneously stroking your real arm in the same spot will trick your brain into believing that the rubber arm is your arm. Your arm—or your arm’s “soul”—has “teleported” out of your real body under the table and into a rubber arm sitting well outside of it.

It’s the same basic trick to get the rest of the body to transport. If you wear a virtual reality suit able to touch you in a variety of spots with actuators, then you can be presented with a virtual experience – a movie-like experience – in which you see your virtual body being touched while the bodysuit simultaneously touches your real body in those same spots. Pretty soon your entire body has teleported itself into the virtual body.
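As a rough illustration of what that synchrony requires in practice, here is a minimal sketch in Python. The HapticSuit class and its pulse method are hypothetical stand-ins for whatever actuator hardware a theater or game system might actually expose; the point is only that each touch shown on the virtual body must be matched by a physical pulse at the same spot within a few tens of milliseconds.

```python
# Illustrative sketch of visuo-tactile synchrony for the body-transfer
# illusion. The HapticSuit class is hypothetical -- a stand-in for
# whatever actuator hardware a theater or game system might expose.
import time

class HapticSuit:
    def pulse(self, location: str, intensity: float) -> None:
        print(f"actuator at {location} fires (intensity {intensity:.1f})")

def play_scene(touch_events, suit: HapticSuit):
    """touch_events: list of (timestamp_s, body_location, intensity)
    describing touches shown on the *virtual* body. The real suit must
    fire within roughly tens of milliseconds for the illusion to hold."""
    start = time.monotonic()
    for t, location, intensity in touch_events:
        delay = t - (time.monotonic() - start)
        if delay > 0:
            time.sleep(delay)
        # Fire the physical actuator as the on-screen touch appears.
        suit.pulse(location, intensity)

# Example: the on-screen avatar is tapped on the shoulder, then the forearm.
events = [(0.5, "left_shoulder", 0.8), (1.2, "right_forearm", 0.6)]
play_scene(events, HapticSuit())
```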

And… Yawn, we all know this. We saw James Cameron’s Avatar, after all, which uses this as the premise.

My question here is not whether such self-teleportation is possible, but whether it may be possible to actually do this in theaters and video games. Soon.

Read More

CATEGORIZED UNDER: Mind & Brain, Technology, Top Posts

Eyes in the Sky Look Back in Time

By Charles Choi | March 22, 2012 11:31 am

Charles Q. Choi is a science journalist who has also written for Scientific American, The New York Times, Wired, Science, and Nature. In his spare time, he has ventured to all seven continents.

The Fertile Crescent in the Near East was long known as “the cradle of civilization,” and at its heart lies Mesopotamia, home to the earliest known cities, such as Ur. Now satellite images are helping uncover the history of human settlements in this storied area between the Tigris and Euphrates rivers, the latest example of how two very modern technologies—sophisticated computing and images of Earth taken from space—are helping shed light on long-extinct species and the earliest complex human societies.

In a study published this week in PNAS, the fortuitously named Harvard archaeologist Jason Ur worked with Bjoern Menze at MIT to develop a computer algorithm that could detect types of soil known as anthrosols from satellite images. Anthrosols are created by long-term human activity, and are finer, lighter-colored and richer in organic material than surrounding soil. The algorithm was trained on the patterns of light that anthrosols at known sites reflect, giving the software the chance to spot anthrosols at as-yet-unknown sites.
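To make the approach concrete, here is a minimal sketch, with invented data, of how such a classifier might work; the spectral bands, the random-forest model, and all the numbers are illustrative assumptions, not the method Ur and Menze actually published.

```python
# Hypothetical sketch: train a classifier on multispectral reflectance values
# sampled at known anthrosol sites, then score pixels from an unsurveyed scene.
# Feature bands, site data, and the random-forest choice are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Fake training data: rows are pixels, columns are spectral bands.
known_anthrosol = rng.normal(loc=0.6, scale=0.1, size=(500, 5))   # label 1
ordinary_soil   = rng.normal(loc=0.4, scale=0.1, size=(500, 5))   # label 0
X = np.vstack([known_anthrosol, ordinary_soil])
y = np.concatenate([np.ones(500), np.zeros(500)])

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X, y)

# Score a new satellite scene: each pixel gets an anthrosol probability,
# which can then be mapped, as in the figure below.
new_scene = rng.normal(loc=0.5, scale=0.15, size=(10_000, 5))
anthrosol_probability = model.predict_proba(new_scene)[:, 1]
print(anthrosol_probability[:10])
```

Pixels scoring above some chosen threshold would then be flagged as candidate settlement sites for verification on the ground.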

This map shows Ur and Menze’s analysis of anthrosol probability for part of Mesopotamia.

Armed with this method to detect ancient human habitation from space, researchers analyzed a 23,000-square-kilometer area of northeastern Syria and mapped more than 14,000 sites spanning 8,000 years. To find out more about how the sites were used, Ur and Menze compared the satellite images with data on the elevation and volume of these sites previously gathered by the Space Shuttle. The ancient settlements the scientists analyzed were built atop the remains of their mostly mud-brick predecessors, so measuring the height and volume of sites could give an idea of the long-term attractiveness of each locale. Ur and Menze identified more than 9,500 elevated sites that cover 157 square kilometers and contain 700 million cubic meters of collapsed architecture and other settlement debris, more than 250 times the volume of concrete making up Hoover Dam.

“I could do this on the ground, but it would probably take me the rest of my life to survey an area this size,” Ur said. Indeed, field scientists who normally prospect for sites in an educated-guess, trial-and-error manner are increasingly leveraging satellite imagery to their advantage.

Read More

Bio-Info-Tech: The Cyborg Baby of Cheap Genomes and Cloud Data

By Razib Khan | March 8, 2012 9:00 am

By now you may have heard about Oxford Nanopore’s new whole-genome sequencing technology, which has the promise of taking the enterprise of sequencing an individual’s genome out of the basic science laboratory, and out to the consumer mass market. From what I gather the hype is not just vaporware; it’s a foretaste of what’s to come. But at the end of the day, this particular device is not the important point in any case. Do you know which firm popularized television? Probably not. When technology goes mainstream, it ceases to be buzzworthy. Rather, it becomes seamlessly integrated into our lives and disappears into the fabric of our daily background humdrum. The banality of what was innovation is a testament to its success. We’re on the cusp of the age when genomics becomes banal, and cutting-edge science becomes everyday utility.

Granted, the short-term impact of mass personal genomics is still going to be exceedingly technical. Scientific genealogy nuts will purchase the latest software, and argue over the esoteric aspects of “coverage” (the redundancy of the sequence data, which correlates with accuracy) and the necessity of supplementing the genome with the epigenome. Physicians and other health professionals will add genomic information to the arsenal of their diagnostic toolkit, and an alphabet soup of new genome-related terms will wash over you as you visit a doctor’s office. Your genome is not you, but it certainly informs who you are. Your individual genome will become ever more important to your health care.
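For readers new to the jargon, “coverage” boils down to a simple ratio: the average number of times each base in the genome gets sequenced. A toy calculation (all numbers invented for illustration):

```python
# Toy illustration of sequencing "coverage": the average number of reads
# overlapping each base of the genome. All numbers are invented.
genome_length = 3_200_000_000   # roughly the human genome, in bases
read_length   = 10_000          # long nanopore-style reads (illustrative)
num_reads     = 9_600_000

coverage = num_reads * read_length / genome_length
print(f"Mean coverage: {coverage:.0f}x")   # -> 30x

# Each base is sequenced ~30 times on average, so random per-read errors
# can be averaged out, which is why coverage correlates with accuracy.
```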

Read More

CATEGORIZED UNDER: Technology, Top Posts

I, Robopsychologist, Part 2: Where Human Brains Far Surpass Computers

By Andrea Kuszewski | February 9, 2012 10:08 am

Andrea Kuszewski is a behavior therapist and consultant, science writer, and robopsychologist at Syntience in San Francisco. She is interested in creativity, intelligence, and learning, in both humans and machines. Find her on Twitter at @AndreaKuszewski.

Before you read this post, please see “I, Robopsychologist, Part 1: Why Robots Need Psychologists.”

A current trend in AI research involves attempts to replicate a human learning system at the neuronal level—beginning with a single functioning synapse, then an entire neuron, the ultimate goal being a complete replication of the human brain. This is basically the traditional reductionist perspective: break the problem down into small pieces and analyze them, and then build a model of the whole as a combination of many small pieces. There are neuroscientists working on these AI problems—replicating and studying one neuron under one condition—and that is useful for some things. But to replicate a single neuron and its function at one snapshot in time is not helping us understand or replicate human learning on a broad scale for use in the natural environment.

We are quite some ways off from reaching the goal of building something structurally similar to the human brain, and even further from having one that actually thinks like one. Which leads me to the obvious question: What’s the purpose of pouring all that effort into replicating a human-like brain in a machine, if it doesn’t ultimately function like a real brain?

If we’re trying to create AI that mimics humans, both in behavior and learning, then we need to consider how humans actually learn—specifically, how they learn best—when teaching them. Therefore, it would make sense that you’d want people on your team who are experts in human behavior and learning. So in this way, the field of psychology is pretty important to the successful development of strong AI, or AGI (artificial general intelligence): intelligence systems that think and act the way humans do. (I will be using the term AI, but I am generally referring to strong AI.)

Basing an AI system on the function of a single neuron is like designing an entire highway system based on the function of a car engine, rather than the behavior of a population of cars and their drivers in the context of a city. Psychologists are experts at the context. They study how the brain works in practice—in multiple environments, over variable conditions, and how it develops and changes over a lifespan.

The brain is actually not like a computer; it doesn’t always follow the rules. Sometimes not following the rules is the best course of action, given a specific context. The brain can act in unpredictable, yet ultimately serendipitous ways. Sometimes the brain develops “mental shortcuts,” or automated patterns of behavior, or makes intuitive leaps of reason. Human brain processes often involve error, which also happens to be a very necessary element of creativity, innovation, and human learning in general. Take away the errors, remove serendipitous learning, discount intuition, and you remove any chance of true creative cognition. In essence, when it gets too rule-driven and perfect, it ceases to function like a real human brain.

To get a computer that thinks like a person, we have to consider some of the key strengths of human thinking and use psychology to figure out how to foster similar thinking in computers.

Read More

CATEGORIZED UNDER: Mind & Brain, Technology, Top Posts

I, Robopsychologist, Part 1: Why Robots Need Psychologists

By Andrea Kuszewski | February 7, 2012 1:38 pm

Andrea Kuszewski is a behavior therapist and consultant, science writer, and robopsychologist at Syntience in San Francisco. She is interested in creativity, intelligence, and learning, in both humans and machines. Find her on Twitter at @AndreaKuszewski.

“My brain is not like a computer.”

The day those words were spoken to me marked a significant milestone for both me and the 6-year-old who uttered them. The words themselves may not seem that profound (and some may actually disagree), but that simple sentence represented months of therapy, hours upon hours of teaching, all for the hope that someday, a phrase like that would be spoken at precisely the right time. When he said that to me, he was showing me that the light had been turned on, the fire ignited. And he was letting me know that he realized this fact himself. Why was this a big deal?

I began my career as a behavior therapist, treating children on the autism spectrum. My specialty was Asperger syndrome, or high-functioning autism. This 6-year-old boy, whom I’ll call David, was a client of mine whom I’d been treating for about a year at that time. His mom had read a book that had recently come out, The Curious Incident of the Dog in the Night-Time, and told me how much David resembled the main character in the book (who had autism) in his thinking and processing style. The main character said, “My brain is like a computer.”

David heard his mom telling me this, and that quickly became one of his favorite memes. He would say things like “I need input” or “Answer not in the database” or simply “You have reached an error,” when he didn’t know the answer to a question. He truly did think like a computer at that point in time—he memorized questions, formulas, and the subsequent list of acceptable responses. He had developed some extensive social algorithms for human interactions, and when they failed, he went into a complete emotional meltdown.

My job was to change this. To make him less like a computer, to break him out of that rigid mindset. He operated purely on an input-output framework, and if a situation presented itself that wasn’t in the database of his brain, it was rejected, returning a 404 error.

Read More

CATEGORIZED UNDER: Mind & Brain, Technology, Top Posts

Ebooks: More Boon to Literacy Than Threat to Democracy

By Carl Zimmer | January 31, 2012 11:28 am

Carl Zimmer writes about science regularly for The New York Times and magazines such as DISCOVER, which also hosts his blog, The Loom. He is the author of 12 books, the most recent of which is Science Ink: Tattoos of the Science Obsessed.

It’s been nearly 87 years since F. Scott Fitzgerald published his brief masterpiece, The Great Gatsby. Charles Scribner’s Sons issued the first hardback edition in April 1925, adorning its cover with a painting of a pair of eyes and lips floating on a blue field above a cityscape. Ten days after the book came out, Fitzgerald’s editor, Maxwell Perkins, sent him one of those heartbreaking notes a writer never wants to get: “SALES SITUATION DOUBTFUL EXCELLENT REVIEWS.”

The first printing of 20,870 copies sold sluggishly through the spring. Four months later, Scribner’s printed another 3,000 copies and then left it at that. After his earlier commercial successes, Fitzgerald was bitterly disappointed by The Great Gatsby. To Perkins and others, he offered various theories for the bad sales. He didn’t like how he had left the description of the relationship between Gatsby and Daisy. The title, he wrote to Perkins, was “only fair.”

Today I decided to go shopping for that 1925 edition on the antiquarian site Abebooks. If you want a copy of it, be ready to pay. Or perhaps get a mortgage. A shop in Woodstock, New York, called By Books Alone, has one copy for sale. The years have not been kind to it. The spine is faded, the front inner hinge is cracked, the iconic dust jacket is gone. And for this mediocre copy, you’ll pay a thousand dollars.

The price goes up from there. For a copy with a torn dust jacket, you’ll pay $17,150. Between the Covers in Gloucester, New Jersey, has the least expensive copy that’s in really good shape. And it’s yours for just $200,000.

By the time Fitzgerald died in 1940, his reputation—and that of The Great Gatsby—had petered out. “The promise of his brilliant career was never fulfilled,” The New York Times declared in its obituary. Only after his death did the novel begin to rise to the highest ranks of American literature. And its ascent was driven in large part by a new form of media: paperback books.

Read More

CATEGORIZED UNDER: Technology, Top Posts