Category: Technology

Cheap Soul Teleportation, Coming Soon to a Theater Near You?

By Mark Changizi | April 10, 2012 12:39 pm

Mark Changizi is an evolutionary neurobiologist and director of human cognition at 2AI Labs. He is the author of The Brain from 25,000 Feet, The Vision Revolution, and his newest book, Harnessed: How Language and Music Mimicked Nature and Transformed Ape to Man.

Also check out his related commentary on a promotional video for Project Glass, Google’s augmented-reality project.

 

Experience happens here—from my point of view. It could happen over there, or from a viewpoint of an objective nowhere. But instead it happens from the confines of my own body. In fact, it happens from my eyes (or from a viewpoint right between the eyes). That’s where I am. That’s consciousness central—my “soul.” In fact, a recent study by Christina Starmans at Yale showed that children and adults presume that this “soul” lies in the eyes (even when the eyes are positioned, in cartoon characters, in unusual spots like the chest).

The question I wish to raise here is whether we can teleport our soul, and, specifically, how best we might do it. I’ll suggest that we may be able to get near-complete soul teleportation into the movie (or video game) experience, and we can do so with some fairly simple upgrades to the 3D glasses we already wear in movies.

Consider for starters a simple sort of teleportation, the “rubber arm illusion.” If you place your arm under a table, out of view, and a fake rubber arm rests on the table where your arm would usually be, an experimenter who strokes the rubber arm while simultaneously stroking your real arm in the same spot will trick your brain into believing that the rubber arm is your arm. Your arm—or your arm’s “soul”—has “teleported” from under the table, inside your real body, into a rubber arm sitting well outside of your body.

It’s the same basic trick to get the rest of the body to transport. If you were to wear a virtual-reality suit able to touch you in a variety of spots with actuators, you could be presented with a virtual experience—a movie-like experience—in which you see your virtual body being touched while the bodysuit you’re wearing simultaneously touches your real body in those same spots. Pretty soon your entire body has teleported itself into the virtual body.

And… Yawn, we all know this. We saw James Cameron’s Avatar, after all, which uses this as the premise.

My question here is not whether such self-teleportation is possible, but whether it may be possible to actually do this in theaters and video games. Soon.

Read More

CATEGORIZED UNDER: Mind & Brain, Technology, Top Posts

Eyes in the Sky Look Back in Time

By Charles Choi | March 22, 2012 11:31 am

Charles Q. Choi is a science journalist who has also written for Scientific American, The New York Times, Wired, Science, and Nature. In his spare time, he has ventured to all seven continents.

The Fertile Crescent in the Near East was long known as “the cradle of civilization,” and at its heart lies Mesopotamia, home to the earliest known cities, such as Ur. Now satellite images are helping uncover the history of human settlements in this storied area between the Tigris and Euphrates rivers, the latest example of how two very modern technologies—sophisticated computing and images of Earth taken from space—are helping shed light on long-extinct species and the earliest complex human societies.

In a study published this week in PNAS, the fortuitously named Harvard archaeologist Jason Ur worked with Bjoern Menze at MIT to develop a computer algorithm that could detect types of soil known as anthrosols from satellite images. Anthrosols are created by long-term human activity, and are finer, lighter-colored, and richer in organic material than the surrounding soil. The algorithm was trained on the patterns of light that anthrosols at known sites reflect, giving the software the ability to spot anthrosols at as-yet-unknown sites.
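The study's own pipeline isn't reproduced here, but the basic workflow it describes, training a classifier on the reflectance of soil at known anthrosol sites and then scoring every other pixel, can be sketched in a few lines. Everything below (the band count, the random-forest model, the synthetic training data) is an illustrative assumption, not Ur and Menze's actual code.

```python
# A minimal sketch of anthrosol detection as supervised pixel classification.
# NOTE: the band count, the random-forest model, and the synthetic training
# data are illustrative assumptions, not the method Ur and Menze used.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Pretend multispectral imagery: each pixel is a vector of reflectance values,
# one per spectral band (six bands assumed here).
n_bands = 6

# Training data: spectra sampled at surveyed locations, labeled 1 where
# archaeologists confirmed anthrosols, 0 elsewhere.
known_anthrosol = rng.normal(loc=0.35, scale=0.05, size=(200, n_bands))  # lighter soils
ordinary_soil = rng.normal(loc=0.25, scale=0.05, size=(800, n_bands))
X_train = np.vstack([known_anthrosol, ordinary_soil])
y_train = np.concatenate([np.ones(200), np.zeros(800)])

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# "New" imagery: a small unlabeled scene (rows x cols x bands).
scene = rng.normal(loc=0.28, scale=0.07, size=(50, 50, n_bands))
pixels = scene.reshape(-1, n_bands)

# A probability map like the one in the figure that follows: higher values
# mean more anthrosol-like pixels.
prob_map = model.predict_proba(pixels)[:, 1].reshape(50, 50)
print("most anthrosol-like pixel:", round(prob_map.max(), 2))
```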

This map shows Ur and Menze’s analysis of anthrosol probability for part of Mesopotamia.

Armed with this method to detect ancient human habitation from space, researchers analyzed a 23,000-square-kilometer area of northeastern Syria and mapped more than 14,000 sites spanning 8,000 years. To find out more about how the sites were used, Ur and Menze compared the satellite images with data on the elevation and volume of these sites previously gathered by the Space Shuttle. The ancient settlements the scientists analyzed were built atop the remains of their mostly mud-brick predecessors, so measuring the height and volume of sites could give an idea of the long-term attractiveness of each locale. Ur and Menze identified more than 9,500 elevated sites that cover 157 square kilometers and contain 700 million cubic meters of collapsed architecture and other settlement debris, more than 250 times the volume of concrete making up Hoover Dam.
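That last comparison is easy to sanity-check, taking the commonly cited figure of roughly 2.5 million cubic meters of concrete in Hoover Dam (an approximation assumed here, not a number from the study):

```python
# Back-of-the-envelope check on the Hoover Dam comparison.
# Hoover Dam holds roughly 2.5 million cubic meters of concrete (approximate).
settlement_debris_m3 = 700e6
hoover_dam_concrete_m3 = 2.5e6
print(settlement_debris_m3 / hoover_dam_concrete_m3)  # ~280, i.e. "more than 250 times"
```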

“I could do this on the ground, but it would probably take me the rest of my life to survey an area this size,” Ur said. Indeed, field scientists who normally prospect for sites by educated guesswork and trial and error are increasingly leveraging satellite imagery to their advantage.

Read More

Bio-Info-Tech: The Cyborg Baby of Cheap Genomes and Cloud Data

By Razib Khan | March 8, 2012 9:00 am

By now you may have heard about Oxford Nanopore’s new whole-genome sequencing technology, which promises to take the enterprise of sequencing an individual’s genome out of the basic science laboratory and into the consumer mass market. From what I gather, this is more than hype or vaporware; it’s a foretaste of what’s to come. But at the end of the day, this particular device is not the important point. Do you know which firm popularized television? Probably not. When technology goes mainstream, it ceases to be buzzworthy. Rather, it becomes seamlessly integrated into our lives and disappears into the fabric of our daily background humdrum. The banality of what was innovation is a testament to its success. We’re on the cusp of the age when genomics becomes banal, and cutting-edge science becomes everyday utility.

Granted, the short-term impact of mass personal genomics is still going to be exceedingly technical. Scientific genealogy nuts will purchase the latest software and argue over the esoteric aspects of “coverage” (the redundancy of the sequence data, which correlates with accuracy) and the necessity of supplementing the genome with the epigenome. Physicians and other health professionals will add genomic information to their diagnostic toolkit, and an alphabet soup of new genome-related terms will wash over you as you visit a doctor’s office. Your genome is not you, but it certainly informs who you are. Your individual genome will become ever more important to your health care.
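For readers new to the jargon, “coverage” is just the expected number of reads overlapping any given base: total sequenced bases divided by genome length. A toy calculation, with made-up but plausible numbers, looks like this:

```python
# Toy illustration of sequencing "coverage": how many times, on average,
# each base of the genome gets read. The numbers are made up for illustration.
genome_length = 3.1e9   # approximate size of a human genome, in bases
n_reads = 400e6         # number of sequencing reads
read_length = 250       # bases per read

coverage = n_reads * read_length / genome_length
print(f"average coverage: {coverage:.1f}x")  # ~32x, a common whole-genome depth
```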

Read More

CATEGORIZED UNDER: Technology, Top Posts

I, Robopsychologist, Part 2: Where Human Brains Far Surpass Computers

By Andrea Kuszewski | February 9, 2012 10:08 am

Andrea Kuszewski is a behavior therapist and consultant, science writer, and robopsychologist at Syntience in San Francisco. She is interested in creativity, intelligence, and learning, in both humans and machines. Find her on Twitter at @AndreaKuszewski.

Before you read this post, please see “I, Robopsychologist, Part 1: Why Robots Need Psychologists.”

A current trend in AI research involves attempts to replicate a human learning system at the neuronal level—beginning with a single functioning synapse, then an entire neuron, the ultimate goal being a complete replication of the human brain. This is basically the traditional reductionist perspective: break the problem down into small pieces and analyze them, and then build a model of the whole as a combination of many small pieces. There are neuroscientists working on these AI problems—replicating and studying one neuron under one condition—and that is useful for some things. But to replicate a single neuron and its function at one snapshot in time is not helping us understand or replicate human learning on a broad scale for use in the natural environment.
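To make that “single functioning neuron” starting point concrete, here is roughly what such a building block looks like in code: a leaky integrate-and-fire unit, one of the standard simplified neuron models. This is an illustrative sketch with textbook-style parameters, not the model any of these projects actually uses.

```python
# A leaky integrate-and-fire neuron: the kind of single-unit building block
# that bottom-up brain-replication efforts start from. Parameters are
# illustrative textbook-style values, not those of Blue Brain or SyNAPSE.
dt = 0.1          # time step (ms)
tau = 10.0        # membrane time constant (ms)
v_rest = -65.0    # resting potential (mV)
v_thresh = -50.0  # spike threshold (mV)
v_reset = -70.0   # reset potential after a spike (mV)
r_m = 10.0        # membrane resistance (megaohms)
current = 2.0     # constant input current (nA)

v = v_rest
spike_times = []
for step in range(int(200 / dt)):            # simulate 200 ms
    dv = (-(v - v_rest) + r_m * current) / tau
    v += dv * dt
    if v >= v_thresh:                         # threshold crossing = one spike
        spike_times.append(step * dt)
        v = v_reset

print(f"{len(spike_times)} spikes in 200 ms")
```

The point of the surrounding argument is that even a perfect stack of units like this, simulated at one snapshot in time, says little about how a whole person learns in a messy environment.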

We are quite some ways off from reaching the goal of building something structurally similar to the human brain, and even further from having one that actually thinks like one. Which leads me to the obvious question: What’s the purpose of pouring all that effort into replicating a human-like brain in a machine, if it doesn’t ultimately function like a real brain?

If we’re trying to create AI that mimics humans, both in behavior and learning, then we need to consider how humans actually learn—specifically, how they learn best—when teaching them. Therefore, it would make sense that you’d want people on your team who are experts in human behavior and learning. So in this way, the field of psychology is pretty important to the successful development of strong AI, or AGI (artificial general intelligence): intelligence systems that think and act the way humans do. (I will be using the term AI, but I am generally referring to strong AI.)

Basing an AI system on the function of a single neuron is like designing an entire highway system based on the function of a car engine, rather than the behavior of a population of cars and their drivers in the context of a city. Psychologists are experts at the context. They study how the brain works in practice—in multiple environments, over variable conditions, and how it develops and changes over a lifespan.

The brain is actually not like a computer; it doesn’t always follow the rules. Sometimes not following the rules is the best course of action, given a specific context. The brain can act in unpredictable, yet ultimately serendipitous ways. Sometimes the brain develops “mental shortcuts,” or automated patterns of behavior, or makes intuitive leaps of reason. Human brain processes often involve error, which also happens to be a very necessary element of creativity, innovation, and human learning in general. Take away the errors, remove serendipitous learning, discount intuition, and you remove any chance of any true creative cognition. In essence, when it gets too rule-driven and perfect, it ceases to function like a real human brain.

To get a computer that thinks like a person, we have to consider some of the key strengths of human thinking and use psychology to figure out how to foster similar thinking in computers.

Read More

CATEGORIZED UNDER: Mind & Brain, Technology, Top Posts

I, Robopsychologist, Part 1: Why Robots Need Psychologists

By Andrea Kuszewski | February 7, 2012 1:38 pm

Andrea Kuszewski is a behavior therapist and consultant, science writer, and robopsychologist at Syntience in San Francisco. She is interested in creativity, intelligence, and learning, in both humans and machines. Find her on Twitter at @AndreaKuszewski.

“My brain is not like a computer.”

The day those words were spoken to me marked a significant milestone for both me and the 6-year-old who uttered them. The words themselves may not seem that profound (and some may actually disagree), but that simple sentence represented months of therapy, hours upon hours of teaching, all for the hope that someday, a phrase like that would be spoken at precisely the right time. When he said that to me, he was showing me that the light had been turned on, the fire ignited. And he was letting me know that he realized this fact himself. Why was this a big deal?

I began my career as a behavior therapist, treating children on the autism spectrum. My specialty was Asperger syndrome, or high-functioning autism. This 6-year-old boy, whom I’ll call David, was a client I’d been treating for about a year at that time. His mom had read a recently published book, The Curious Incident of the Dog in the Night-Time, and told me how much David resembled its main character (who had autism) in his thinking and processing style. The main character said, “My brain is like a computer.”

David heard his mom telling me this, and that quickly became one of his favorite memes. He would say things like “I need input” or “Answer not in the database” or simply “You have reached an error,” when he didn’t know the answer to a question. He truly did think like a computer at that point in time—he memorized questions, formulas, and the subsequent list of acceptable responses. He had developed some extensive social algorithms for human interactions, and when they failed, he went into a complete emotional meltdown.

My job was to change this. To make him less like a computer, to break him out of that rigid mindset. He operated purely on an input-output framework, and if a situation presented itself that wasn’t in the database of his brain, it was rejected, returning a 404 error.

Read More

CATEGORIZED UNDER: Mind & Brain, Technology, Top Posts

Ebooks: More Boon to Literacy Than Threat to Democracy

By Carl Zimmer | January 31, 2012 11:28 am

Carl Zimmer writes about science regularly for The New York Times and magazines such as DISCOVER, which also hosts his blog, The Loom. He is the author of 12 books, the most recent of which is Science Ink: Tattoos of the Science Obsessed.

It’s been nearly 87 years since F. Scott Fitzgerald published his brief masterpiece, The Great Gatsby. Charles Scribner’s Sons issued the first hardback edition in April 1925, adorning its cover with a painting of a pair of eyes and lips floating on a blue field above a cityscape. Ten days after the book came out, Fitzgerald’s editor, Maxwell Perkins, sent him one of those heartbreaking notes a writer never wants to get: “SALES SITUATION DOUBTFUL EXCELLENT REVIEWS.”

The first printing of 20,870 copies sold sluggishly through the spring. Four months later, Scribner’s printed another 3,000 copies and then left it at that. After his earlier commercial successes, Fitzgerald was bitterly disappointed by The Great Gatsby. To Perkins and others, he offered various theories for the bad sales. He didn’t like how he had left the description of the relationship between Gatsby and Daisy. The title, he wrote to Perkins, was “only fair.”

Today I decided to go shopping for that 1925 edition on the antiquarian site Abebooks. If you want a copy of it, be ready to pay. Or perhaps get a mortgage. A shop in Woodstock, New York, called By Books Alone, has one copy for sale. The years have not been kind to it. The spine is faded, the front inner hinge is cracked, the iconic dust jacket is gone. And for this mediocre copy, you’ll pay a thousand dollars.

The price goes up from there. For a copy with a torn dust jacket, you’ll pay $17,150. Between the Covers in Gloucester, New Jersey, has the least expensive copy that’s in really good shape. And it’s yours for just $200,000.

By the time Fitzgerald died in 1940, his reputation—and that of The Great Gatsby—had petered out. “The promise of his brilliant career was never fulfilled,” The New York Times declared in its obituary. Only after his death did the novel begin to rise to the highest ranks of American literature. And its ascent was driven in large part by a new form of media: paperback books.

Read More

CATEGORIZED UNDER: Technology, Top Posts

The Future: Where Sexual Orientations Get Kind of Confusing

By Kyle Munkittrick | December 19, 2011 9:21 am

Sex, a biological function of reproduction, should be simple. We need to perpetuate the species, we have sex, babies are born, we raise them, they have sex, repeat. Simple, however, is one thing sex most certainly is not. And it’s only getting more complex by the day.

For those who are fans of human exceptionalism, it might be worth considering that the trait which differentiates us from all other animals is that we over-complicate everything. Sex, and its various accoutrements of sexual orientation, gender identity, gender expression, libido, and even how many partners one may have, contains a multitude.

Recently some psychologists have said that pedophilia is a sexual orientation, the erotic predilection that drives people like former Penn State football coach Jerry Sandusky to do what he allegedly did. The idea reached Twitter and incited a minor firestorm over whether “sexual orientation” should really be applied to pedophilia. Nature editor Noah Gray used the term in a neutral sense, as in “an attraction to a specific category of individuals”; io9’s Charlie Jane Anders and Boing Boing blogger Xeni Jardin pointed out the queer community’s long campaign to define sexual orientation only as an ethically acceptable preference for any category of consenting adults. Given that willful troglodytes like Rick Santorum regularly conflate homosexuality with pedophilia and zoophilia, you can see where the frustration around loose use of the term might arise.

Santorum aside, how should we classify pedophilia if not as a “sexual orientation”? Why should that term include one unchosen, inborn form of sexual attraction, but exclude another unchosen, inborn form of sexual attraction?

While we may have ready answers for these questions now, technological and social changes on the horizon will once again challenge our definitions and beliefs about sex. We can imagine a time when we have artificial intelligence (to at least some degree), or super-intelligent animals, or maybe we’ll even become a spacefaring species and encounter other alien intelligences. Without a doubt, people will start discovering that they are primarily attracted to something that isn’t the good ol’ Homo sapiens. Sex and sexuality will increase in complexity by powers of ten. If some person is attracted to a sexy cyborg, or a genetically enhanced dolphin, how will we know if it is ethical to act upon those desires?

Read More

Think It To Do It: The Problem With Touchscreens—and Hands

By Kyle Munkittrick | November 29, 2011 5:17 pm

Tablets and touchscreen smartphones make it feel like we’re living in the future. But they’re the technology of the present. So what should we be anticipating for the future of interfaces?

Bret Victor has a solid grip on interface design. And he has a beef with touchscreens as the archetype of the Interface of the Future. He argues that poking at and sliding around pictures under glass is not really the greatest way to do things. Why? Because that just uses a finger! Victor is a fan of hands. They can grab, twist, flick, feel, manipulate, and hold things. Hands get two thumbs up from Victor.

As a result, Victor argues that any interface that neglects hands neglects human beings. Tools of the future need to be hand-friendly and take advantage of the wonderful functions hands can perform. His entire article, “A Brief Rant on the Future of Interfaces,” is a glorious read and deserves your attention. One of the best parts is his simple but profound explanation of what a tool does: “A tool addresses human needs by amplifying human capabilities.”

There is, as I see it, one tiny problem with Victor’s vision: hands are tools themselves. They translate brain signals into physical action. Hands are, as Victor shows, super good at that translation. His argument is based on the idea that we should take as much advantage as possible of the amazing tools that hands already are. I disagree.

Read More

CATEGORIZED UNDER: Technology, Top Posts

7 Billion People, 30 Gigatons of CO2, 1 Warming Planet: Population & Climate in the 21st Century

By Robert Socolow | November 18, 2011 11:03 am

The “stabilization wedge” idea is a modular way of reducing carbon emissions.

The world is now home to 7 billion people, each of whom contributes to the carbon emissions that are slowly cooking the globe. To find out how growing population affects our plans to deal with climate change, we talked with Princeton’s Robert Socolow, co-creator of one of the best models for thinking about how to prevent climate change. 

Many of my students are “green” consumers. They are proud of riding bicycles, they turn off lights when they leave the room, and they eat little or no meat. But they are usually surprised when I tell them that the most important decision they will make, as far as its impact on natural resources is concerned, is how many children to have.

Most sources of carbon emissions—heating and lighting homes and stores, making steel, providing food—grow in proportion to population. We’ve just hit 7 billion people, and there’s no way any single approach, or just two or three approaches, can effectively deal with the environmental pressures that this many people exert.

To foster a way of thinking about the problem of climate change that involves using many different approaches in tandem, Steve Pacala and I introduced the “stabilization wedge” in 2004. A wedge is a campaign or strategy that reduces carbon dioxide emissions to the atmosphere over the next 50 years by a specific amount, relative to some baseline future where nothing is done to slow down climate change. Examples of wedge strategies are driving more efficient cars, driving cars less far because cities are laid out differently, building lots of wind power, and growing many more trees.
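As a quick sketch of the arithmetic behind a single wedge: in the 2004 formulation, avoided emissions ramp linearly from zero today to about 1 billion tons of carbon per year at year 50, so one wedge keeps roughly 25 billion tons of carbon out of the atmosphere over the half-century (treat the numbers as approximate).

```python
# Back-of-the-envelope arithmetic for one stabilization wedge: avoided emissions
# ramp linearly from 0 to about 1 billion tons of carbon per year over 50 years.
years = 50
final_avoided_gtc_per_year = 1.0  # GtC/yr avoided at year 50

# The "wedge" is the triangular area under that ramp: cumulative carbon avoided.
cumulative_gtc = 0.5 * years * final_avoided_gtc_per_year
print(f"one wedge avoids ~{cumulative_gtc:.0f} billion tons of carbon over {years} years")
```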

Read More

CATEGORIZED UNDER: Environment, Technology, Top Posts

Later Terminator: We’re Nowhere Near Artificial Brains

By Mark Changizi | November 16, 2011 1:43 pm

I can feel it in the air, so thick I can taste it. Can you? It’s the we’re-going-to-build-an-artificial-brain-at-any-moment feeling. It’s exuded into the atmosphere from news media plumes (“IBM Aims to Build Artificial Human Brain Within 10 Years”) and science-fiction movie fountains…and also from science research itself, including projects like Blue Brain and IBM’s SyNAPSE. For example, here’s a recent press release about the latter:

Today, IBM (NYSE: IBM) researchers unveiled a new generation of experimental computer chips designed to emulate the brain’s abilities for perception, action and cognition.

Now, I’m as romantic as the next scientist (as evidence, see my earlier post on science monk Carl Sagan), but even I carry around a jug of cold water for cases like this. Here are four flavors of chilled water to help clear the palate.

The Worm in the Pass

In the story about the Spartans at the Battle of Thermopylae, 300 soldiers prevent a million-man army from making its way through a narrow mountain pass. In neuroscience, it is the roughly 300 neurons of the roundworm C. elegans that stand in the way of our understanding the huge collections of neurons found in our or any mammal’s brain.

Read More

CATEGORIZED UNDER: Mind & Brain, Technology, Top Posts