Whenever I hear that some awesome technology is “twenty years away,” my eyebrow rises involuntarily with suspicion. Cold fusion, male birth control, flying cars, and the cure for most diseases are all twenty years away. Why? Because that’s the distance at which it’s genuinely impossible to extrapolate scientific advancement. So, when Will Rosellini, the CEO and President of MicroTransponder and a consultant to the team developing Deus Ex: Human Revolution, told me that neuroprosthetic augmentation was about twenty years away, I was skeptical, but intrigued.
Guessing at which technologies will come to fruition requires the ability to determine how many intermediate technologies can reasonably be attained in a given amount of time. From there, one can extrapolate and make educated suppositions about when something like a lifelike prosthetic arm might become possible.
Rosellini explained his process with DX:HR:
My job at MicroTransponder in large part is writing near-term science fiction. I do this by combining all the failure modes from science, business, law, etc., and then designing a research strategy to mitigate these risks and get new technologies into patients. With Deus Ex, I was given the task of explaining in a rigorous way all of the player abilities in the game. To do this, I extrapolated where technologies would be moving in the next 20 years (to 2027, the start of the game). Most implantable neuroprosthetics take 10 years to get to market, so essentially I was forced to make one extra jump to foreseeable technologies.
So what are the background technologies that support this research? Are there any scary government projects with weird code names like MK-ULTRA and project ARTICHOKE that may give us some insight into where neuro-implants might be heading? You bet there are. Read on to learn about just how soon we can hope for retinal displays, neuro-integrated prosthetics, and mind-computer interfaces.
A fossilized trilobite with a bite mark. Evolutionary neuroscientists suggest that the brain only developed after animals developed a taste for eating animals. Pity the species of the planet…
This is the third of a series of posts about the evolution of consciousness. In the first post, I laid out a basic theory that goes something like this: consciousness began to evolve about 350 million years ago, when we emerged from the water onto land. Why? By enabling vision to work over distances many times greater than in water, this move gave us the ability to perceive multiple futures. As a result, the ability to consciously plan ahead became important. In my last post, I detailed why long-distance vision reigns supreme when it comes to planning (as opposed to other long-distance senses such as hearing or smell).
In this post, I want to make the argument more comprehensive. The crucial environmental condition for evolving neural structures to support planning is that there is an interlude, a space to breathe, between perception and action. Without such a gap, only simple, fast, and direct transformations between sensory input and motor output can keep an organism safe from predators. But the long-range sensing abilities discussed in the last two posts are just one way for such a gap to open: there are other fancy brain abilities unrelated to sensing that can also open this gap.
Here, I consider two such capabilities: memory and communication. An animal can plan to do something based on memory (“I remember a good breakfast was always in this direction”), communication (“hey buddy, around the corner is a good place for lunch”), and, as discussed already, perception (“I see something tasty-looking over there”). Let’s go through planning via memory and communication, and compare these to the perceptual route. Combined, the three mechanisms are the very grist for the mill of consciousness-as-planning.
Rise of the Planet of the Apes caught me off guard. I went into the film thinking it would be another anti-enhancement, “all scientists are Frankensteins trying to cheat nature” film. I have rarely been so happy to be wrong. Instead, the film treats the viewer to an entertaining exploration of animal rights, what it means to be human, and what’s at stake when it comes to enhancing our minds.
Rise of the Planet of the Apes is told from the perspective of Caesar (Andy Serkis), a chimp who is exposed to an anti-Alzheimer’s drug, ALZ-112, in the womb. ALZ-112 causes Caesar’s already healthy brain to develop more rapidly than either a chimp or human counterpart. Due to a series of implausible but not unbelievable events, Caesar is raised by Will Rodman (James Franco), the scientist developing ALZ-112. Rodman is driven in part by the desire to cure his father, Charles (played masterfully by John Lithgow), who suffers from Alzheimer’s. As Caesar develops, his place in Will’s home becomes uncertain and his loyalty to humanity is called into question. After being mistreated, abandoned, and abused, Caesar uses his enhanced intelligence as a tool of self-defense and liberation for himself and his fellow apes.
That cognitive enhancement is a way of seeking liberty is a critical theme that gives Rise of the Apes a nuance and depth I was not anticipating. Though the apes are at times frightening, they are never monstrous or mindless. Though they are at times violent, they are never barbaric. Caesar and his comrades are oppressed and imprisoned – enhancement is a means to freedom. There is less Frankenstein and more Flowers for Algernon in the film than the trailer lets on. It’s an action film with a brain.
As Rise of the Planet of the Apes is not out yet, I’m reluctant to do a full analysis of the implications of the film’s plot. That will have to come after August 5th, when the movie is released.
I had a chance to interview Andy Serkis, James Franco, and director Rupert Wyatt. The interviews are posted after the jump, where you can see how James Franco was caught off guard by my questions about cognitive enhancement, Rupert Wyatt explores the way in which the apes mirror humanity, and Andy Serkis describes enhancement as a tool of liberation. It’s good stuff, enjoy.
Update 8/8/11: The conversation continues in Part III here.
I’m back after a hiatus of a few weeks to catch up on some stuff in the lab and the waning weeks of spring quarter teaching here at Northwestern. In my last post, I put forward an idea about why consciousness, defined in a narrow way as “contemplation of plans” (after Bridgeman), evolved, and used this idea to suggest some ways we might improve our consciousness in the future through augmentation technology.
Here’s a quick review: back in our watery days as fish (roughly 350 million years ago), we lived in an environment that was not friendly to sensing things far away. This is because of a hard fact about light in water: our ability to see things at a distance is drastically compromised by attenuation and scattering. A useful figure of merit is “attenuation length,” which for light in water is tens of meters, while in air it is tens of thousands of meters. And that is in perfectly clear water; add a bit of algae or other microorganisms and it drops dramatically. Roughly speaking, vision in water is similar to driving a car in fog. Since you’re not seeing very far out, the idea I’ve proposed goes, there is less of an advantage to planning over the space you can sense. On land, you can see a lot further out. Now, if a chance set of mutations gives you the ability to contemplate more than one possible future path through the space ahead, that mutation is more likely to be selected for.
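To put rough numbers on this, the fraction of light surviving a straight path of length d with attenuation length L follows the Beer-Lambert relation I/I0 = exp(-d/L). The sketch below uses illustrative, order-of-magnitude values for L (not measured figures) to show how quickly vision fades underwater compared with air:

```python
import math

def transmitted_fraction(distance_m, attenuation_length_m):
    """Fraction of light surviving a straight path, per the Beer-Lambert
    relation I/I0 = exp(-d / L), where L is the attenuation length."""
    return math.exp(-distance_m / attenuation_length_m)

# Illustrative attenuation lengths (order of magnitude only, not measurements):
WATER_L = 30.0      # tens of meters, in very clear water
AIR_L = 30_000.0    # tens of thousands of meters, in clear air

for d in (10.0, 100.0, 1000.0):
    print(f"{d:6.0f} m: water {transmitted_fraction(d, WATER_L):.2e}, "
          f"air {transmitted_fraction(d, AIR_L):.4f}")
```

At a kilometer, essentially no light survives a path through even very clear water, while air barely dims it at all, which is the asymmetry the argument turns on.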
Over at Cosmic Variance, Sean Carroll wrote a great summary of my post. Between my original post and his, many insightful questions and problems were raised by thoughtful readers.
In the interest of both responding to your comments and encouraging more insightful feedback, I’ll have a couple of further posts on this idea that will explore some of the recurring themes that have cropped up in the comments.
Today, since many commenters raised doubts about my claim that vision on land was key, pointing to the long-distance capabilities of our senses of smell and hearing, among other objections, I thought I’d start with a review of why, among biological senses, only vision (and, to a more limited degree, echolocation) can provide the detail necessary for having multiple future paths to plan over. Are the other types of sensing that you’ve raised as important as sight?
If you loved reading Choose-Your-Own-Adventure books as a kid but have outgrown their puerile plots and dog-eared, unrepentantly analog format, take heart: A newly launched system called MyndPlay is a next-gen video version of the genre for adults. “The viewer chooses who lives or dies, whether the good guy or the bad guy wins or whether the hero makes that all-important save,” Mohammed Azam, MyndPlay’s managing director, told New Scientist. Instead of relying on old-fashioned reading, MyndPlay lets you guide the story using mind-reading, via a special headset that records and analyzes your brainwaves. Now you can sit back in your armchair, slap on the headset, and use your mind to direct the action on the screen in front of you. (No word yet on whether there’s a mind-powered equivalent of keeping a finger on the page you came from, so you can flip back if you don’t like how things turn out.)
Do Androids Dream of Electric Sheep? (Blade Runner‘s dead-tree forebear) opens with Deckard arguing with his wife about whether or not to alter her crummy attitude with the “mood organ.” She could, if she so desired, dial her mood so that she was happy and content. Philip K. Dick worried that the ability to alter our mood would remove the authenticity and immediacy of our emotions. Annalee Newitz at io9 seems to be worried mood manipulations will enable a form of social control.
The worry comes from recent developments in neuro-pharmaceuticals. Drugs are already on the market that allow for mood manipulation. The Guardian‘s Amelia Hill notes that drugs like Prozac and chemicals like oxytocin have the ability to make some people calmer, more empathetic, and more altruistic. Calm, empathetic, and altruistic people are far more likely to act morally than anxious, callous, and selfish people. But is mood manipulation going to let us force people to be moral? And if it does, is that a good thing? Is it moral to force people to be moral?
Source Code, a sci-fi thriller released last week, is based on the premise that science will let people really get into each other’s heads. The eponymous technology, the trailer tells us, is a computer program that “enables you to cross over into another man’s identity.” What results is a scenario that’s part Matrix, part Groundhog Day: Plugged into the Source Code program, Jake Gyllenhaal—er, Captain Colter Stevens—lives through the last eight minutes of another man’s consciousness, just before the man’s train was blown up in a terrorist attack, in an effort to identify the bomber. (Stevens’s body, like Neo’s, stays in one place while his mind is elsewhere.) When the first run-through fails to turn up a culprit, Stevens relives those eight minutes again and again, having a different experience—new conversations, new sensations—each time.
Could something like that ever happen? While much of the technology in Source Code will remain purely fiction, says University of Arizona neuroscientist and electrical engineer Charles Higgins, modern science may eventually let us take a peek at, and even play around with, someone else’s consciousness. Among the movie’s technological inventions, Higgins says, “the idea of monitoring and influencing consciousness with a physical neural interface is the most plausible.”
You will spend a third of your life asleep. If you don’t, your waking hours will be of reduced quality and productivity. For 99% of us, seven hours a night is a biological necessity. For a select 1%, what Melinda Beck at the Wall Street Journal dubs the “Sleepless Elite,” less sleep equals more life. So-called short sleepers operate with a kind of low-intensity mania which allows them to go to bed late and wake up early without needing a gallon of coffee to get through the day. And, as it turns out, the ability might be genetic.
“My long-term goal is to someday learn enough so we can manipulate the sleep pathways without damaging our health,” says human geneticist Ying-Hui Fu at the University of California-San Francisco. “Everybody can use more waking hours, even if you just watch movies.”
Dr. Fu was part of a research team that discovered a gene variation, hDEC2, in a pair of short sleepers in 2009. They were studying extreme early birds when they noticed that two of their subjects, a mother and daughter, got up naturally at about 4 a.m. but also went to bed past midnight.
Genetic analyses spotted one gene variation common to them both. The scientists were able to replicate the gene variation in a strain of mice and found that the mice needed less sleep than usual, too.
Dr. Fu’s research is a reason for excitement because the goal is not just to locate the gene, but to find a way to manipulate sleep pathways safely. For those of us already alive, that means there might be better, safer, more effective stimulants in the future. For those not yet born, genetic engineering may enable future generations to spend less time sawing logs and more time enjoying life. More life! Less sleep! It’s like a longevity enhancement that does nothing to extend your time alive, but instead maximizes your use of that time. But how do short sleepers use their time?
People who work in robotics prefer not to highlight a reality of our work: robots are not very reliable. They break, all the time. This applies to all research robots, which typically flake out just as you’re giving an important demo to a funding agency or someone you’re trying to impress. My fish robot is back in the shop, again, after a few of its very rigid and very thin fin rays broke. Industrial robots, such as those you see on car assembly lines, can only do better by operating in extremely predictable, structured environments, doing the same thing over and over again. Home robots? If you buy a Roomba, be prepared to adjust your floor plan so that it doesn’t get stuck.
What’s going on? The world is constantly throwing curveballs at robots that weren’t anticipated by the designers. In a novel approach to this problem, Josh Bongard has recently shown how we can use the principles of evolution to make a robot’s “nervous system”—I’ll call it the robot’s controller—robust against many kinds of change. This study was done using large amounts of computer simulation time (it would have taken 50–100 years on a single computer), running a program that can simulate the effects of real-world physics on robots.
What he showed is that if we force a robot’s controller to work across widely varying robot body shapes, the robot can learn faster, and be more resistant to knocks that might leave your home robot a smoking pile of motors and silicon. It’s a remarkable result, one that offers a compelling illustration of why intelligence, in the broad sense of adaptively coping with the world, is about more than just what’s above your shoulders. How did the study show it?
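Bongard’s experiments ran in a full physics simulator, but the core selection pressure, rewarding a controller only as much as its worst performance across different body shapes, can be caricatured in a few lines. Everything below (the one-parameter “controller,” the body scalings, the fitness function) is invented for illustration and is not his implementation:

```python
import random

random.seed(0)  # deterministic toy run

# Toy stand-in for "body shapes": different scalings of the controller's effect.
BODY_VARIANTS = [0.5, 1.0, 2.0]

def fitness(gain, body):
    """Hypothetical task: controller output gain * body should hit a target
    of 1.0. Higher is better (negative squared error)."""
    return -(gain * body - 1.0) ** 2

def robust_fitness(gain):
    """Score a controller by its WORST performance across body variants,
    so evolution favors controllers that work on every body."""
    return min(fitness(gain, b) for b in BODY_VARIANTS)

def evolve(generations=200, pop_size=20):
    """Minimal elitist evolutionary loop: keep the best half, mutate them."""
    pop = [random.uniform(0.0, 3.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=robust_fitness, reverse=True)
        survivors = pop[: pop_size // 2]                              # selection
        children = [g + random.gauss(0.0, 0.05) for g in survivors]   # mutation
        pop = survivors + children
    return max(pop, key=robust_fitness)

best = evolve()
print(f"best gain: {best:.3f}, worst-case fitness: {robust_fitness(best):.3f}")
```

The evolved gain compromises between the extreme bodies rather than excelling on any single one, which is the toy analogue of a controller that survives a change in morphology.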