Whenever I hear that some awesome technology is “twenty years away,” my eyebrow involuntarily rises with suspicion. Cold fusion, male birth control, flying cars, and the cure for most diseases are all twenty years away. Why? Because that’s the distance at which it’s genuinely impossible to extrapolate scientific advancement. So, when Will Rosellini, the CEO and President of MicroTransponder and a consultant to the team developing Deus Ex: Human Revolution, told me that neuroprosthetic augmentation was about twenty years away, I was skeptical, but intrigued.
Guessing at which technologies will come to fruition requires the ability to determine how many intermediate technologies can reasonably be attained in a given amount of time. From there, one can extrapolate and make educated suppositions about when something like a lifelike prosthetic arm might reasonably become possible.
Rosellini explained his process with DX:HR:
My job at MicroTransponder in large part is writing near-term science fiction. I do this by combining all the failure modes from science, business, law, etc., and then designing a research strategy to mitigate these risks and get new technologies into patients. With Deus Ex, I was given the task of explaining in a rigorous way all of the player abilities in the game. To do this, I extrapolated where technologies would be moving in the next 20 years (to 2027, the start of the game). Most implantable neuroprosthetics take 10 years to get to market, so essentially I was forced to make one extra jump to foreseeable technologies.
So what are the background technologies that support this research? Are there any scary government projects with weird code names like MK-ULTRA and Project ARTICHOKE that may give us some insight into where neuro-implants might be heading? You bet there are. Read on to learn just how soon we can hope for retinal displays, neuro-integrated prosthetics, and mind-computer interfaces.
[Image caption: A fossilized trilobite with a bite mark. Evolutionary neuroscientists suggest that the brain only developed after animals developed a taste for eating animals. Pity the species of the planet…]
This is the third of a series of posts about the evolution of consciousness. In the first post, I laid out a basic theory that goes something like this: consciousness began to evolve about 350 million years ago, when we emerged from the water onto land. Why? By enabling vision to work over distances many times greater than in water, this move gave us the ability to perceive multiple futures. As a result, the ability to consciously plan ahead became important. In my last post, I detailed why long-distance vision reigns supreme when it comes to planning (as opposed to other long-distance senses such as hearing or smell).
In this post, I want to make the argument more comprehensive. The crucial environmental condition for evolving neural structures to support planning is that there is an interlude, a space to breathe, between perception and action. Without such a gap, only simple, fast, and direct transformations between sensory input and motor output can keep an organism safe from predators. But the long-range sensing abilities discussed in the last two posts are just one way for such a gap to open: other brain abilities unrelated to sensing can open it as well.
Here, I consider two such capabilities: memory and communication. An animal can plan to do something based on memory (“I remember a good breakfast was always in this direction”), communication (“hey buddy, around the corner is a good place for lunch”), and, as discussed already, perception (“I see something tasty-looking over there”). Let’s go through planning via memory and communication, and compare these to the perceptual route. Combined, the three mechanisms are the very grist for the mill of consciousness-as-planning.
Rise of the Planet of the Apes caught me off guard. I went into the film thinking it would be another anti-enhancement, “all scientists are Frankensteins trying to cheat nature” film. I have rarely been so happy to be wrong. Instead, the film treats the viewer to an entertaining exploration of animal rights, what it means to be human, and what’s at stake when it comes to enhancing our minds.
Rise of the Planet of the Apes is told from the perspective of Caesar (Andy Serkis), a chimp who is exposed to an anti-Alzheimer’s drug, ALZ-112, in the womb. ALZ-112 causes Caesar’s already healthy brain to develop more rapidly than either a chimp or human counterpart. Due to a series of implausible but not unbelievable events, Caesar is raised by Will Rodman (James Franco), the scientist developing ALZ-112. Rodman is in part driven by the desire to cure his father, Charles (played masterfully by John Lithgow), who suffers from Alzheimer’s. As Caesar develops, his place in Will’s home becomes uncertain and his loyalty to humanity is called into question. After being mistreated, abandoned, and abused, Caesar uses his enhanced intelligence as a tool of self-defense and liberation for himself and his fellow apes.
That cognitive enhancement is a way of seeking liberty is a critical theme that gives Rise of the Apes a nuance and depth I was not anticipating. Though the apes are at times frightening, they are never monstrous or mindless. Though they are at times violent, they are never barbaric. Caesar and his comrades are oppressed and imprisoned; enhancement is a means to freedom. There is less Frankenstein and more Flowers for Algernon in the film than the trailer lets on. It’s an action film with a brain.
As Rise of the Planet of the Apes is not out yet, I’m reluctant to do a full analysis of the implications of the film’s plot. That will have to come after August 5th, when the movie is released.
I had a chance to interview Andy Serkis, James Franco, and director Rupert Wyatt. The interviews are posted after the jump, where you can see how James Franco was caught off guard by my questions about cognitive enhancement, Rupert Wyatt explores the way in which the apes mirror humanity, and Andy Serkis describes enhancement as a tool of liberation. It’s good stuff, enjoy.
A major argument against human enhancement is that most enhancements won’t be beneficial if everyone is enhanced. Being tall, for example, is only beneficial if you’re taller than most other people. In terms of competitive advantage, nearly any enhancement you look at fails the zero-sum test. Better, stronger muscles? Too bad, everyone else has those, so you won’t be an athletic superstar. Whiz-bang intelligence? Big deal, MIT just ups its entrance exam to compensate so that only the most brilliant among a population of geniuses get in. If all boats rise, you don’t benefit, right?
An excellent example of this mindset can be found in The Incredibles. My love of Pixar is not a mystery to anyone. However, one of the lines that bothers me most in any of their films is Syndrome’s motivating thesis in The Incredibles. Syndrome (Buddy Pine) is a once-in-a-generation genius who, born without superpowers like those of ElastiGirl and Mr. Incredible, builds technology that enables him to be superhuman. In short, Syndrome is what would happen if Tony Stark had been bullied as a kid and told by Captain America to let the big boys take care of everything.
When “monologuing” (the meta humor in the movie is fantastic), Syndrome betrays the kernel of his motivation to be a super villain. His goal is to neutralize those with superpowers (aka “supers”) so that when his robot attacks the city, he can be the sole savior. After being crowned a hero when the supers fail, he will sell his own gizmos and gadgets — rocket boots and zero-point energy among other things — to anyone who wants them. Thereby, he will give every person the opportunity to be super. And, by his logic, “When everyone is super, then no one will be.”
We can apply Syndrome’s concept to cognitive enhancement. That is, “When everyone is gifted and talented, no one will be.” Buddy, you are mistaken. Ender’s Game explains why.
Ethics has a bizarre blind spot around parents and children. For no justifiable reason that I can discern, we deem it perfectly tolerable for a parent to decide unilaterally to raise their child genderless, or under the Tiger Mother or laissez-faire method of parenting, yet we recoil in horror at the idea of someone “testing” one of these parental styles on a child. Recall, there is no test to become a parent, no minimum qualification or form of licensing. In fact, if you are so irresponsible as to unintentionally have a child you do not want and cannot support, you have more of a right (and obligation) to rear that child than a stranger with the means and desire to give that child a better life.
We erroneously connect the ability to reproduce with the ability to rear in our social norms and in our laws. As adoption, IVF, sperm/egg donation and surrogate mothers along with new family structures challenge the concept that the person who provides the gametes or womb is also the person who will teach the child to ride a bicycle, we need to investigate the impact of perpetuating the idea that there is a link between reproducing and rearing.
I would like to test this reproduce-rearing correlation with a thought experiment. The details of the thought experiment appear below the fold, but the conclusion is as follows: it would be ethically permissible for a scientist to adopt a large group of children and then perform specific, non-harmful, nature-vs-nurture social experiments on those children. My idea comes from an interview by Charles Q. Choi at Too Hard for Science? with Steven Pinker about just such an experiment:
There is one morally repugnant line of thought Pinker strenuously objects to that could resolve this question. “Basically, every nature-nurture debate could be settled for good if we could raise a group of children in a closed environment of our own design, the way we do with animals,” he says. . .
“The biological basis of sex differences could be tested by dressing babies identically, hiding their sex from the people they interact with, and treating them identically, or better still, dividing them into four groups — boys treated as boys, boys treated as girls, girls treated as girls, girls treated as boys,” he notes. . .
“There’s no end to the ethical horrors that could be raised by this exercise,” Pinker says.
“In the sex-difference experiment, could we emasculate the boys at different ages, including in utero, and do sham operations on the girls as a control?” Pinker asks. “In the language experiment, could we ‘sacrifice’ the children at various ages, to use the common euphemism in animal research, and dissect their brains?”
“This is a line of thought that is morally corrosive even in the contemplation, so your thought experiments can go only so far,” he says.
So let’s test the limits of Pinker’s last line. Ethics is rife with horrific thought experiments designed to out our biases and assumptions. And I intend to use a thought experiment to expose our bias that reproductive capacity equals rearing capacity. That is, merely because you can have a kid doesn’t mean you should be allowed to decide how to raise it. Using three scenarios, I’ll argue that a team of scientists adopting a large group of children with the dual intent of raising happy and healthy children while also conducting non-surgical, non-invasive sociological experiments would be ethically permissible.
Update 8/8/11: The conversation continues in Part III here.
I’m back after a hiatus of a few weeks to catch up on some stuff in the lab and the waning weeks of spring quarter teaching here at Northwestern. In my last post, I put forward an idea about why consciousness, defined in a narrow way as “contemplation of plans” (after Bridgeman), evolved, and used this idea to suggest some ways we might improve our consciousness in the future through augmentation technology.
Here’s a quick review: back in our watery days as fish (roughly 350 million years ago), we were in an environment that was not friendly to sensing things far away. This is because of a hard fact about light in water: our ability to see things at a distance is drastically compromised by attenuation and scattering. A useful figure of merit is “attenuation length,” which for light is tens of meters in water but tens of kilometers in air. And that’s in perfectly clear water; add a bit of algae or other microorganisms and it drops dramatically. Roughly speaking, vision in water is like driving a car in fog. Since you’re not seeing very far out, the idea I’ve proposed goes, there is less of an advantage to planning over the space you can sense. On land, you can see much further. Now, if a chance set of mutations gives you the ability to contemplate more than one possible future path through the space ahead, that mutation is more likely to be selected for.
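To make “attenuation length” concrete: light intensity falls off exponentially with distance, so the fraction surviving a trip of length d through a medium with attenuation length L is exp(-d/L). Here is a minimal sketch; the specific lengths are order-of-magnitude assumptions for illustration, not measured values:

```python
import math

def surviving_fraction(distance_m, attenuation_length_m):
    """Exponential (Beer-Lambert style) attenuation: the fraction of
    light I/I0 = exp(-d/L) remaining after traveling distance d through
    a medium with attenuation length L."""
    return math.exp(-distance_m / attenuation_length_m)

# Order-of-magnitude assumptions: ~30 m in clear water, ~30 km in clear air.
WATER_L = 30.0
AIR_L = 30_000.0

for d in (10, 100, 1000):
    print(f"{d:>5} m: water {surviving_fraction(d, WATER_L):.2e}, "
          f"air {surviving_fraction(d, AIR_L):.4f}")
```

Under these assumed numbers, only a few percent of light survives a 100 m trip underwater, while air passes more than 99% over the same distance, which is why long-range vision only starts paying off on land.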
Over at Cosmic Variance, Sean Carroll wrote a great summary of my post. Between my original post and his, many insightful questions and problems were raised by thoughtful readers.
In the interest of both responding to your comments and encouraging more insightful feedback, I’ll have a couple of further posts on this idea that will explore some of the recurring themes that have cropped up in the comments.
Today, since many commenters raised doubts about my claim that vision on land was key (pointing to the long-distance capabilities of our senses of smell and hearing, among other things), I thought I’d start with a review of why, among biological senses, only vision (and, to a more limited degree, echolocation) can deliver the detail needed to lay out multiple future paths to plan over. Are the other types of sensing that you’ve raised as important as sight?
Imagine you know everything on Wikipedia, in the Oxford English Dictionary, and the contents of every book in digital form. When someone asks you what you did twenty years ago, on demand you recall with perfect accuracy every sensation and thought from that moment. Sifting and parsing all of this information is effortless and unconscious. Any fact, instant of time, skill, technique, or data point that you’ve experienced or can access on the internet is in your mind.
Cybernetic brains might make that possible. As computing power and storage continue along their 18-month doubling cycle, there is no reason to believe we won’t at least have cybernetic sub-brains within the coming century. We already offload a tremendous amount of information and communication to our computers and smartphones. Why not make the process more integrated? Of course, what I’m engaging in right now is rampant speculation. But a neuro-computer interface is a possibility. More than that: cyber-brains may be necessary.
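To put that doubling cycle in perspective, here is a toy calculation, assuming (optimistically) that the 18-month trend simply continues unchanged:

```python
def growth_factor(years, doubling_months=18):
    """Capacity multiplier after `years` under a fixed doubling period."""
    doublings = years * 12 / doubling_months
    return 2 ** doublings

print(growth_factor(10))    # one decade: roughly 100x
print(growth_factor(100))   # one century: on the order of 10**20
```

Even if the real trend slows well short of a century, a few decades of growth like this is what makes offloading whole swaths of memory to hardware seem plausible at all.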
If you loved reading Choose-Your-Own-Adventure books as a kid but have outgrown their puerile plots and dog-eared, unrepentantly analog format, take heart: a newly launched system called MyndPlay is a next-gen video version of the genre for adults. “The viewer chooses who lives or dies, whether the good guy or the bad guy wins or whether the hero makes that all-important save,” Mohammed Azam, MyndPlay’s managing director, told New Scientist. Instead of relying on old-fashioned reading, MyndPlay lets you guide the story using mind-reading, via a special headset that records and analyzes your brainwaves. Now you can sit back in your armchair, slap on the headset, and use your mind to direct the action on the screen in front of you. (No word yet if there’s a mind-powered equivalent of keeping a finger on the page you came from, so you can flip back to it if you don’t like how things turn out.)
Do Androids Dream of Electric Sheep? (Blade Runner‘s dead-tree forebear) opens with Deckard arguing with his wife about whether or not to alter her crummy attitude with the “mood organ.” She could, if she so desired, dial her mood so that she was happy and content. Philip K. Dick worried that the ability to alter our mood would remove the authenticity and immediacy of our emotions. Annalee Newitz at io9 seems to be worried mood manipulations will enable a form of social control.
The worry comes from recent developments in neuro-pharmaceuticals. Drugs are already on the market that allow for mood manipulation. The Guardian‘s Amelia Hill notes that drugs like Prozac and chemicals like oxytocin have the ability to make some people calmer, more empathetic, and more altruistic. Calm, empathetic, and altruistic people are far more likely to act morally than anxious, callous, and selfish people. But does that mean mood manipulation is going to let us force people to be moral? And if it does, is that a good thing? Is it moral to force people to be moral?
Source Code, a sci-fi thriller released last week, is based on the premise that science will let people really get into each other’s heads. The eponymous technology, the trailer tells us, is a computer program that “enables you to cross over into another man’s identity.” What results is a scenario that’s part Matrix, part Groundhog Day: plugged into the Source Code program, Jake Gyllenhaal—er, Captain Colter Stevens—lives through the last eight minutes of another man’s consciousness, just before the man’s train was blown up in a terrorist attack, in an effort to identify the bomber. (Stevens’s body, like Neo’s, stays in one place while his mind is elsewhere.) When the first run-through fails to turn up a culprit, Stevens relives those eight minutes again and again, having a different experience—new conversations, new sensations—each time.
Could something like that ever happen? While much of the technology in Source Code will remain purely fiction, says University of Arizona neuroscientist and electrical engineer Charles Higgins, modern science may eventually let us take a peek at, and even play around with, someone else’s consciousness. Among the movie’s technological inventions, Higgins says, “the idea of monitoring and influencing consciousness with a physical neural interface is the most plausible.”