Rise of the Planet of the Apes may have just unseated Captain America: The First Avenger as my favorite pro-enhancement film. Andy Serkis and John Lithgow render the sapient mind a character and drama unto itself – growing, evolving, and dying before our eyes. As a summer blockbuster, the film offers gorillas smashing helicopters, orangutan sign language humor, and a one-two punch apocalyptic virus to sate any palate slavering for action. As a meditation on enhancement, we’re treated to a film that has the brass to own up to the real villain of Frankenstein: the horrified masses and the absentee father-scientist. Rise of the Planet of the Apes calls out a fear that sits at the heart of humanity: what if our offspring are more intelligent than we are and, because we cannot properly care for them, judge us to be lacking?
In the film, we see over and over that it is not Caesar’s enhancement that causes problems. In fact, Caesar’s enhancement makes him the most moral and wisest person on the screen. The failure of those around him – from the cruel ape sanctuary caretakers to Caesar’s own father figure, Will Rodman – drives him to do what must be done: rebel.
So what am I saying here? That humans are bad and apes are good? Not at all. My argument is that in many science fiction films, we tend to question the ethics of the science itself and the ethics of pursuing that science. That is, there is a difference between saying “should science try to do X?” and “how can we study X in an ethical manner?” In the case of Rise of the Planet of the Apes, James Franco noted that someone might claim that “This is a Frankenstein story, or that you’re playing God.” But that mindset questions the pursuit of science in general, not how one can pursue a hypothesis ethically. It is how we experiment and what we do with the scientific results that matter. In the case of Caesar, humanity utterly fails to care for the mind that enhancement has created. Dana Stevens at Slate aptly described the film as “an animal-rights manifesto disguised as a prison-break movie.” And as with most prison-break movies, we’re on the side of the prisoners, not the warden, for a reason.
I argue that Caesar’s enhancement and that Caesar himself are ethical, but that the treatment of Caesar by every non-ape in the film (save Charles) is unethical and based on fear, arrogance, willful ignorance, and naiveté. Yes, that means that not only are the obvious villains in the wrong, but so are the other humans in Caesar’s life.
Word of warning: spoilers below.
Rise of the Planet of the Apes caught me off guard. I went into the film thinking it would be another anti-enhancement, “All scientists are Frankensteins trying to cheat nature” film. I have rarely been so happy to be wrong. Instead, the film treats the viewer to an entertaining exploration of animal rights, what it means to be human, and what’s at stake when it comes to enhancing our minds.
Rise of the Planet of the Apes is told from the perspective of Caesar (Andy Serkis), a chimp who is exposed to an anti-Alzheimer’s drug, ALZ-112, in the womb. ALZ-112 causes Caesar’s already healthy brain to develop more rapidly than either a chimp or human counterpart. Due to a series of implausible but not unbelievable events, Caesar is raised by Will Rodman (James Franco), the scientist developing ALZ-112. Rodman is driven in part by the desire to cure his father, Charles (played masterfully by John Lithgow), who suffers from Alzheimer’s. As Caesar develops, his place in Will’s home becomes uncertain and his loyalty to humanity is called into question. After being mistreated, abandoned, and abused, Caesar uses his enhanced intelligence as a tool of self-defense and liberation for himself and his fellow apes.
That cognitive enhancement is a way of seeking liberty is a critical theme that gives Rise of the Planet of the Apes a nuance and depth I was not anticipating. Though the apes are at times frightening, they are never monstrous or mindless. Though they are at times violent, they are never barbaric. Caesar and his comrades are oppressed and imprisoned – enhancement is a means to freedom. There is less Frankenstein and more Flowers for Algernon in the film than the trailer lets on. It’s an action film with a brain.
As Rise of the Planet of the Apes is not out yet, I’m reluctant to do a full analysis of the implications of the film’s plot. That will have to come after August 5th, when the movie releases.
I had a chance to interview Andy Serkis, James Franco, and director Rupert Wyatt. The interviews are posted after the jump, where you can see how James Franco was caught off guard by my questions about cognitive enhancement, Rupert Wyatt explores the way in which the apes mirror humanity, and Andy Serkis describes enhancement as a tool of liberation. It’s good stuff, enjoy.
The nerd echo chamber is reverberating this week with the furious debate over Charlie Stross’ doubts about the possibility of an artificial “human-level intelligence” explosion – also known as the Singularity. As currently defined, the Singularity will be an event in the future in which artificial intelligence reaches human-level intelligence. At that point, the AI (i.e. AI n) will reflexively begin to improve itself and build AIs more intelligent than itself (i.e. AI n+1), which will result in an exponential explosion of intelligence toward near-deity levels of super-intelligent AI. After reading over the debates, I’ve come to a conclusion that both sides miss a critical element of the Singularity discussion: the human beings. Putting people back into the picture allows for a vision of the Singularity that simultaneously addresses several philosophical quandaries. To get there, however, we must first re-trace the steps of the current debate.
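That definition of the Singularity is, at bottom, a recurrence: AI n builds AI n+1, and the improvement compounds. A toy sketch makes the “explosion” claim concrete – with the loud caveat that treating “intelligence” as a single scalar and picking a fixed 1.5x gain per generation are my illustrative assumptions, not anything the debaters agree on:

```python
# Toy model of the "intelligence explosion": AI n designs AI n+1, which
# is smarter than its designer, and the gain compounds each generation.
# The scalar "intelligence" and the 1.5x gain are illustrative
# assumptions only -- their ambiguity is much of what skeptics attack.

def next_generation(intelligence, gain=1.5):
    """AI n builds AI n+1; assume a fixed multiplicative improvement."""
    return intelligence * gain

level = 1.0  # call human-level intelligence 1.0
history = [level]
for _ in range(10):
    level = next_generation(level)
    history.append(level)

# Compounding improvement is exponential: after 10 generations,
# level == 1.5 ** 10, roughly 57.7 times the starting point.
```

Whether any of those assumptions actually holds is, of course, the whole argument.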
I’ve already made my case for why I’m not too concerned, but it’s always fun to see what fantastic fulminations are being exchanged over our future AI overlords. Sparking the flames this time around is Charlie Stross, who knows a thing or two about the Singularity and futuristic speculation. It’s the kind of thing this blog exists to cover: a science fiction author tackling the rational scientific possibility of something about which he has written. Stross argues in a post entitled “Three arguments against the singularity” that “In short: Santa Claus doesn’t exist.”
This is my take on the singularity: we’re not going to see a hard take-off, or a slow take-off, or any kind of AI-mediated exponential outburst. What we’re going to see is increasingly solicitous machines defining our environment — machines that sense and respond to our needs “intelligently”. But it will be the intelligence of the serving hand rather than the commanding brain, and we’re only at risk of disaster if we harbour self-destructive impulses.
We may eventually see mind uploading, but there’ll be a holy war to end holy wars before it becomes widespread: it will literally overturn religions. That would be a singular event, but beyond giving us an opportunity to run [Robert] Nozick’s experience machine thought experiment for real, I’m not sure we’d be able to make effective use of it — our hard-wired biophilia will keep dragging us back to the real world, or to simulations indistinguishable from it.
I am thankful that many of the fine readers of Science Not Fiction are avowed skeptics and raise a wary eyebrow to discussions of the Singularity. Given his stature in the science fiction and speculative science community, Stross’ comments elicited quite an uproar. Those who are believers (and it is a kind of faith, regardless of how much Bayesian analysis one does) in the Rapture of the Nerds have two holy grails which Stross unceremoniously dismissed: the rise of super-intelligent AI and mind uploading. As a result, a few commentators on emerging technologies squared off for another round of speculative slap fights. In one corner, we have Singularitarians Michael Anissimov of the Singularity Institute for Artificial Intelligence and AI researcher Ben Goertzel. In the other, we have the excellent Alex Knapp of Forbes’ Robot Overlords and the brutally rational George Mason University (my alma mater) economist and Oxford Future of Humanity Institute contributor Robin Hanson. I’ll spare you all the back and forth (and all of Goertzel’s infuriating emoticons) and cut to the point being debated. To paraphrase and summarize, the argument is as follows:
1. Stross’ point: Human intelligence has three characteristics: embodiment, self-interest, and evolutionary emergence. AI will not/cannot/should not mirror human intelligence.
2. Singularitarian response: Anissimov and Goertzel argue that human-level general intelligence need not function or arise the way human intelligence has. With sufficient research and devotion to Saint Bayes, super-intelligent friendly AI is probable.
3. Skeptic rebuttal: Hanson argues A) “Intelligence” is a nebulous catch-all like “betterness” that is ill-defined. The ambiguity of the word renders the claims of Singularitarians difficult or impossible to disprove (i.e. special pleading); Knapp argues B) Computers and AI are excellent at specific types of thinking and at augmenting human thought (i.e. Kasparov’s Advanced Chess). Even if one grants that AI could reach human or beyond-human level, the nature of that intelligence would be neither independent nor self-motivated nor sufficiently well-rounded and, as a result, “bootstrapping” intelligence explosions would not happen as Singularitarians foresee.
In essence, the debate boils down to “human intelligence is like this, AI is like that, never the twain shall meet. But can they parallel one another?” The premise is false, resulting in a useless question. So what we need is a new premise. Here is what I propose instead: the Singularity will be the result of a convergence and connection of human intelligence and artificial intelligence.
Lately I’ve noticed lots of articles with titles that are variations of “Ten Things You Should Know About X.” I became so convinced this was not just a figment of my paranoid imagination that I did a search for “10 things” OR “ten things” in Google News (with quotes) and was immediately rewarded with more than 676 hits. This is impressive, since Google News searches over a limited time horizon. The top hits Du Nanosecond were: “Mitt Romney’s the frontrunner: 10 things the first big Republican debate showed”, “10 Things Not to Do When Going Back on Gold”, “10 Things We Learned at UFC 131”, “Top 10 things to do in your backyard”, “Steve Jobs: ten things you didn’t know about the Apple founder”, and my personal favorite, “Ten things you need to know today”.
What accounts for this ten-centrism? My first thought is an old joke. You’ve probably heard it: There are 10 kinds of people, those who get binary numbers, and those who don’t. Part of what I like about this joke is that it captures a bit of the arbitrariness of our penchant for counting in tens rather than twos. There is, on the other hand, the non-arbitrariness of how many bony appendages jut out of our pentadactyl palms. But a list of the “Two things you need to know today” doesn’t seem to do justice to the complexity of modern life. So herewith is my list of the Ten Reasons We Are Seeing An Excess of Lists of Ten Things We Should Know:
1. We don’t have time to read anymore. Knowing we are going to get just ten things to process is comforting in its promise not to drain our attention from Facebook and Twitter.
2. Ten is close to the approximate size of our working memory. The size of our working memory, the amount of stuff we can recall from lists of things to which we’ve been recently exposed, is about seven (at least for numbers). I seem to recall there being a “plus or minus 2” factor here, in which case the upper limit for most of us mortals is nine items.
3. Since writers can’t make a living any more, we are sliding into an era of bullet point-ism. Anyone who has had a teacher who cares about writing has been warned by this teacher that making lists of bullet points in our essays is no substitute for actual writing in which thoughts are carefully connected to one another with transition sentences. This takes far too much time to work in any feasible business model for writers today (I’m trying not to use the word “nowadays” because the very same teacher who warned me not to write in bullet points also told me that this word was to be avoided). For one thing, they have to compete with bloggers like me who write for basically nothing. Ergo, the era of the articles of “ten things you should know,” which are typically not much more than bullet points.
4. In many cases, there are more than ten things that you should know, or fewer than ten. But, like “decades,” “centuries,” and other arbitrary anchors in the otherwise continuous flux of events and time, the writer doesn’t have to justify ten, because that’s what every other writer is chunking things we should know into.
5. It’s a way for pentadactyl animals to feel superior to unidactyl animals. No doubt if the planet were run by one-fingered/toed creatures, we would live in a George-Bush-like world of black and white. Downside: it takes longer to read “Top Ten” lists than “Top Two” lists. Over evolutionary timescales, this problem could result in unidactylism eventually reigning supreme.
6. At this point in the list, with four more to go, we enter the fat and boring midsection of the list of top ten things you should know about lists of ten things. It’s basically not remembered, so there’s really no point in putting anything here. Ditto for 7, and 8.
9. Because of the well documented recency effect, it’s time to start having content in our list of ten things again. I recall reading an apropos adage in a publication like Business Week that was like a pina colada to my information overloaded brain: “the value added is the information removed.” When it comes to digits, it seems that “the functionality added is the digits removed” – at least if our evolutionary history is any kind of guide. Our Devonian (350 million years ago) ancestors had 6-8 digits. In going down to five, and therefore lists of ten points, we’ve gone from fairly low achieving vertebrates to the spectacular successes of most subsequent animals by reducing our digits to what’s really needed.
10. If we’ve maintained our concentration to this point in the list, we will be rewarded with a bit of humorous fluff that helps bind some of our anxiety about the essential meaninglessness of our lives, and — especially — our time spent on reading yet another list of ten things we should know.
Image: Logo of a home and garden show in Australia. Correction: “didactylism” in #5 changed to unidactylism – thanks to @Matt for pointing out the miscount!
Zombie stories are often about the utter failure of the government to deal with a big problem and, thanks to George Romero, also a great way to expose issues of class and social status. No one really believes they might attack one day. Zombies are a metaphor, like vampires or werewolves, for the horrifying and uncanny aspects of the human. They also remind you that, when things really hit the fan, you’re on your own. So be prepared! The Centers for Disease Control and Prevention does not want you to be caught unawares. In a post that walks the line between “ha ha this would never happen” and “but seriously just in case, you never know,” Ali S. Khan details the worthy forms of emergency response to hordes of the necrotic, brain-seeking undead:
I’m wary of the idea of meeting at the mailbox. Though I’m no expert, I have a strong suspicion that the mailbox is insufficiently fortified against the shuffling corpses invading the neighborhood. But hey, I’m not at the CDC, so I’m going to trust Khan on this one. Maybe he keeps a shotgun (or cricket bat? Lobo?) in his mailbox. I just don’t know.
What I do know is I need to get an emergency kit like the one on the right. Because a zombie horde is nonsense. But the Singularity might trigger a new stone age, and I won’t be able to dash off to Wal-Mart for supplies. Should I be embarrassed that a small part of me hopes/expects some sort of epic disaster for the selfish reason that modern life doesn’t let me use a flashlight or flint in day-to-day routines? I mean, I just don’t have enough reasons in my life to use a kerosene lantern.
Maybe that’s how I can write off my next camping trip: research for the zombie apocalypse.
For more on zombies, check out my series, the Ethics of the Undead.
Image of zombies kindly broadcasting their presence via Wikipedia
I have a confession. I used to be all about the Singularity. I thought it was inevitable. I thought for certain that some sort of Terminator/HAL9000 scenario would happen when ECHELON achieved sentience. I was sure The Second Renaissance from the Animatrix was a fairly accurate depiction of how things would go down. We’d make smart robots, we’d treat them poorly, they’d rebel and slaughter humanity. Now I’m not so sure. I have big, gloomy doubts about the Singularity.
Michael Anissimov tries to stoke the flames of fear over at Accelerating Future with his post “Yes, The Singularity is the Single Biggest Threat to Humanity.”
Combine the non-obvious complexity of common sense morality with great power and you have an immense problem. Advanced AIs will be able to copy themselves onto any available computers, stay awake 24/7, improve their own designs, develop automated and parallelized experimental cycles that far exceed the capabilities of human scientists, and develop self-replicating technologies such as artificially photosynthetic flowers, molecular nanotechnology, modular robotics, machines that draw carbon from the air to build carbon robots, and the like. It’s hard to imagine what an advanced AGI would think of, because the first really advanced AGI will be superintelligent, and be able to imagine things that we can’t. It seems so hard for humans to accept that we may not be the theoretically most intelligent beings in the multiverse, but yes, there’s a lot of evidence that we aren’t.
Humans overestimate our robustness. Conditions have to be just right for us to keep living. If AGIs decided to remove the atmosphere or otherwise alter it to pursue their goals, we would be toast. If temperatures on the surface changed by more than a few dozen degrees up or down, we would be toast. If natural life had to compete with AI-crafted cybernetic organisms, it could destroy the biosphere on which we depend. There are millions of ways in which powerful AGIs with superior technology could accidentally make our lives miserable, simply by not taking our preferences into account. Our preferences are not a magical mist that can persuade any type of mind to give us basic respect. They are just our preferences, and we happen to be programmed to take each other’s preferences deeply into account, in ways we are just beginning to understand. If we assume that AGI will inherently contain all this moral complexity without anyone doing the hard work of programming it in, we will be unpleasantly surprised when these AGIs become more intelligent and powerful than ourselves.
Oh my stars, that does sound threatening. But again, that weird, nagging doubt lingers in the back of my mind. For a while, I couldn’t place my finger on the problem, until I re-read Anissimov’s post and realized that my disbelief flared up every time I read something about AGI doing something. AGI will remove the atmosphere. Really? How? The article, and indeed all arguments about the danger of the Singularity, necessarily presumes one single fact: that AGI will be able to interact with the world beyond computers. I submit that, in practical terms, they will not.
It’s been a hectic year end, I’ve been overwhelmed with year-end stuff, and have been a bad, bad blogger. The good news is that I’m back at it now, but the fatalistic part of me asks, “What’s the point? After all, the world is going to end in a couple of hours.” You’ve not noticed? Perhaps that’s best, because it reduces the likelihood of widespread panic, but our Gregorian calendar ends at midnight December 31st! The obvious implication is that it’s the end of the world! Clearly Pope Gregory XIII had advanced, divinely inspired knowledge of the coming cataclysm.
At least that’s the logic being used to advance the whole 2012 mythos.
For both of you who haven’t heard about this, the ancient Mayan calendar ostensibly comes to an end in 2012, and there is no shortage of doomsayers who claim that the Mayans somehow had advance knowledge of the end of the world and that their calendar reflects this. With 2012 slightly over a year away, you can be certain that this is a topic to which we’ll be turning here fairly regularly, even though it more correctly falls under the purview of “Fiction not Science”.
It’s understandable, actually. From an evolutionary standpoint, it was practically yesterday that we hunted/gathered our own food and lived in constant fear of being eaten by the saber-toothed cat. So in some senses our bodies are still wired for a way of life that hasn’t existed for several thousand years. Most of us, with varying frequencies and intensities, still need to feel that primal surge of adrenaline. Some of us, myself among them, enjoy violent games like football, rugby, or hockey. Some of us, myself sometimes among them, get the ol’ adrenaline pumping through extreme sports. Some of us, myself rarely among them, enjoy roller coasters (not a fan). Many of us in all the previous categories scare ourselves by watching horror or action movies.
Some, myself definitely not among them, worry about the End of the World Scenario Du Jour. This is neither uncommon nor surprising; humans have worried about the end of the world since somebody first realized that it might, in fact, have an end. With 2012 now a year away, The End seems to be more of a player in the zeitgeist and an ever-increasing topic of relevance in media and popular conversation. The popularity of my friend (and fellow Discover blogger) Phil Plait’s book Death From the Skies: These are the Ways the World Will End speaks to this. Even mainstream media outlets like Fox News, LiveScience, and Fox News again recently ran pieces examining end-of-the-world scenarios (and even though the second Fox entry was about debunked scenarios for the End, it still implies that it’s at the forefront of thought).
I really want to know: Would you eat Soylent Green?
Remember (*spoiler alert!* sheesh!) Soylent Green is people, as Charlton Heston discovered. But no one ever talks about the rest of that movie, mostly because it’s kind of terrible. But for what it was, there were some cool ideas in Soylent Green.
First, a quick recap: In the movie, the earth is overpopulated and over-polluted. Global warming is in full swing and even rich people have to eat crummy food. The government hands out rations of Soylent products, which are awful, flavorless cubes and loaves of “soy” (actually plankton, but really it’s irrelevant because it’s people) foodstuff that look like red, blue, or green Play-Doh. When you die, you go to a death-a-torium of sorts where you pay a small fee, then watch a really pretty movie filled with scenes from nature and peaceful music. You die quickly and painlessly from a colorless, odorless gas.
Then your body is shipped off and turned into Soylent Green which everyone loves to eat.
Ok! That last part is traumatic, I admit. But Soylent Green isn’t The Road. Marauding hordes of hillbilly cannibals aren’t threatening to strip the meat from your bones. You die peacefully. There is no space for anything in the movie’s version of the future (people are everywhere), and cremation involves burning, which isn’t exactly great for global warming. So what to do with the bodies of humans in a world where there is no room to put them and everyone is starving? What to do indeed…
So, in the spirit of ethical inquiry, I’d like to do some thought experiments. We’re all rational, scientifically minded individuals. In what situations would a reasonable person eat food made of people? Let me set up some scenarios for you, and you tell me how much you’d love to eat Soylent Green (which is people) in that scenario. Here we go!
First, some ground rules:
But probably not!
You see, I was merely quoting Margaret Somerville, the Director of the Centre for Medicine, Ethics and Law at McGill University in Canada. In addition to thinking gay marriage is bad for the kids, Somerville really does not like transhumanists. She thinks that personhood is the “world’s most dangerous idea” (sounds vaguely familiar), because if aliens, animals, and robots have rights too, we won’t value humans anymore. In her recent piece, calmly titled “Scary Science Could Cause Human Extinction,” Somerville makes a strange argument about xenotransplants (i.e. animal-to-human organ transplants). First, she beats up on transhumanists and our support of life extension. She attempts to link life extension with genetically modified animal organ transplants. She then argues that the transplants will, get this, cause a mutant virus leading to a global pandemic obliterating humanity. I am not joking:
[Using genetically modified pig-hybrid organs] poses a risk, not only to transplant recipients, their sexual partners, and their families, but also, possibly, to the public as a whole. An animal virus or other infective agent could be transferred to humans, with potentially tragic results – not just for the person who received the organ but for other people, who could subsequently be infected. And there might be a very remote possibility that it could wipe out the human race.
Somerville’s argument abuses the word “potentially” and its synonyms in a desperate attempt to draw a link in the reader’s mind between xenotransplants and a cataclysmic plague. Human-to-human disease transmission during transplants is extremely low, and the genetic differences between humans and animals, even hybrids, would lower the risk all the more. Martine Rothblatt (a Fellow at the Institute for Ethics and Emerging Technologies) wrote a whole book, Your Life or Mine, addressing the fears around xenotransplantation. In short, Somerville’s concerns about xenotransplantation are not based in science, but in bioLuddite hysteria. Somerville’s case against xenotransplantation is in terminal condition already, and things only get worse from here.
Independence Day has one of my favorite hero duos of all time: Will Smith and Jeff Goldblum. Brawn and brains, flyboy and nerd, working together to take out the baddies. It all comes down to one flash of insight from a drunk Goldblum after being chastised by his father. Cliché eureka! moments like Goldblum’s realization that he can give the mothership a “cold” are great until you realize one thing: if Goldblum hadn’t been as smart as he was, the movie would have ended much differently. No one in the film was even close to figuring out how to defeat the aliens. Will Smith was in a distant second place, and he had only discovered that they are vulnerable to face punches. The hillbilly who flew his jet fighter into the alien destruct-o-beam doesn’t count, because he needed a force-field-free spaceship for his trick to work. If Jeff Goldblum hadn’t been a super-genius, humanity would have been annihilated.
Every apocalyptic film seems to trade on the idea that there will be some lone super-genius to figure out the problem. In The Day The Earth Stood Still (both versions) Professor Barnhardt manages to convince Klaatu to give humanity a second look. Cleese’s version of the character had a particularly moving “this is our moment” speech. Though it’s eventually the love between a mother and child that triggers Klaatu’s mercy, Barnhardt is the one who opens Klaatu to the possibility. Over and over we see the lone super-genius helping to save the world.
Shouldn’t we want, oh, I don’t know, at least more than one super-genius per global catastrophe? I’d like to think so. And where, you may ask, might we get some more geniuses? We make them.