If there is anything the internet is good for beyond cat photos (see “le sneak” above), it is for arguing. In the spirit of elevating the discourse, I’m going to try to salvage the aftermath of my designer baby post, which itself was a response to Peter Lawler’s post. In the process, I’ll explain to you exactly how social conservatives view the human enhancement debate.
A quick recap: Peter Lawler wrote a post at Big Think about Designer Babies and how they pose a threat to the middle class. I responded with a brilliant rebuttal that displayed my rapier wit and rhetorical dynamism. Now, the chaps at The New Atlantis’s Futurisms are unhappy with how I portrayed George W. Bush’s President’s Council on Bioethics and Peter Lawler in that magnificent post. Peter Lawler also “responded” to me by block-quoting the arguments of the blogger Minerva. Minerva made some astute comments about the social ramifications of human enhancement and worried I was not considering them; Lawler took her points and used them as a springboard to describe me as “intolerantly judgmental.” What did I say about religion again? Let’s re-read my artful prose:
I have a very, very hard time disagreeing with Haraway that teaching creationism is a form of abuse. Any religious fundamentalism (funny how Lawler neglects Islam, Judaism, and Protestantism) is a pestilence. Believe in whatever Supreme Being you so desire, just don’t attempt to derive logic or laws that govern the rest of us from the fictive texts you hold so dear.
Man, that’s great. I claim that fundamentalists teaching their children the Earth is 6000 years old is awful and borderline neglect; Lawler argues that makes me intolerant. He is wrong. Let me be clear: I do not believe those who are religious are stupid, abusive, or bad parents. I believe it is fundamentalists who teach their children Creationism: that evolution is not real, that the Earth is 6000 years old, or that Noah forgot the dinosaurs. Fundamentalists of all religions also attempt to impose their beliefs by law, and that should be opposed at every turn. Finally, I grew up Christian, have studied religion more than is probably healthy, and remain far more agnostic than atheist. Let’s drop the “he hates religion” canard and address the actual claims against engineering.
On that note, let me first address Minerva’s concerns about human enhancement, as they are actually cogent and relevant. To begin, Minerva, I agree with you. Enhancement is eugenics. I’ve said it before and I’ll say it again, I support eugenics. Now let me tell you why.
I am a scientist and academic by day, but by night I’m increasingly called upon to talk about transhumanism and the Singularity. Last year, I was science advisor to Caprica, a show that explored relationships between uploaded digital selves and real selves. Some months ago I participated in a public panel on “Mutants, Androids, and Cyborgs: The science of pop culture films” for Chicago’s NPR affiliate, WBEZ. This week brings a panel at the Director’s Guild of America in Los Angeles, entitled “The Science of Cyborgs” on interfacing machines to living nervous systems.
The latest panel to be added to my list is a discussion about the first transhumanist opera, Tod Machover’s “Death and the Powers.” The opera is about an inventor and businessman, Simon Powers, who is approaching the end of his life. He decides to create a device (called The System) into which he can upload himself (hmm, I wonder who this might be based on?). After Act 2, the entire set, including a host of OperaBots and a musical chandelier (created at the MIT Media Lab), becomes the physical manifestation of the now incorporeal Simon Powers, whose singing we still hear but who has disappeared from the stage. Much of the opera explores how his relationships with his wife and daughter change post-uploading. His daughter and wife ask whether The System is really him. They wonder if they should follow his pleas to join him, and whether life will still be meaningful without death. The libretto, by the renowned Robert Pinsky, renders these questions in beautiful poetry. It will open in Chicago in April.
These experiences have been fascinating. But I can’t help wondering, what’s with all the sudden interest in transhumanism and the singularity?
Are designer babies a danger to the middle class? Should we, as a society, specially breed children for submission to the Achievatron to defeat Chinese mothers and live up to the genetic “Sputnik Moment” in which we find ourselves? Will designer babies be atheists? Peter Lawler, an ostensibly smart person, seems to think so! If I am translating his compassionate conservative gibberish properly, Lawler is under the distinct impression that the goal behind designer babies is to make a more productive populace and that doing so will wreak havoc upon our families and lives.
Some background on Peter Lawler. He writes for Big Think, loves The New Atlantis (their writers at Futurisms are great sparring partners) and was on the President’s Council on Bioethics (PCBE). For those of you unfamiliar with Bush’s President’s Council on Bioethics, they were the brilliant minds behind halting stem cell research, focusing on it-worked-for-Bristol-Palin abstinence-only sex education, and being generally terrible philosophers and thinkers. Charles Krauthammer was asked his opinion of ethical issues, I kid you not. In short, the PCBE happily rubber-stamped the backwards and anti-science decrees of Bush and Cheney in an effort to placate the deranged Christian base of the Republican party. I tell you all of this lovely information so you have a working context for the luminary Big Think has decided to employ.
Thus, on to the question: will designer babies turn the USA into a culture of compulsory overachievement?
Humans and dolphins are inventing a common language together. This is big news!
In all the hoopla over the world ending due to being asteroid-smashed, man becoming immortal thanks to the singularity in 2045, and Watson the trivia-machine winning Jeopardy!, the story of budding interspecies communication got under-reported. Denise Herzing and her team with the Wild Dolphin Project have begun developing a language to allow humans and dolphins to communicate. If successful, the ability to communicate with dolphins would fundamentally change animal intelligence research, animal rights arguments, and our ability to talk to aliens.
Herzing and her team faced two huge problems when it came to talking to dolphins. The first problem is that the current state of animal language research creates an asymmetrical relationship between humans and the animals with whom they wish to communicate. The second problem is that (save for parrots) animal vocal cords cannot replicate human speech, and vice versa.
Most, if not nearly all, animal language research involves either studying how animals communicate with one another, or teaching them a human language to see if they can communicate with us. There is a problem with both methods: humans don’t learn much (if any) animal language in the process. Think of it this way: how many commands does the smartest dog you’ve met know? Some border collies, like Chaser, can learn upwards of 1000 words. Now how many words do you know in dog? Or parrot? How about gorilla or whale? Know any corvid? I bet you can at least read cuttlefish patterns, right? No? Of course, I’m being facetious, but with a purpose: up to this point, humans have always attempted to understand animal language by teaching animals how to talk to humans. The glaring flaw in this process of teaching animals to use human language is that it is nigh impossible to prove the animal is using language, not merely playing a very complex game of repeater.
There is a second, equally interesting problem. Think about your favorite science fiction series populated by aliens (for me, that’s a toss-up between Star Trek and Mass Effect). At some point in that series, an alien has introduced itself with a very un-alien name, like “Grunt.” The reason? “My real name is unpronounceable by humans.” Oddly, the problem never runs the other way: the alien species (why do we refer to aliens as “races,” by the way?) can always pronounce our human words. One of the only films I can think of that avoids this common sci-fi fallacy is District 9. Humans and prawns seem to be able to understand each other’s language in a rudimentary way, despite neither species being even remotely able to reproduce the other’s sounds. Cetaceans pose the same problem: humans cannot whistle, squeak, chortle, or pop the way a beluga or bottlenose can. Further, the higher squeals of some dolphins and the low rumbles of some whales fall outside the human auditory range. Dolphins can’t say a word in human languages, and we certainly can’t do more than parody the spectrum of cetacean sounds.
Which presents quite a question: How in the heck did Herzing figure out a way to both not teach the dolphins an anthropocentric language and ensure the language was speakable by both species?
Was it just me, or was there something faintly bizarre about yesterday’s historic ass-whooping of man by machine? Maybe it was Brad Rutter’s increasingly frantic swaying as Watson took his lead and asked for yet another clue in its stilted, strangely mistimed way. Perhaps it was the effect of the last corporate stiff of the event – in front of a stone wall backdrop that seemed a parody of cheesy corporate décor – telling us where Watson’s winnings will go, all while speaking with a monotone that would make Al Gore jealous. Or maybe it was Alex Trebek’s nonchalance after the historic event as he immediately turned his attention to pitching the next day’s all-teen tournament. Somehow I expected balloons and confetti to descend from the ceiling, maybe with the voice of Hal in the background—“I’m sorry Ken, but you were really improving from your performance yesterday. Would you mind taking out the garbage?” The most important intelligence test of machine versus man in decades sails by with hardly the rattle of a plastic fern.
Besides the very impressive technical achievement of Watson, IBM should be congratulated for managing to turn three episodes of Jeopardy! into a three-episode-long infomercial for their brand. We saw breathless executives tell us how Watson was a real game-changer for medicine, genomics, and spiky hairdos for avatars. We saw the lead engineers puzzling over mathematical squiggles written on staggered layers of sliding glass panels (something we’ve seen in an Intel commercial before when it was necessary for a visual joke to work, and so obviously useless for doing real work that it seems an insult to viewers in this context).
People who work in robotics prefer not to highlight a reality of our work: robots are not very reliable. They break, all the time. This applies to all research robots, which typically flake out just as you’re giving an important demo to a funding agency or someone you’re trying to impress. My fish robot is back in the shop, again, after a few of its very rigid and very thin fin rays broke. Industrial robots, such as those you see on car assembly lines, can only do better by operating in extremely predictable, structured environments, doing the same thing over and over again. Home robots? If you buy a Roomba, be prepared to adjust your floor plan so that it doesn’t get stuck.
What’s going on? The world constantly throws curveballs at robots that their designers never anticipated. In a novel approach to this problem, Josh Bongard has recently shown how we can use the principles of evolution to make a robot’s “nervous system”—I’ll call it the robot’s controller—robust against many kinds of change. This study was done using large amounts of computer simulation time (it would have taken 50–100 years on a single computer), running a program that can simulate the effects of real-world physics on robots.
What he showed is that if we force a robot’s controller to work across widely varying robot body shapes, the robot can learn faster, and be more resistant to knocks that might leave your home robot a smoking pile of motors and silicon. It’s a remarkable result, one that offers a compelling illustration of why intelligence, in the broad sense of adaptively coping with the world, is about more than just what’s above your shoulders. How did the study show it?
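The core idea, stripped of all the physics, can be sketched in a few lines. To be clear, this is a toy illustration, not Bongard's actual method: the "bodies," the fitness function, and every number below are made up for the example. The one thing the sketch does share with the study is the trick of scoring a single controller against many different body shapes (here, the worst-case score), so that evolution can't overfit the controller to one morphology.

```python
import random

random.seed(1)

# Hypothetical body parameters (think: different limb lengths).
BODIES = [0.5, 1.0, 2.0]
TARGET = 1.0  # the behavior we want regardless of body shape

def fitness(controller, body):
    """Higher is better: how close this controller drives this body to TARGET."""
    return -abs(controller * body - TARGET)

def robust_fitness(controller):
    # Score against the WORST body, so a controller that only
    # works on one shape is penalized.
    return min(fitness(controller, b) for b in BODIES)

def evolve(generations=200, pop_size=20):
    # A "controller" here is just one number; in real work it would
    # be a neural network's weights.
    population = [random.uniform(0, 2) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=robust_fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Refill the population with mutated copies of the survivors.
        population = survivors + [s + random.gauss(0, 0.1) for s in survivors]
    return max(population, key=robust_fitness)

best = evolve()
```

In this toy landscape the best worst-case controller sits between the value that suits the small body and the value that suits the large one, which is the point: evolving against varied morphologies pushes the controller toward solutions that keep working when the body changes.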
Brian Christian is an exemplar of the human species. In 2009, Christian participated in the annual Loebner Prize competition, which is based on Alan Turing’s eponymous test for determining if a computer is able to “think” like a human. Christian did not submit an A.I. he had programmed, but his own mind. Christian was a “confederate,” that is, one of the humans representing humanity in the competition. Five A.I. programs and five humans compete to be judged the most human:
During the competition, each of four judges will type a conversation with one of us for five minutes, then the other, and then will have 10 minutes to reflect and decide which one is the human. Judges will also rank all the contestants—this is used in part as a tiebreaking measure. The computer program receiving the most votes and highest ranking from the judges (regardless of whether it passes the Turing Test by fooling 30 percent of them) is awarded the title of the Most Human Computer.
What makes the competition so intriguing is that, as all contestants are ranked, be they human or computer, there is not only an award for the Most Human Computer, but also an award for the Most Human Human. Brian Christian is one of the select few humans to have earned the accolade. He describes his experience in the competition in his outstanding article “Mind vs. Machine” in The Atlantic. The article presents a snippet of what will surely be a wonderful book, The Most Human Human.
Like Sherry Turkle, Christian argues that machines are calling our humanity into stark relief. Yet he sees human-like computers not as automatons dragging us into banality, but as imperfect mirrors, reminding us of what makes us human by what they cannot reflect. I suspect it’s Christian’s double-life as a science journalist and poet that drew him to consider our dual-natured human brain:
Perhaps the fetishization of analytical thinking, and the concomitant denigration of the creatural—that is, animal—and bodily aspects of life are two things we’d do well to leave behind. Perhaps at last, in the beginnings of an age of AI, we are starting to center ourselves again, after generations of living slightly to one side—the logical, left-hemisphere side. Add to this that humans’ contempt for “soulless” animals, our unwillingness to think of ourselves as descended from our fellow “beasts,” is now challenged on all fronts: growing secularism and empiricism, growing appreciation for the cognitive and behavioral abilities of organisms other than ourselves, and, not coincidentally, the entrance onto the scene of an entity with considerably less soul than we sense in a common chimpanzee or bonobo—in this way AI may even turn out to be a boon for animal rights.
Among many conclusions Christian draws is that to be more human you must be yourself. But this is no idle command. The process of being oneself is an active, conscious, and, in some cases, laborious task. Consider your average conversation at a cocktail party – safe topics, non-confrontational questions, scripted answers. Part of Christian’s message, it seems, is not that we should worry about a computer sounding human, but that we humans may make the task too easy. So go forth and be quirky, odd, unique, expressive, honest, clever, eccentric, and above all yourself; in a phrase, be more human.
Image of The Most Human Human via Random House
Can you have an emotional connection with a robot? Sherry Turkle, Director of the MIT Initiative on Technology and Self, believes you certainly could. Whether or not you should is the question. People, especially children, project personalities and emotions onto rudimentary robots. As the Chronicle of Higher Education article on her shows, the result of believing a robot can feel is not always happy:
One day during Turkle’s study at MIT, Kismet malfunctioned. A 12-year-old subject named Estelle became convinced that the robot had clammed up because it didn’t like her, and she became sullen and withdrew to load up on snacks provided by the researchers. The research team held an emergency meeting to discuss “the ethics of exposing a child to a sociable robot whose technical limitations make it seem uninterested in the child,” as Turkle describes in [her new book] Alone Together.
We want to believe our robots love us. Movies like Wall-E, The Iron Giant, Short Circuit and A.I. are all based on the simple idea that robots can develop deep emotional connections with humans. For fans of the Half-Life video game series, Dog, a large scrapheap monstrosity with a penchant for dismembering hostile aliens, is one of the most lovable and loyal characters in the game. Science fiction is packed with robots that endear themselves to us, such as Data from Star Trek, the replicants in Blade Runner, and Legion from Mass Effect. Heck, even R2-D2 and C-3PO seem endeared to one another. And Futurama has a warning for all of us.
Yet these lovable mechanoids are not what Turkle is critiquing. Turkle is no Luddite, and does not strike me as a speciesist. What Turkle is critiquing is contentless performed emotion. Robots like Kismet and Cog are representative of a group of robots where the brains are second to bonding. Humans have evolved to react to subtle emotional cues that allow us to recognize other minds, other persons. Kismet and Cog have rather rudimentary A.I., but very advanced mimicking and response abilities. The result is they seem to understand us. Part of what makes HAL-9000 terrifying is that we cannot see it emote. HAL simply processes and acts.
On the one hand, we have empty emotional aping; on the other, faceless super-computers. What are we to do? Are we trapped between the options of the mindless bot with the simulated smile or the sterile super-mind calculating the cost of lives?
We all have our favorite capacity or organ that we fault modern-day AI for lacking, the thing we think machines need before they can be truly intelligent. For some it’s consciousness; for others it’s common sense, emotion, heart, or soul. What if it came down to a gut? What if our AI needs the capacity to get hungry, and to slake that hunger with food, before the next real breakthrough? There’s some new information on the role of gut microbes in brain development that’s worth some mental mastication in this regard (PNAS via PhysOrg).