A major argument against human enhancement is that most enhancements won’t be beneficial if everyone is enhanced. Being tall, for example, is only beneficial if you’re taller than most other people. In terms of competitive advantage, nearly any enhancement you look at fails the zero-sum test. Better, stronger muscles? Too bad, everyone else has those, so you won’t be an athletic superstar. Whiz-bang intelligence? Big deal, MIT just raises its entrance requirements to compensate, so only the most brilliant among a population of geniuses get in. If all boats rise, you don’t benefit, right?
An excellent example of this mindset can be found in The Incredibles. My love of Pixar is not a mystery to anyone. However, one of the lines that bothers me most in any of their films is Syndrome’s motivating thesis in The Incredibles. Syndrome (Buddy Pine) is a once-in-a-generation genius who, born without superpowers like those of Elastigirl and Mr. Incredible, builds technology that enables him to be superhuman. In short, Syndrome is what would happen if Tony Stark had been bullied as a kid and told by Captain America to let the big boys take care of everything.
When “monologuing” (the meta humor in the movie is fantastic), Syndrome betrays the kernel of his motivation to be a super villain. His goal is to neutralize those with superpowers (aka “supers”) so that when his robot attacks the city, he can be the sole savior. After being crowned a hero when the supers fail, he will sell his own gizmos and gadgets — rocket boots and zero-point energy among other things — to anyone who wants them. Thereby, he will give every person the opportunity to be super. And, by his logic, “When everyone is super, then no one will be.”
We can apply Syndrome’s concept to cognitive enhancement. That is, “When everyone is gifted and talented, no one will be.” Buddy, you are mistaken. Ender’s Game explains why.
Ethics has a bizarre blind spot around parents and children. For no justifiable reason that I can discern, we deem it perfectly tolerable for a parent to decide unilaterally to raise their child genderless, or under the Tiger Mother method, or laissez-faire, yet we recoil in horror at the idea of someone “testing” one of these parental styles on a child. Recall that there is no test to become a parent, no minimum qualification or form of licensing. In fact, if you are so irresponsible as to unintentionally have a child you do not want and cannot support, you have more of a right (and obligation) to rear that child than a stranger with the means and desire to give that child a better life.
We erroneously connect the ability to reproduce with the ability to rear, both in our social norms and in our laws. As adoption, IVF, sperm/egg donation, and surrogacy, along with new family structures, challenge the assumption that the person who provides the gametes or the womb is also the person who will teach the child to ride a bicycle, we need to investigate the impact of perpetuating the idea that reproducing and rearing are linked.
I would like to test this reproduction-rearing correlation with a thought experiment. The details appear below the fold, but the conclusion is as follows: it would be ethically permissible for a scientist to adopt a large group of children and then perform specific, non-harmful, nature-vs-nurture social experiments on those children. My idea comes from an interview by Charles Q. Choi at Too Hard for Science? with Steven Pinker about just such an experiment:
There is one morally repugnant line of thought Pinker strenuously objects to that could resolve this question. “Basically, every nature-nurture debate could be settled for good if we could raise a group of children in a closed environment of our own design, the way we do with animals,” he says. . .
“The biological basis of sex differences could be tested by dressing babies identically, hiding their sex from the people they interact with, and treating them identically, or better still, dividing them into four groups — boys treated as boys, boys treated as girls, girls treated as girls, girls treated as boys,” he notes. . .
“There’s no end to the ethical horrors that could be raised by this exercise,” Pinker says.
“In the sex-difference experiment, could we emasculate the boys at different ages, including in utero, and do sham operations on the girls as a control?” Pinker asks. “In the language experiment, could we ‘sacrifice’ the children at various ages, to use the common euphemism in animal research, and dissect their brains?”
“This is a line of thought that is morally corrosive even in the contemplation, so your thought experiments can go only so far,” he says.
So let’s test the limits of Pinker’s last line. Ethics is rife with horrific thought experiments designed to out our biases and assumptions, and I intend to use one to expose our bias that reproductive capacity equals rearing capacity. That is, merely because you can have a kid doesn’t mean you should be allowed to decide how to raise it. Using three scenarios, I’ll prove that a team of scientists adopting a large group of children with the dual intent of raising happy, healthy children while also conducting non-surgical, non-invasive sociological experiments would be ethically permissible.
The nerd echo chamber is reverberating this week with the furious debate over Charlie Stross’ doubts about the possibility of an artificial “human-level intelligence” explosion – also known as the Singularity. As currently defined, the Singularity is a future event in which artificial intelligence reaches human-level intelligence. At that point, the AI (i.e. AI n) will reflexively begin to improve itself and build AIs more intelligent than itself (i.e. AI n+1), resulting in an exponential explosion of intelligence toward near-deity levels of super-intelligent AI. After reading over the debates, I’ve come to the conclusion that both sides miss a critical element of the Singularity discussion: the human beings. Putting people back into the picture allows for a vision of the Singularity that simultaneously addresses several philosophical quandaries. To get there, however, we must first re-trace the steps of the current debate.
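The recursive dynamic that definition assumes can be sketched as a toy model. To be clear, this is purely illustrative: the function name and the 50% per-generation “gain” are my own made-up assumptions, not anything from the Singularitarian literature.

```python
# Toy model of the "intelligence explosion" argument: each AI generation
# designs a successor somewhat smarter than itself. If the improvement is
# proportional to current intelligence, growth compounds exponentially.

def intelligence_explosion(human_level=1.0, gain=0.5, generations=10):
    """Return the intelligence level of each AI generation.

    `gain` is the hypothetical fractional improvement each generation
    achieves over its predecessor -- a pure assumption of the argument.
    """
    levels = [human_level]
    for _ in range(generations):
        levels.append(levels[-1] * (1 + gain))  # AI n builds AI n+1
    return levels

levels = intelligence_explosion()
# At a 50% gain per step, ten generations compound to 1.5**10, i.e.
# roughly 58x the human baseline -- hence "explosion."
```

The point of the sketch is only that the argument’s conclusion (explosive growth) is baked into its premise (proportional self-improvement); whether that premise holds is exactly what the debate below contests.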
I’ve already made my case for why I’m not too concerned, but it’s always fun to see what fantastic fulminations are being exchanged over our future AI overlords. Sparking the flames this time around is Charlie Stross, who knows a thing or two about the Singularity and futuristic speculation. It’s the kind of thing this blog exists to cover: a science fiction author tackling the rational scientific possibility of something about which he has written. Stross argues in a post entitled “Three arguments against the singularity” that “In short: Santa Claus doesn’t exist.”
This is my take on the singularity: we’re not going to see a hard take-off, or a slow take-off, or any kind of AI-mediated exponential outburst. What we’re going to see is increasingly solicitous machines defining our environment — machines that sense and respond to our needs “intelligently”. But it will be the intelligence of the serving hand rather than the commanding brain, and we’re only at risk of disaster if we harbour self-destructive impulses.
We may eventually see mind uploading, but there’ll be a holy war to end holy wars before it becomes widespread: it will literally overturn religions. That would be a singular event, but beyond giving us an opportunity to run [Robert] Nozick’s experience machine thought experiment for real, I’m not sure we’d be able to make effective use of it — our hard-wired biophilia will keep dragging us back to the real world, or to simulations indistinguishable from it.
I am thankful that many of the fine readers of Science Not Fiction are avowed skeptics and raise a wary eyebrow to discussions of the Singularity. Given his stature in the science fiction and speculative science community, Stross’ comments elicited quite an uproar. Those who are believers (and it is a kind of faith, regardless of how much Bayesian analysis one does) in the Rapture of the Nerds have two holy grails which Stross unceremoniously dismissed: the rise of super-intelligent AI and mind uploading. As a result, a few commentators on emerging technologies squared off for another round of speculative slap fights. In one corner, we have Singularitarians Michael Anissimov of the Singularity Institute for Artificial Intelligence and AI researcher Ben Goertzel. In the other, we have the excellent Alex Knapp of Forbes’ Robot Overlords and the brutally rational George Mason University (my alma mater) economist and Oxford Future of Humanity Institute contributor Robin Hanson. I’ll spare you all the back and forth (and all of Goertzel’s infuriating emoticons) and cut to the point being debated. To paraphrase and summarize, the argument is as follows:
1. Stross’ point: Human intelligence has three characteristics: embodiment, self-interest, and evolutionary emergence. AI will not/cannot/should not mirror human intelligence.
2. Singularitarian response: Anissimov and Goertzel argue that human-level general intelligence need not function or arise the way human intelligence has. With sufficient research and devotion to Saint Bayes, super-intelligent friendly AI is probable.
3. Skeptic rebuttal: Hanson argues A) “Intelligence” is a nebulous, ill-defined catch-all like “betterness.” The ambiguity of the word renders the claims of Singularitarians difficult, if not impossible, to disprove (i.e. special pleading); Knapp argues B) Computers and AI are excellent at specific types of thinking and at augmenting human thought (i.e. Kasparov’s Advanced Chess). Even if one grants that AI could reach human or beyond-human level, the nature of that intelligence would be neither independent nor self-motivated nor sufficiently well-rounded; as a result, “bootstrapping” intelligence explosions would not happen as Singularitarians foresee.
In essence, the debate runs: “Human intelligence is like this, AI is like that, and never the twain shall meet. But can they parallel one another?” The premise is false, which makes the question useless. What we need is a new premise. Here is what I propose instead: the Singularity will be the result of a convergence and connection of human intelligence and artificial intelligence.
Many of us have one or two political issues surrounding our bodies that get us fired up. Many of you reading this right now probably have some hot-button issue on your mind. Maybe it’s abortion, or recreational drug usage, or marriage rights, or surrogate pregnancy, or assisted suicide, or sex work, or voluntary amputation, or gender reassignment surgery.
For each of these issues, there are four words that define our belief about our rights: “My body, my choice.” How you react to those words determines which side of any of those debates you are on. That’s just the thing, though – there aren’t a bunch of little debates; there is one big debate being argued on multiple fronts. All of these issues find their home in my field of philosophy: bioethics. And within the bioethics community, there is a small contingent that supports a person’s right to choose what to do with their body in every single one of those examples. Transhumanists make up part of that contingent.
If you are pro-choice on abortion or think that gender reassignment surgery is an option everyone should have, you agree with transhumanism on at least one issue. Many current political arguments are skirmishes and turf battles in what is a movement toward what one might call somatic rights. In some cases the law is clear, as it is with marriage rights or drug usage, and the arguments are over whether or not to remove, amend, or change the law. Other cases are so ambiguous that the law is struggling to define itself, as with surrogate pregnancy and voluntary amputation. And sooner or later (I’ve given up on guessing time-frames), instead of merely arguing over what we’re allowed to do with the body we’re born with, there will be debates about our rights to choose what kind of body we have. By looking at the futuristic ideas of genetic engineering and robotic prosthetic technology, we can understand how transhumanism maximizes the “my body, my choice” mantra.
Designers of prosthetics and artificial organs have long tried to replicate the human body. From the earliest peg legs to the most modern robotic limbs, the prosthetics we make look like the body parts that need replacing. Lose a hand? Dean Kamen’s DEKA arm, aka the “Luke arm,” is a robotic prosthesis that will let you grasp an egg or open a beer. The Luke arm is a cutting-edge piece of technology based on a backward idea – let’s replace the thing that went missing by replicating it with metal and motors. Whether it’s an artificial leg or a glass eye, prostheses often seek to reproduce not only the function of the body part, but the form and feel as well.
There are good reasons to want to reproduce form and feel along with function. The first is that our original bits and pieces work quite well. The human body as a whole is a natural marvel, let alone the immense complexity and dexterity of our hands, eyes, hearts, and legs. No need to reinvent the wheel; just replicate the natural model you’ve been given. The second, less obvious reason is that we as a society have been, and remain, deeply uncomfortable with amputees and prosthetics. Many people don’t know what to do when faced with an artificial arm or leg. I wish it were different, but it largely isn’t. So prostheses are designed to look like whatever it is they replace, to hide the fact that the arm or leg or eye isn’t biological.
That methodology is being challenged by a few recent innovations: Össur’s now famous Cheetah blades, Kaylene Kau’s tentacle arm, and the artificial heart with no heartbeat. These new prostheses and artificial organs are the result of asking “What does this piece allow us to do?” rather than “How do we build an artificial one?” The implications for how humans will view themselves in the coming decades are monumental.
Steve Rogers, the man who would become Captain America, was not subjected to an accidental burst of gamma radiation or the bite of a radioactive spider. Instead, he willingly enlisted and subjected himself to an experimental process for the creation of super-soldiers. His superpowers were deliberate and intended. However, the circumstances of Captain America’s enlistment into the army are, at best, questionable. After my chat with Maggie Koerth-Baker on bloggingheads, I got to thinking about how the super-soldier experiment holds up under the scrutiny of medical ethics. I’m not so sure that Steve Rogers gave his consent to the experiment in an informed and uncoerced manner.
For any medical research to be considered ethical it must adhere to basic standards. A global standard for medical ethics is the Declaration of Helsinki. Devised and published by the World Medical Association in 1964, the Declaration of Helsinki is a guiding framework for all medical research involving human beings. It has been revised over the years to meet modern needs, with the sixth and most recent revision published in 2008. Three points of the Declaration apply directly to the type of experimentation done to create Captain America. They are:
#6. In medical research involving human subjects, the well-being of the individual research subject must take precedence over all other interests.
#8. In medical practice and in medical research, most interventions involve risks and burdens.
#9. Medical research is subject to ethical standards that promote respect for all human subjects and protect their health and rights. Some research populations are particularly vulnerable and need special protection. These include those who cannot give or refuse consent for themselves and those who may be vulnerable to coercion or undue influence.
Can you really say with confidence that General Chester Phillips had Rogers’ best interests in mind, that Rogers wasn’t under any sort of coercion (coughpropagandacough), and that the good ol’ US-of-A wasn’t bending some rules to build a better soldier?
Let’s take each of these points from the Declaration of Helsinki in turn.
Dying is a touchy subject. Euthanasia makes people upset. Whichever side of the debate you are on, you are caught between the hard place of human suffering and the rock of informed, autonomous free choice. Euthanasia is really a debate about not dying of natural causes. For so long, we’ve understood death to be OK only if it was natural or demonstrably accidental. Anything else was murder, manslaughter, or war. Not only God, but we humans, have set our canon against self-slaughter. “Voluntary active euthanasia,” as Daniel Brock denotes it, is not natural, nor is it demonstrably accidental. Thus, we instinctively categorize it as morally wrong.
Instead of attempting to root out the source of that instinct and investigating whether voluntary active euthanasia actually violates morality, many use the resulting blurred line as reason enough to oppose a chosen death. Ross Douthat of the New York Times argues that Jack “Dr. Death” Kevorkian’s efforts to assist those suffering created a moral slippery slope:
And once we allow that such a right exists, the arguments for confining it to the dying seem arbitrary at best. We are all dying, day by day: do the terminally ill really occupy a completely different moral category from the rest? A cancer patient’s suffering isn’t necessarily more unbearable than the more indefinite agony of someone living with multiple sclerosis or quadriplegia or manic depression. And not every unbearable agony is medical: if a man losing a battle with Parkinson’s disease can claim the relief of physician-assisted suicide, then why not a devastated widower, or a parent who has lost her only child?
Note that Douthat doesn’t consider Parkinson’s a medical disease. But more to the point – Douthat’s argument is that we don’t know what degree of suffering makes the choice to die morally palatable. Degree of suffering is the wrong criterion: none but the sufferer can define it, and it can never be truly communicated. What is at stake here is not only the free and informed choice of the dying, but our very understanding of what it means to “die of natural causes.”
Do you ever worry that Steve Rogers (aka Captain America) wasn’t really giving informed consent when he agreed to become enhanced? Or are you curious as to why someone might choose a bionic hand over a real one? The awesome Maggie Koerth-Baker of boingboing.net and I had some of the same questions. We chat about the ethics of superheroes and our perception of science in this week’s Science Saturday on bloggingheads.tv. Enjoy!