Archive for June, 2011

Is it OK to Adopt Kids and Perform Social Experiments On Them?

By Kyle Munkittrick | June 28, 2011 5:05 pm

Ethics has a bizarre blind spot around parents and children. For no justifiable reason that I can discern, we deem it perfectly tolerable for a parent to decide unilaterally to raise their child genderless, or under the Tiger Mother or laissez-faire method of parenting, but recoil in horror at the idea of someone “testing” one of these parental styles on a child. Recall, there is no test to become a parent, no minimum qualification or form of licensing. In fact, if you are so irresponsible as to unintentionally have a child you do not want and cannot support, you have more of a right (and obligation) to rear that child than a stranger with the means and desire to give that child a better life.

We erroneously connect the ability to reproduce with the ability to rear, both in our social norms and in our laws. As adoption, IVF, sperm/egg donation, and surrogacy, along with new family structures, challenge the assumption that the person who provides the gametes or the womb is also the person who will teach the child to ride a bicycle, we need to investigate the impact of perpetuating the idea that there is a link between reproducing and rearing.

I would like to test this reproduce-rearing correlation with a thought experiment. The details of the thought experiment appear below the fold, but the conclusion is as follows: it would be ethically permissible for a scientist to adopt a large group of children and then perform specific, non-harmful, nature-vs-nurture social experiments on those children. My idea comes from an interview by Charles Q. Choi at Too Hard for Science? with Steven Pinker about just such an experiment:

There is one morally repugnant line of thought Pinker strenuously objects to that could resolve this question. “Basically, every nature-nurture debate could be settled for good if we could raise a group of children in a closed environment of our own design, the way we do with animals,” he says. . .

“The biological basis of sex differences could be tested by dressing babies identically, hiding their sex from the people they interact with, and treating them identically, or better still, dividing them into four groups — boys treated as boys, boys treated as girls, girls treated as girls, girls treated as boys,” he notes. . .

“There’s no end to the ethical horrors that could be raised by this exercise,” Pinker says.

“In the sex-difference experiment, could we emasculate the boys at different ages, including in utero, and do sham operations on the girls as a control?” Pinker asks. “In the language experiment, could we ‘sacrifice’ the children at various ages, to use the common euphemism in animal research, and dissect their brains?”

“This is a line of thought that is morally corrosive even in the contemplation, so your thought experiments can go only so far,” he says.

So let’s test the limits of Pinker’s last line. Ethics is rife with and wrought by horrific thought experiments designed to out our biases and assumptions. And I intend to use a thought experiment to expose our bias that reproductive capacity equals rearing capacity. That is, merely because you can have a kid doesn’t mean you should be allowed to decide how to raise it. Using three scenarios, I’ll prove that a team of scientists adopting a large group of children with the dual intent of raising happy and healthy children while also conducting non-surgical, non-invasive sociological experiments would be ethically permissible. Read More

CATEGORIZED UNDER: Biology, Philosophy, Psychology

The AI Singularity is Dead; Long Live the Cybernetic Singularity

By Kyle Munkittrick | June 25, 2011 9:45 am

The nerd echo chamber is reverberating this week with the furious debate over Charlie Stross’ doubts about the possibility of an artificial “human-level intelligence” explosion – also known as the Singularity. As currently defined, the Singularity will be an event in the future in which artificial intelligence reaches human-level intelligence. At that point, the AI (i.e. AI(n)) will reflexively begin to improve itself and build AIs more intelligent than itself (i.e. AI(n+1)), resulting in an exponential explosion of intelligence toward near-deity levels of super-intelligent AI. After reading over the debates, I’ve come to the conclusion that both sides miss a critical element of the Singularity discussion: the human beings. Putting people back into the picture allows for a vision of the Singularity that simultaneously addresses several philosophical quandaries. To get there, however, we must first re-trace the steps of the current debate.
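To make that recurrence concrete, here is a toy sketch in Python – purely illustrative, with the starting level, growth factor, and units being my own assumptions rather than anything from the debate – of the claim that each AI(n) builds a smarter AI(n+1), which is all the “exponential explosion” amounts to on paper:

    # Toy model of the recursive self-improvement loop described above.
    # Each AI(n) builds an AI(n+1) that is a constant factor "smarter";
    # the factor and the units are invented for illustration only.
    def intelligence_explosion(start=1.0, factor=1.5, generations=10):
        """Return the hypothetical intelligence level of each AI generation."""
        levels = [start]                        # AI(0): human-level, by assumption
        for _ in range(generations):
            levels.append(levels[-1] * factor)  # AI(n) designs AI(n+1)
        return levels

    for n, level in enumerate(intelligence_explosion()):
        print(f"AI({n}): {level:.2f}x human-level")

Geometric growth, in other words; whether anything in the real world would actually follow that curve is exactly what is in dispute below.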

I’ve already made my case for why I’m not too concerned, but it’s always fun to see what fantastic fulminations are being exchanged over our future AI overlords. Sparking the flames this time around is Charlie Stross, who knows a thing or two about the Singularity and futuristic speculation. It’s the kind of thing this blog exists to cover: a science fiction author tackling the rational scientific possibility of something about which he has written. Stross argues, in a post entitled “Three arguments against the singularity,” that “In short: Santa Claus doesn’t exist.”

This is my take on the singularity: we’re not going to see a hard take-off, or a slow take-off, or any kind of AI-mediated exponential outburst. What we’re going to see is increasingly solicitous machines defining our environment — machines that sense and respond to our needs “intelligently”. But it will be the intelligence of the serving hand rather than the commanding brain, and we’re only at risk of disaster if we harbour self-destructive impulses.

We may eventually see mind uploading, but there’ll be a holy war to end holy wars before it becomes widespread: it will literally overturn religions. That would be a singular event, but beyond giving us an opportunity to run [Robert] Nozick’s experience machine thought experiment for real, I’m not sure we’d be able to make effective use of it — our hard-wired biophilia will keep dragging us back to the real world, or to simulations indistinguishable from it.

I am thankful that many of the fine readers of Science Not Fiction are avowed skeptics and raise a wary eyebrow to discussions of the Singularity. Given his stature in the science fiction and speculative science community, Stross’ comments elicited quite an uproar. Those who are believers (and it is a kind of faith, regardless of how much Bayesian analysis one does) in the Rapture of the Nerds have two holy grails which Stross unceremoniously dismissed: the rise of super-intelligent AI and mind uploading. As a result, a few commentators on emerging technologies squared off for another round of speculative slap fights. In one corner, we have Singularitarians Michael Anissimov of the Singularity Institute for Artificial Intelligence and AI researcher Ben Goertzel. In the other, we have the excellent Alex Knapp of Forbes’ Robot Overlords and the brutally rational George Mason University (my alma mater) economist and Oxford Future of Humanity Institute contributor Robin Hanson. I’ll spare you all the back and forth (and all of Goertzel’s infuriating emoticons) and cut to the point being debated. To paraphrase and summarize, the argument is as follows:

1. Stross’ point: Human intelligence has three characteristics: embodiment, self-interest, and evolutionary emergence. AI will not/cannot/should not mirror human intelligence.

2. Singularitarian response: Anissimov and Goertzel argue that human-level general intelligence need not function or arise the way human intelligence has. With sufficient research and devotion to Saint Bayes, super-intelligent friendly AI is probable.

3. Skeptic rebuttal: Hanson argues A) “Intelligence” is a nebulous catch-all, like “betterness,” that is ill-defined. The ambiguity of the word renders the claims of Singularitarians difficult/impossible to disprove (i.e. special pleading); Knapp argues B) Computers and AI are excellent at specific types of thinking and at augmenting human thought (i.e. Kasparov’s Advanced Chess). Even if one grants that AI could reach human or beyond-human level, the nature of that intelligence would be neither independent nor self-motivated nor sufficiently well-rounded and, as a result, “bootstrapping” intelligence explosions would not happen as Singularitarians foresee.

In essence, the debate is: “Human intelligence is like this, AI is like that, never the twain shall meet. But can they parallel one another?” The premise is false, which makes the question useless. So what we need is a new premise. Here is what I propose instead: the Singularity will be the result of a convergence and connection of human intelligence and artificial intelligence. Read More

Your Body, Your Choice: Fight for Your Somatic Rights

By Kyle Munkittrick | June 20, 2011 12:18 pm

“My body, my choice.” We hear that slogan constantly, but what the hell do those four words mean?

Many of us have one or two political issues surrounding our bodies that get us fired up. Many of you reading this right now probably have some hot-button issue on your mind. Maybe it’s abortion, or recreational drug usage, or marriage rights, or surrogate pregnancy, or assisted suicide, or sex work, or voluntary amputation, or gender reassignment surgery.

For each of these issues, there are four words that define our belief about our rights: “My body, my choice.” How you react to those words determines which side of any of those debates you are on. That’s just the thing, though – there aren’t a bunch of little debates, there is just one big debate being argued on multiple fronts. All of these issues find their home in my field of philosophy: bioethics. And within the bioethics community, there is a small contingent that supports a person’s right to choose what to do with their body in every single one of those examples. Transhumanists make up part of that contingent.

If you are pro-choice on abortion or think that gender reassignment surgery is an option everyone should have, you agree with transhumanism on at least one issue. Many current political arguments are skirmishes and turf battles in a larger movement toward what one might call somatic rights. In some cases the law is clear, as it is with marriage rights or drug usage, and the arguments are over whether to remove, amend, or change the law. Other cases are so ambiguous that the law is struggling to define itself, as with surrogate pregnancy and voluntary amputation. And sooner or later (I’ve given up on guessing time-frames), instead of merely arguing over what we’re allowed to do with the body we’re born with, there will be debates about our rights to choose what kind of body we have. By looking at the futuristic ideas of genetic engineering and robotic prosthetic technology, we can understand how transhumanism maximizes the “my body, my choice” mantra.

Read More

CATEGORIZED UNDER: Cyborgs, Politics, Robots, Transhumanism

Form Follows Function: Prosthetics and Artificial Organs that Break the Human Mold

By Kyle Munkittrick | June 16, 2011 9:45 am

Designers of prosthetics and artificial organs have long tried to replicate the human body. From the earliest peg legs to some of the most modern robotic limbs, the prosthetics we make look like the body parts they replace. Lose a hand? Dean Kamen’s DEKA arm, aka the “Luke arm,” is a robotic prosthesis that will let you grasp an egg or open a beer. The Luke arm is a cutting-edge piece of technology based on a backward idea – let’s replace the thing that went missing by replicating it with metal and motors. Whether it’s an artificial leg or a glass eye, prostheses often seek to reproduce not only the function of the body part, but the form and feel as well.

There are good reasons to want to reproduce form and feel along with function. The first reason is that our original bits and pieces work quite well. The human body as a whole is a natural marvel, let alone the immense complexity and dexterity of our hands, eyes, hearts, and legs. No need to reinvent the wheel; just replicate the natural model you’ve been given. The second, less obvious reason is that we as a society have been and remain deeply uncomfortable with amputees and prosthetics. Many people don’t know what to do when faced with an artificial arm or leg. I wish it were different, but it largely isn’t. So prostheses are designed to look like whatever it is they replicate, to hide the fact that the arm or leg or eye isn’t biological.

That methodology is being challenged by a few recent innovations: Össur’s now famous Cheetah blades, Kaylene Kau’s tentacle arm, and the artificial heart with no heartbeat. These new prostheses and artificial organs are a result of approaching the problem by asking “What does this piece allow us to do?” not “How do we build an artificial one?” The implications for how humans will view themselves in the coming decades are monumental. Read More

CATEGORIZED UNDER: Biotech, Cyborgs, Transhumanism

Ten Reasons We Are Seeing An Excess of Lists of Ten Things We Should Know

By Malcolm MacIver | June 14, 2011 8:10 pm

Lately I’ve noticed lots of articles with titles that are variations of “Ten Things You Should Know About X.” I became so convinced this was not just a figment of my paranoid imagination that I did a search for “10 things” OR “ten things” in Google News (with quotes) and was immediately rewarded with more than 676 hits. This is impressive, since Google News searches over a limited time horizon. The top hits were: “Mitt Romney’s the frontrunner: 10 things the first big Republican debate showed”, “10 Things Not to Do When Going Back on Gold”, “10 Things We Learned at UFC 131”, “Top 10 things to do in your backyard”, “Steve Jobs: ten things you didn’t know about the Apple founder”, and my personal favorite, “Ten things you need to know today”.

What accounts for this ten-centrism? My first thought is an old joke. You’ve probably heard it: there are 10 kinds of people, those who get binary numbers and those who don’t. Part of what I like about this joke is that it captures a bit of the arbitrariness of our penchant for counting in tens rather than twos. There is, on the other hand, the non-arbitrariness of how many bony appendages jut out of our pentadactyl palms. But a list of the “Two things you need to know today” doesn’t seem to do justice to the complexity of modern life. So herewith is my list of the Ten Reasons We Are Seeing An Excess of Lists of Ten Things We Should Know:

1. We don’t have time to read anymore. Knowing we are going to get just ten things to process is comforting in its promise not to drain our attention from facebook and twitter.

2. Ten is close to the size of our working memory. The size of our working memory, the amount of stuff we can recall from lists of things to which we’ve been recently exposed, is about seven (at least for numbers). I seem to recall there being a “plus or minus 2” factor here, in which case the upper limit for most of us mortals is nine items.

3. Since writers can’t make a living any more, we are sliding into an era of bullet point-ism. Anyone who has had a teacher who cares about writing has been warned that making lists of bullet points in our essays is no substitute for actual writing, in which thoughts are carefully connected to one another with transition sentences. That takes far too much time to work in any feasible business model for writers today (I’m trying not to use the word “nowadays” because the very same teacher who warned me not to write in bullet points also told me that this word was to be avoided). For one thing, writers have to compete with bloggers like me who write for basically nothing. Ergo, the era of “ten things you should know” articles, which are typically not much more than bullet points.

4. In many cases, there are more than ten things that you should know, or fewer than ten things that you should know. But ten, like “decades,” “centuries,” and other arbitrary anchors in the otherwise continuous flux of events and time, doesn’t have to be justified, because that’s what every other writer is chunking things we should know into.

5. It’s a way for pentadactyl animals to feel superior to unidactyl animals. No doubt if the planet were run by one-fingered/toed creatures, we would live in a George-Bush-like world of black and white. Downside: it takes longer to read “Top Ten” lists than “Top Two” lists. Over evolutionary timescales, this problem could result in unidactylism eventually reigning supreme.

6. At this point in the list, with four more to go, we enter the fat and boring midsection of the list of top ten things you should know about lists of ten things. It’s basically not remembered, so there’s really no point in putting anything here. Ditto for 7, and 8.

9. Because of the well-documented recency effect, it’s time to start having content in our list of ten things again. I recall reading an apropos adage in a publication like Business Week that was like a piña colada to my information-overloaded brain: “the value added is the information removed.” When it comes to digits, it seems that “the functionality added is the digits removed” – at least if our evolutionary history is any kind of guide. Our Devonian (350 million years ago) ancestors had 6-8 digits. In going down to five, and therefore lists of ten points, we’ve gone from fairly low-achieving vertebrates to the spectacular successes of most subsequent animals by reducing our digits to what’s really needed.

10. If we’ve maintained our concentration to this point in the list, we will be rewarded with a bit of humorous fluff that helps bind some of our anxiety about the essential meaninglessness of our lives, and — especially — our time spent on reading yet another list of ten things we should know.

Image: Logo of a home and garden show in Australia. Correction: “didactylism” in #5 changed to unidactylism – thanks to @Matt for pointing out the miscount!

CATEGORIZED UNDER: Aliens, Apocalypse, Geology

Captain America's Enlistment and Experimentation: Was It Ethical?

By Kyle Munkittrick | June 11, 2011 9:04 am

Steve Rogers, the man who would become Captain America, was not subjected to an accidental burst of gamma radiation or the bite of a radioactive spider. Instead, he willingly enlisted and subjected himself to an experimental process for the creation of super-soldiers. His superpowers were deliberate and intended. However, the circumstances of Captain America’s enlistment into the army are, at best, questionable. After my chat with Maggie Koerth-Baker on bloggingheads, I got to thinking about how the super-soldier experiment holds up under the scrutiny of medical ethics. I’m not so sure that Steve Rogers gave his consent to the experiment in an informed and uncoerced manner.

For any medical research to be considered ethical, it must adhere to basic standards. A global standard for medical ethics is the Declaration of Helsinki. Devised and published by the World Medical Association in 1964, the Declaration of Helsinki is a guiding framework for all medical research involving human beings. It has been revised over the years to meet modern needs, with the most recent, sixth revision published in 2008. Three points of the Declaration apply directly to the type of experimentation done to create Captain America. They are:

#6. In medical research involving human subjects, the well-being of the individual research subject must take precedence over all other interests.

#8. In medical practice and in medical research, most interventions involve risks and burdens.

#9. Medical research is subject to ethical standards that promote respect for all human subjects and protect their health and rights. Some research populations are particularly vulnerable and need special protection. These include those who cannot give or refuse consent for themselves and those who may be vulnerable to coercion or undue influence.

Can you really say with confidence that General Chester Phillips had Rogers’ best interests in mind, that Rogers wasn’t under any sort of coercion (coughpropagandacough), and that the good ol’ US-of-A wasn’t bending some rules to build a better soldier?

Let’s take each of these points from the Declaration of Helsinki in turn. Read More

CATEGORIZED UNDER: Comics, Movies, Philosophy
MORE ABOUT: Captain America

Euthanasia, Immortality, and The Natural Death Paradox

By Kyle Munkittrick | June 7, 2011 9:39 am

Dying is a touchy subject. Euthanasia makes people upset. Whichever side of the debate you are on, you are caught between the hard place of human suffering and the rock of informed, autonomous free choice. Euthanasia is really a debate about not dying of natural causes. For so long, we’ve understood death to be OK only if it was natural or demonstrably accidental. Anything else was murder, manslaughter, or war. Not only God, but we humans, have set our canon against self-slaughter. “Voluntary active euthanasia,” as Daniel Brock terms it, is not natural, nor is it demonstrably accidental. Thus, we instinctively categorize it as morally wrong.

Instead of attempting to root out the source of that instinct and investigating whether voluntary active euthanasia actually violates morality, many treat the blurred line it creates as reason enough to oppose a chosen death. Ross Douthat of the New York Times argues that Jack “Dr. Death” Kevorkian’s efforts to provide assistance to those suffering created a moral slippery slope:

And once we allow that such a right exists, the arguments for confining it to the dying seem arbitrary at best. We are all dying, day by day: do the terminally ill really occupy a completely different moral category from the rest? A cancer patient’s suffering isn’t necessarily more unbearable than the more indefinite agony of someone living with multiple sclerosis or quadriplegia or manic depression. And not every unbearable agony is medical: if a man losing a battle with Parkinson’s disease can claim the relief of physician-assisted suicide, then why not a devastated widower, or a parent who has lost her only child?

Note that Douthat doesn’t consider Parkinson’s a medical disease. But more to the point – Douthat’s argument is that we don’t know what degree of suffering makes the choice to die morally palatable. Degree of suffering is the wrong criterion. None but the sufferer can define it and it can never be truly communicated. What is at stake here is not only the free and informed choice of the dying, but our very understanding of what it means to “die of natural causes.” Read More

CATEGORIZED UNDER: Aging (or Not), Biology, Philosophy

Captain America, Voluntary Amputation, and Rogue Scientists.

By Kyle Munkittrick | June 4, 2011 10:10 am

Do you ever worry that Steve Rogers (aka Captain America) wasn’t really giving informed consent when he agreed to become enhanced? Or are you curious as to why someone might choose a bionic hand over a real one? The awesome Maggie Koerth-Baker of boingboing.net and I had some of the same questions. We chat about the ethics of superheroes and our perception of science in this week’s Science Saturday on bloggingheads.tv. Enjoy!

A Glimpse of Cybernetic Augmentation for the Masses

By Kyle Munkittrick | June 2, 2011 11:57 am

Deus Ex 3: Human Revolution is a cyberpunk video game coming out later this year. I, for one, am pretty excited. Set in the near future, the game is a prequel to the original Deus Ex. For those of you who aren’t video game fanatics, the first Deus Ex is a cyberpunk conspiracy thriller that follows a transhuman protagonist, JC Denton, as he tries to keep the world from spiraling into Armageddon. Robots, A.I., genetically modified animals, and cyborgs aplenty help and hinder him. Denton himself has several nano-augmentations that give him superhuman abilities (e.g. cloaking, super-strength). Deus Ex 3 explores the rise of general cybernetic augmentation and the corporate espionage that accompanies it. As part of the viral ad campaign, you can access the website for Sarif Industries, the leading manufacturer of cybernetic prosthetics. I love the boilerplate:

No one should ever have to give up a normal life because of a random incident, or indeed, lose a dream over a physical limitation. So believes David Sarif, idealist, philanthropist, founder and CEO of Sarif Industries. Pursuing his belief, Mr. Sarif acquired a failing Detroit auto factory in 2007 and repurposed it for the automated manufacture of prosthetics.

The weirdness of the site comes from its nearness to reality. There are links for the stock price and pictures of the interior of the main headquarters. There is even an ethics statement!

A standout piece is the ad for Sarif’s products (cyber hands, eyes, and arms), which seemed like a perfect pastiche of every pharmaceutical ad I’ve seen in the past year: testimonials by attractive people in bright lighting engaging in their favorite cultural or outdoor activities, like rock climbing and football throwing (though mercifully not through a tire swing). Also interesting is the news feed, which features headlines I had to research a bit to confirm they aren’t quite true. The “road to here” section also provides a strange alt-history of augmentation and prosthetics that gives you the feeling this all might just be right around the corner. The site’s slickness and dedication to near-reality make it an eerie predictor of what a future prosthetics company may actually look like.

Follow Kyle on his personal blog and on facebook and twitter.

Image via Sarif Industries

Our Discomfort with the Ungendered

By Kyle Munkittrick | June 2, 2011 7:57 am

A couple in Toronto has decided to keep the gender of their baby, named Storm, private. Good for them! Way too many people can guess what gender I am; it takes the fun out of everything. Guessing my sexuality is quite a bit more difficult, but I digress. People are upset about Storm the genderless baby! Why? How we portray friendly and scary aliens in science fiction may help explain why people are worried about a person’s gender being indeterminate.

Let’s clear some things up first. Storm has a biological sex. I have no idea what it is, but chances are that Storm is biologically male or female, as those are pretty common ways for people to be. Of course, intersex – that is, ambiguous genitalia and/or blended sexual maturation – is a real, though minor, possibility. And that’d be just fine too.

But you and I don’t know for sure. Storm’s parents feel, first, that our society’s obsessive need to know what sex a person is biologically (and how that jibes with that person’s gender presentation) is an invasion of privacy. Second, gender is, almost by definition, impossible to keep secret. Gender is what we present to the world. Thus, if I can’t tell what gender a person is, that doesn’t mean that person’s gender is secret; it just means I don’t have a mental category for what I’m seeing. Gender presentation can be obvious, ambiguous, over-the-top, cliché, or mundane, but it’s never hidden.

So it’s not that Storm doesn’t have a sex or gender that is getting attention, but that Storm’s parents don’t seem eager to make Storm’s gender presentation obvious, nor to confirm that their baby’s gender presentation matches their baby’s biological sex. Ok, so where do aliens come into play? Read More

CATEGORIZED UNDER: Aliens
MORE ABOUT: Gender, Star Trek