Category: Robots

Your Body, Your Choice: Fight for Your Somatic Rights

By Kyle Munkittrick | June 20, 2011 12:18 pm

“My body, my choice.” We hear that slogan constantly, but what the hell do those four words mean?

Many of us have one or two political issues surrounding our bodies that get us fired up. Many of you reading this right now probably have some hot-button issue on your mind. Maybe it’s abortion, or recreational drug usage, or marriage rights, or surrogate pregnancy, or assisted suicide, or sex work, or voluntary amputation, or gender reassignment surgery.

For each of these issues, there are four words that define our belief about our rights: “My body, my choice.” How you react to those words determines which side of each of those debates you are on. That’s just the thing, though – there aren’t a bunch of little debates; there is just one big debate being argued on multiple fronts. All of these issues find their home in my field of philosophy: bioethics. And within the bioethics community, there is a small contingent that supports a person’s right to choose what to do with their body in every single one of those examples. Transhumanists make up part of that contingent.

If you are pro-choice on abortion or think that gender reassignment surgery is an option everyone should have, you agree with transhumanism on at least one issue. Many current political arguments are skirmishes and turf battles in a larger movement toward what one might call somatic rights. In some cases the law is clear, as it is with marriage rights or drug usage, and the arguments are over whether to remove or amend it. Other cases are so ambiguous that the law is struggling to define itself, as with surrogate pregnancy and voluntary amputation. And sooner or later (I’ve given up on guessing time-frames), instead of merely arguing over what we’re allowed to do with the bodies we’re born with, we will be debating our right to choose what kind of body we have. By looking at the futuristic ideas of genetic engineering and robotic prosthetic technology, we can understand how transhumanism maximizes the “my body, my choice” mantra.

Read More

CATEGORIZED UNDER: Cyborgs, Politics, Robots, Transhumanism

If Doctors Need Pit Crews, Tricorders Should Be Part of the Team

By Kyle Munkittrick | May 26, 2011 9:54 pm

Health care is broken. In the US, quality of care is tanking. Even in countries with successful universal health care systems, costs are rising too fast for those systems to cope. So what do we do?

Atul Gawande, who knows a thing or two about improving healthcare, argues in his commencement address to Harvard that doctors need pit crews:

We are at a cusp point in medical generations. The doctors of former generations lament what medicine has become. If they could start over, the surveys tell us, they wouldn’t choose the profession today. They recall a simpler past without insurance-company hassles, government regulations, malpractice litigation, not to mention nurses and doctors bearing tattoos and talking of wanting “balance” in their lives. These are not the cause of their unease, however. They are symptoms of a deeper condition—which is the reality that medicine’s complexity has exceeded our individual capabilities as doctors.

Gawande has two main arguments. First, that when doctors use checklists they prevent errors and quality of care goes way up. Second, that doctors need to stop acting like autonomous problem solvers and start seeing themselves as members of a tight-knit team. Gawande is one of the few sane voices in the health care debate. However, later on in his speech, he says that the solution to the health care conundrum is not technology. To a large degree, I agree with him. But not completely. Tech still has a big role to play. If we take a closer look at Dune and Star Trek, we’ll see why Qualcomm and the X-Prize Foundation are ponying up 10 million bucks to fund a piece of medical technology that could help bring Gawande’s dream of team-based medicine a bit closer to reality.

Read More

Transhumanism: A Secular Sandbox for Exploring the Afterlife?

By Malcolm MacIver | February 28, 2011 1:35 am

I am a scientist and academic by day, but by night I’m increasingly called upon to talk about transhumanism and the Singularity. Last year, I was science advisor to Caprica, a show that explored relationships between uploaded digital selves and real selves. Some months ago I participated in a public panel on “Mutants, Androids, and Cyborgs: The science of pop culture films” for Chicago’s NPR affiliate, WBEZ.  This week brings a panel at the Director’s Guild of America in Los Angeles, entitled “The Science of Cyborgs” on interfacing machines to living nervous systems.

The latest panel to be added to my list is a discussion about the first transhumanist opera, Tod Machover’s “Death and the Powers.” The opera is about an inventor and businessman, Simon Powers, who is approaching the end of his life. He decides to create a device (called The System) that he can upload himself into (hmm, I wonder who this might be based on?). After Act 2, the entire set, including a host of OperaBots and a musical chandelier (created at the MIT Media Lab), becomes the physical manifestation of the now incorporeal Simon Powers, whose singing we still hear but who has disappeared from the stage. Much of the opera explores how his relationships with his wife and daughter change post-uploading. His daughter and wife ask whether The System is really him. They wonder if they should follow his pleas to join him, and whether life will still be meaningful without death. The libretto, by the renowned Robert Pinsky, renders these questions in beautiful poetry. It will open in Chicago in April.

These experiences have been fascinating. But I can’t help wondering: what’s with all the sudden interest in transhumanism and the Singularity?

Read More

Robots That Evolve Like Animals Are Tough and Smart—Like Animals

By Malcolm MacIver | February 14, 2011 6:33 pm

People who work in robotics prefer not to highlight a reality of our work: robots are not very reliable. They break, all the time. This applies to all research robots, which typically flake out just as you’re giving an important demo to a funding agency or someone you’re trying to impress. My fish robot is back in the shop, again, after a few of its very rigid and very thin fin rays broke. Industrial robots, such as those you see on car assembly lines, can only do better by operating in extremely predictable, structured environments, doing the same thing over and over again. Home robots? If you buy a Roomba, be prepared to adjust your floor plan so that it doesn’t get stuck.

What’s going on? The world is constantly throwing curveballs at robots that weren’t anticipated by the designers. In a novel approach to this problem, Josh Bongard has recently shown how we can use the principles of evolution to make a robot’s “nervous system”—I’ll call it the robot’s controller—robust against many kinds of change. This study was done using large amounts of computer simulation time (it would have taken 50–100 years on a single computer), running a program that can simulate the effects of real-world physics on robots.

What he showed is that if we force a robot’s controller to work across widely varying robot body shapes, the robot can learn faster, and be more resistant to knocks that might leave your home robot a smoking pile of motors and silicon. It’s a remarkable result, one that offers a compelling illustration of why intelligence, in the broad sense of adaptively coping with the world, is about more than just what’s above your shoulders. How did the study show it?
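
To make that concrete, here is a minimal, purely illustrative sketch of the idea (not Bongard’s actual code or simulator): candidate controllers are scored on their average performance across several different simulated body shapes rather than on a single body, so only controllers that cope with morphological change survive the evolutionary loop. The simulate function and the body-shape list below are hypothetical stand-ins for the physics simulation described above.

```python
import random

# Hypothetical stand-in for the physics simulator: returns a score for how
# well a robot with this body shape moves under the given controller.
def simulate(controller, body_shape):
    return -sum((w - b) ** 2 for w, b in zip(controller, body_shape))

# Three made-up morphologies the controller must handle (e.g. sprawled,
# intermediate, and upright postures).
BODY_SHAPES = [[0.2, 0.5, 0.9], [0.4, 0.4, 0.4], [0.9, 0.1, 0.6]]

def fitness(controller):
    # Score on EVERY body shape and average; a controller tuned to a single
    # morphology does poorly here, which is the pressure the study exploits.
    return sum(simulate(controller, shape) for shape in BODY_SHAPES) / len(BODY_SHAPES)

def mutate(controller, rate=0.1):
    return [w + random.gauss(0, rate) for w in controller]

# Simple (1+1) evolutionary loop: keep the parent unless the mutant scores better.
controller = [random.random() for _ in range(3)]
for _ in range(1000):
    child = mutate(controller)
    if fitness(child) > fitness(controller):
        controller = child

print("evolved controller weights:", controller)
```

The real study evolved controllers for simulated robots in a full physics engine; the point of this toy version is only to show where the “many bodies, one controller” requirement enters the fitness function.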

Read More

MORE ABOUT: embodiment, evolution

The Turkle Test

By Kyle Munkittrick | February 6, 2011 9:24 am

Can you have an emotional connection with a robot? Sherry Turkle, Director of the MIT Initiative on Technology and Self, believes you certainly could. Whether or not you should is the question. People, especially children, project personalities and emotions onto rudimentary robots. As the Chronicle of Higher Education article on her work shows, the result of believing a robot can feel is not always happy:

One day during Turkle’s study at MIT, Kismet malfunctioned. A 12-year-old subject named Estelle became convinced that the robot had clammed up because it didn’t like her, and she became sullen and withdrew to load up on snacks provided by the researchers. The research team held an emergency meeting to discuss “the ethics of exposing a child to a sociable robot whose technical limitations make it seem uninterested in the child,” as Turkle describes in [her new book] Alone Together.

We want to believe our robots love us. Movies like Wall-E, The Iron Giant, Short Circuit and A.I. are all based on the simple idea that robots can develop deep emotional connections with humans. For fans of the Half-Life video game series, Dog, a large scrapheap monstrosity with a penchant for dismembering hostile aliens, is one of the most lovable and loyal characters in the game. Science fiction is packed with robots that endear themselves to us, such as Data from Star Trek, the replicants in Blade Runner, and Legion from Mass Effect. Heck, even R2-D2 and C-3PO seem endeared to one another. And Futurama has a warning for all of us.

Yet these lovable mechanoids are not what Turkle is critiquing. Turkle is no Luddite, and she does not strike me as a speciesist. What Turkle is critiquing is contentless, performed emotion. Robots like Kismet and Cog are representative of a group of robots where the brains are second to the bonding. Humans have evolved to react to subtle emotional cues that allow us to recognize other minds, other persons. Kismet and Cog have rather rudimentary A.I., but very advanced mimicking and response abilities. The result is that they seem to understand us. Part of what makes HAL-9000 terrifying is that we cannot see it emote. HAL simply processes and acts.

On the one hand, we have empty emotional aping; on the other, faceless super-computers. What are we to do? Are we trapped between the mindless bot with the simulated smile and the sterile super-mind calculating the cost of lives?

Read More

Why I'm Not Afraid of the Singularity

By Kyle Munkittrick | January 20, 2011 2:27 pm

the screens, THE SCREENS THEY BECKON TO ME

I have a confession. I used to be all about the Singularity. I thought it was inevitable. I thought for certain that some sort of Terminator/HAL9000 scenario would happen when ECHELON achieved sentience. I was sure The Second Renaissance from the Animatrix was a fairly accurate depiction of how things would go down. We’d make smart robots, we’d treat them poorly, they’d rebel and slaughter humanity. Now I’m not so sure. I have big, gloomy doubts about the Singularity.

Michael Anissimov tries to restoke the flames of fear over at Accelerating Future with his post “Yes, The Singularity is the Single Biggest Threat to Humanity”:

Combine the non-obvious complexity of common sense morality with great power and you have an immense problem. Advanced AIs will be able to copy themselves onto any available computers, stay awake 24/7, improve their own designs, develop automated and parallelized experimental cycles that far exceed the capabilities of human scientists, and develop self-replicating technologies such as artificially photosynthetic flowers, molecular nanotechnology, modular robotics, machines that draw carbon from the air to build carbon robots, and the like. It’s hard to imagine what an advanced AGI would think of, because the first really advanced AGI will be superintelligent, and be able to imagine things that we can’t. It seems so hard for humans to accept that we may not be the theoretically most intelligent beings in the multiverse, but yes, there’s a lot of evidence that we aren’t.

….

Humans overestimate our robustness. Conditions have to be just right for us to keep living. If AGIs decided to remove the atmosphere or otherwise alter it to pursue their goals, we would be toast. If temperatures on the surface changed by more than a few dozen degrees up or down, we would be toast. If natural life had to compete with AI-crafted cybernetic organisms, it could destroy the biosphere on which we depend. There are millions of ways in which powerful AGIs with superior technology could accidentally make our lives miserable, simply by not taking our preferences into account. Our preferences are not a magical mist that can persuade any type of mind to give us basic respect. They are just our preferences, and we happen to be programmed to take each other’s preferences deeply into account, in ways we are just beginning to understand. If we assume that AGI will inherently contain all this moral complexity without anyone doing the hard work of programming it in, we will be unpleasantly surprised when these AGIs become more intelligent and powerful than ourselves.

Oh my stars, that does sound threatening. But again, that weird, nagging doubt lingers in the back of my mind. For a while, I couldn’t put my finger on the problem, until I re-read Anissimov’s post and realized that my disbelief flared up every time I read something about AGI doing something. AGI will remove the atmosphere. Really? How? The article, and in fact all arguments about the danger of the Singularity, necessarily presume one single fact: that AGI will be able to interact with the world beyond computers. I submit that, in practical terms, they will not.

Read More

Do Androids Dream of Electric Sugar Plums?

By Eric Wolff | December 27, 2010 4:00 am

I thought about closing out the year with news of the strawberry genome sequencing project, and dipping into the results from the cocoa genome sequencing project, while perhaps enjoying a rainbow from a solar-powered rainbow-making machine. They all seemed cool and futuristic and almost certainly something we’d find in the land of science fiction.

But then, there it was: A Robot Christmas. Two weeks ago, the team at Robots Podcast put out a call for robotics labs to make holiday videos, and so far six different robotics labs have responded with videos of their machines singing or playing Christmas carols, decorating, and otherwise wishing us season’s greetings. Since I can’t be the only one who wanted to know how our future overlords celebrate the holiday, I thought I’d share. Happy New Year, everyone!

A Robotic Christmas (Laboratory of Intelligent Systems, EPFL, Lausanne, Switzerland)

Read More

CATEGORIZED UNDER: Robots

Old McRobot had a Farm, Beep-I, Bzzt-I, O!

By Eric Wolff | December 6, 2010 4:00 am

Farming has long evaded true automation. Where manufacturers create controlled environments perfect for precisely attuned machines performing repetitive tasks, the messiness of biology has long made automating growing things extremely challenging. Robots didn’t have the precision to pick things growing at uncertain heights, they didn’t have the judgment to identify ripeness, and they weren’t smart enough to navigate fields or greenhouses of uncertain geometry.

Well, they used to lack those traits.

Earlier this week, the Japanese Agriculture and Food Research Organization presented its strawberry-picking robot: a droid that rolls along a track through fields of strawberries, scans the berries with stereoscopic cameras to check their color, then picks them if they’re ripe. In this way it can whip through 247 acres in 300 hours, far faster than the typical rate of 247 acres in 500 hours using human pickers.
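
The announcement doesn’t describe the robot’s vision pipeline, but the pick-or-skip decision it sketches (look at each berry through the stereo cameras, judge its color, pick only the ripe ones) can be illustrated with a short, hedged sketch. The redness metric, threshold, and function names below are all assumptions for illustration, not the actual system.

```python
# Hedged sketch of the pick-or-skip decision described above; the real robot's
# vision pipeline and ripeness criteria are not published, so everything here
# (redness metric, threshold, function names) is an illustrative assumption.

RIPENESS_THRESHOLD = 0.6  # assumed cutoff on the average redness score

def redness(pixel):
    """Rough redness score for an (R, G, B) pixel, from 0.0 to 1.0."""
    r, g, b = pixel
    return max(0.0, (r - max(g, b)) / 255.0)

def is_ripe(berry_pixels):
    """Call a berry ripe if its average redness clears the threshold."""
    return sum(redness(p) for p in berry_pixels) / len(berry_pixels) >= RIPENESS_THRESHOLD

def pick_targets(berries):
    """berries: list of (position, pixels) pairs located by the stereo cameras."""
    return [position for position, pixels in berries if is_ripe(pixels)]

# Example: one deep-red berry gets picked, one still-green berry is skipped.
print(pick_targets([
    ((0.4, 1.2), [(220, 30, 40)] * 50),
    ((0.6, 1.1), [(120, 180, 60)] * 50),
]))
```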

Read More

CATEGORIZED UNDER: Robots

DARPA Developing a Robotic Pilot for Their Flying Car

By Cyriaque Lamar - io9 | November 9, 2010 6:19 pm

Today the US Department of Defense announced that it would be collaborating with Carnegie Mellon University to develop an autonomous copilot for DARPA’s upcoming “helicopter jeep” project. Yes, the military is developing a helicopter jeep.

Here’s the scoop on DARPA’s flying car from CMU:

The Defense Advanced Research Projects Agency (DARPA) has awarded a 17-month, $988,000 contract to Carnegie Mellon’s Robotics Institute to develop an autonomous flight system for the Transformer (TX) Program, which is exploring the feasibility of a military ground vehicle that could transform into a vertical-take-off-and-landing (VTOL) air vehicle.

Read More

CATEGORIZED UNDER: Robots, Transportation

Mutants, Androids, Cyborgs and Pop Culture Films

By Malcolm MacIver | November 2, 2010 1:07 pm

WBEZ, the Chicago affiliate of National Public Radio, recently gathered together several of my fellow science and engineering researchers at Northwestern University to talk about the science of science fiction films. The panel, and just short of 500 people from the community and university, watched clips from Star Wars, Gattaca, Minority Report, Eternal Sunshine of the Spotless Mind, and The Matrix. I was the robot/AI guy commenting on the robot spiders of Minority Report; Todd Kuiken, a designer of neuroprosthetic limbs, commented on Luke getting a new arm in Star Wars: The Empire Strikes Back; Tom Meade, a developer of medical biosensors and new medical imaging techniques, commented on Gattaca; and Catherine Wooley, who studies memory, commented on Eternal Sunshine.

The full audio of the event can be streamed or downloaded from here.

Read More
