While it’s clear that we have a lot going for us right out of the womb, it’s equally clear that one of our most admirable qualities is that we rapidly “get it” – we learn languages, skills for manipulating objects, hip hop dance moves, recipes for coconut mojitos, and how to charm people into liking us (ideally, in that order). Rather than this kind of experiential learning, early AI work focused on sophisticated reasoning problems. The touchstone for these efforts was Alan Turing’s original attempt to mimic the reasoning processes of mathematicians engaged in solving a math problem – an effort that gave us many great things, particularly a distillation of what it means for something to be computable that stands as one of the great intellectual accomplishments of the twentieth century. That form of AI, while successful in particular domains – chess playing and expert systems, for example – has been less successful at problems of ongoing embodied activity, such as the aforementioned coconut mojito making. What if, instead of mimicking a mathematician trying to solve a math problem, Alan Turing had decided to mimic a scientist trying to determine the validity of a hypothesis? According to some developmental psychologists, in doing so we’d actually be emulating the reasoning processes of an infant, and thus, potentially, unlocking the great power of experiential learning.
Building robot minds that implement the scientific process, rather than mathematical problem solving, is essentially what’s happening in a few corners of robotics, most recently with the Xpero project, an effort to develop an embodied cognitive system that learns about its world much as an infant would. It’s one of a host of robo-infants being worked on (here’s a nice overview graphic). This approach has led to some very impressive achievements, including an “evil starfish” robot that can quickly learn how to control its body after several of its “limbs” have been chopped off.
Hod Lipson (left) and yours truly pulling legs off the evil starfish in 2006.
In 2006, Hod Lipson and co-workers published a short paper with the sexy title “Resilient Machines Through Continuous Self-Modeling.” In it, they demonstrated how a small, starfish-like robot (aka “the evil starfish”) could automatically learn its own body shape and movement capabilities through an automated process of scientific inquiry. It works something like this: first, make an arbitrary movement, meaning the robot sends signals to its body without knowing what those signals will do. While sending these movement signals, the robot records sensory signals that tell it what happened to its body as a result of that movement (scientific process analog: experiment). Second, generate a small number of models of the body that are compatible with the recorded sensory information (analog: hypothesis generation). Third, through some fast on-board simulation (aka thinking), the robot figures out which movement(s) would give it the most information for distinguishing between the different body models compatible with the information collected so far (analog: prioritizing hypotheses for testing). Fourth, the robot executes these movements and uses the resulting sensory information to further refine its guess as to what its body is (analog: hypothesis testing and refinement).
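To make the four steps concrete, here is a minimal toy sketch of that cycle, not the actual algorithm from Lipson’s paper. In this invented example, the robot’s unknown “body” is just a (gain, offset) pair relating motor commands to a sensor reading; candidate body models are other such pairs, and the variance of their predictions stands in (crudely) for expected information gain:

```python
def predict(model, action):
    # A candidate "body model" is a (gain, offset) pair; its predicted
    # sensor reading for a motor command is a simple linear response.
    gain, offset = model
    return gain * action + offset

def self_modeling_loop(true_body, candidates, actions, tol=1e-9):
    """Toy version of the cycle: act, hypothesize, pick the most
    informative next action, then test and refine."""
    history = []
    while len(candidates) > 1:
        # Step 3 analog: choose the action on which the surviving models
        # disagree most (prediction variance as a crude proxy for
        # expected information gain).
        def disagreement(a):
            preds = [predict(m, a) for m in candidates]
            mean = sum(preds) / len(preds)
            return sum((p - mean) ** 2 for p in preds)
        action = max(actions, key=disagreement)
        if disagreement(action) <= tol:
            break  # remaining models are empirically indistinguishable
        # Steps 1 and 4 analog: execute the movement and record the
        # resulting sensory signal from the (unknown) true body.
        observation = predict(true_body, action)
        history.append((action, observation))
        # Step 2 analog: keep only models consistent with what was sensed.
        candidates = [m for m in candidates
                      if abs(predict(m, action) - observation) <= tol]
    return candidates, history

# The robot starts with several hypotheses about its own body and
# whittles them down to the true one in a couple of "experiments."
guesses = [(2.0, 0.5), (1.0, 0.5), (2.0, -0.5), (0.5, 1.0), (1.75, 1.0)]
final, experiments = self_modeling_loop((2.0, 0.5), guesses,
                                        actions=[0.0, 1.0, -1.0, 2.0])
```

The real robot does the same kind of thing with far richer models (simulated three-dimensional bodies) and uses physics simulation rather than a one-line formula to generate predictions, but the logic of choosing the experiment that best separates competing hypotheses is the heart of it.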
What is great about this process, as I discovered when I visited Lipson’s lab some years ago to give a talk at Cornell, is that it gives the robot an amazing degree of robustness. The starfish robot shown in the photo has had one of its arms pulled off, and after a brief learning process, it figures out its new body shape and saunters off! It was slightly unnerving to witness. There is something about an animal recovering from damage that gives us a sense that it cares about its continued existence. In some sense, this is part of the essence of what it means to be a living organism: something that cares about its continued existence and acts so as to further that goal. When a machine acts in this manner, it triggers associations that make it feel biological.
If indeed, as Alison Gopnik and others have argued, we all grow up absorbing the important things we need to know through something like the scientific process, then current work on algorithms that emulate that process may be just what AI needs to make breakthroughs on the problems we really want our robots to solve, such as making us a coconut mojito with just the right amount of muddled mint.
For more information on the European Xpero project, visit their website. A prior project, also EU-sponsored, was the iCub. A nice overview graphic of different robot infant approaches was in this July’s issue of IEEE Spectrum. Some interesting recent work on formalizing the discovery of regularities through experiments can be found in Hod Lipson’s “Selected Recent Publications.” Here is a thoughtful commentary on the starfish robot work by Chris Adami. Using data to automatically do science has also received attention in bioinformatics, most recently highlighted in articles about Sergey Brin’s datamining efforts to find a cure for Parkinson’s, in this podcast, and in academic circles here and here.
Image of robot in crib by Malcolm MacIver using free 3-D models on TurboSquid.