Robots That Evolve Like Animals Are Tough and Smart—Like Animals

By Malcolm MacIver | February 14, 2011 6:33 pm

People who work in robotics prefer not to highlight a reality of our work: robots are not very reliable. They break, all the time. This applies to all research robots, which typically flake out just as you’re giving an important demo to a funding agency or someone you’re trying to impress. My fish robot is back in the shop, again, after a few of its very rigid and very thin fin rays broke. Industrial robots, such as those you see on car assembly lines, can only do better by operating in extremely predictable, structured environments, doing the same thing over and over again. Home robots? If you buy a Roomba, be prepared to adjust your floor plan so that it doesn’t get stuck.

What’s going on? The world is constantly throwing curveballs at robots that weren’t anticipated by the designers. In a novel approach to this problem, Josh Bongard has recently shown how we can use the principles of evolution to make a robot’s “nervous system”—I’ll call it the robot’s controller—robust against many kinds of change. This study was done using large amounts of computer simulation time (it would have taken 50–100 years on a single computer), running a program that can simulate the effects of real-world physics on robots.

What he showed is that if we force a robot’s controller to work across widely varying robot body shapes, the robot can learn faster, and be more resistant to knocks that might leave your home robot a smoking pile of motors and silicon. It’s a remarkable result, one that offers a compelling illustration of why intelligence, in the broad sense of adaptively coping with the world, is about more than just what’s above your shoulders. How did the study show it?

Each (simulated) robot starts with a very basic body plan (like a snake), a controller (a neural network with random connections of random strengths), and a sensor for light. Additional sensors report the position of body segments and the orientation of the body, plus ground contact for limbs, if the body plan has them. The task is to bring the body over to the light source, 20 meters away.

A bunch of these robots are simulated, and those that do poorly are eliminated, a kind of in-computo natural selection. The eliminated robots are replaced with versions of the ones that succeeded, after random tweaks (“mutations”) to these better controllers have been made. The process repeats until a robot that can get to the light is found. So far, there’s been no change in the shape of the body.
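The loop just described can be sketched in a few lines of Python. This is a minimal, illustrative skeleton of a generational evolutionary algorithm, not Bongard's actual code (his study used a physics simulator and a particular neural-network encoding); all function names here are my own placeholders:

```python
import random

def evolve_controller(make_random_controller, mutate, fitness, pop_size=20):
    """Illustrative generational loop: keep the better half of the
    population, replace the worse half with mutated copies of survivors,
    and repeat until one controller succeeds (fitness reaches 1.0)."""
    population = [make_random_controller() for _ in range(pop_size)]
    while True:
        scored = sorted(population, key=fitness, reverse=True)
        best = scored[0]
        if fitness(best) >= 1.0:  # a controller has reached the light
            return best
        survivors = scored[: pop_size // 2]
        # "in-computo natural selection": eliminated robots are replaced
        # by randomly tweaked ("mutated") copies of the successful ones
        population = survivors + [
            mutate(random.choice(survivors))
            for _ in range(pop_size - len(survivors))
        ]
```

In the study, fitness was the simulated robot's progress toward the light; here any stand-in controller representation and fitness function will do, as long as mutation can eventually push fitness to the success threshold.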

With the first successful robot-controller combination found (one that gets to the light), the body form changes from snake-like to something like a salamander, with short legs sticking out of the body. (All body shape changes are pre-programmed, rather than evolved.) The evolutionary process to find a successful controller-bot combination repeats, with random changes to the better controllers until, once again, a controller-bot combination is found that is able to claw its way to the light.

Then the short legs sticking out to the side slowly get longer, and rather than sticking out to the side, they progressively become more vertical. With each change in body shape, the evolutionary process to find a controller repeats. Eventually, the sim-bot evolves into something that looks like a typical four-legged animal.

That was all for round one of evolution. For round two, the best controller from round one was copied into the same starting snake-like body type that round one began with. But now, the change in body forms occurs more rapidly, so that by the time 2/3 of the “lifetime” of the robot is completed, it has reached its final dog-like form. For round three, this all happens within 1/3 of the robot’s lifetime. For round four, the body form starts off as dog-like and stays there.
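The four rounds amount to a schedule mapping a point in the robot's "lifetime" to a pre-programmed body shape. A sketch (the round structure is from the article; the function and shape names are mine):

```python
SHAPES = ["snake", "salamander", "dog"]

def body_shape(lifetime_fraction, round_number):
    """Return the pre-programmed body shape at a point in the robot's life.
    Round 1: development spread over the whole lifetime.
    Round 2: final (dog-like) form reached by 2/3 of the lifetime.
    Round 3: final form reached by 1/3 of the lifetime.
    Round 4: dog-like from the start."""
    if round_number >= 4:
        return "dog"
    develop_by = {1: 1.0, 2: 2 / 3, 3: 1 / 3}[round_number]
    progress = min(lifetime_fraction / develop_by, 1.0)
    # map developmental progress onto the snake -> salamander -> dog sequence
    index = min(int(progress * len(SHAPES)), len(SHAPES) - 1)
    return SHAPES[index]
```

Each round then reruns the evolutionary search for a controller, but against this faster and faster developmental schedule.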

So there are changes occurring at two different time scales: changes over the “lifetime” of the robot, similar to our own shape changes from fetus to adulthood; and changes that occur over generations, through which development during a lifetime occurs more rapidly. The short time scale is called “ontogenetic” and the long scale (between the different rounds) is “phylogenetic.”

The breakthrough of the work is the finding that varying body shape over ontogenetic and phylogenetic time scales produced a controller that got the body to the light much faster than when body shape never changed. For example, when the system began with the final, dog-like body type, it took much longer to evolve a solution than when the body shapes progressed from snake-like to salamander-like to dog-like. Not only was a controller evolved more rapidly, but the final solution was also much more robust to being pushed and nudged.

The complexity of the interactions over 100 CPU years of simulated evolution makes the final evolved result difficult to untangle. Nonetheless, there is good evidence that the cause of accelerated learning in the shape-changing robots is that the controllers developed through changing bodies have gone through a set of “training-wheel” body shapes: a robot starting with a four-legged body plan and a simple controller quickly fails—it can’t control the legs well and simply tips over. Starting with something on the ground that slithers, as was the case in these simulations, is less prone to such failures. So not any old sequence of shape changes works: mimicking the sequence seen in evolution garners some of the advantages that presumably made this sequence actually happen in nature, such as higher mechanical stability of more ancient forms.

Less clear is the source of increased robustness—the ability to recover from being nudged and pushed in random ways. Bongard suggests that the increased robustness of controllers that evolved with changing body shapes is due to those controllers having had to work under a wider range of sensor-motor relationships than the ones that evolved with no change in body shape. For example, a controller that is particularly sensitive to a certain relationship between, say, a sensor that reports foot position and one that reports spine position would fail (and thus be eliminated) as those relationships are systematically changed in shifting from salamander-like to dog-like body form and movement. So if I suddenly pushed down on the back of a four-legged, dog-like robot, forcing its legs to splay out so that it had to move more like a salamander, the winners of the evolutionary competition would still function, because their controllers had worked in salamander-like bodies as well as in dog-like ones.

In support of this idea, the early controllers, which were based purely on moving the body axis (“spine”), appear to still be embedded in the more advanced controllers; so if something happens to the body (say, one leg gets knocked), the robot can revert to more basic spine-based motion patterns that don’t require precise limb control. Bongard observed that the controllers evolved through changing body shape exhibited more dependence on spinal movement, using the legs more for balance, than those evolved without changing body shape. (It would be interesting to try his approach with simulated aquatic robots, which can be neutrally buoyant, like many aquatic animals, and thus don’t have the “tipping over” problem of Bongard’s simulated terrestrial robots.)

To be fair to existing robots, even with a controller that worked under every conceivable body shape and environmental condition, they would still break all the time. This is because the materials we make them out of are not self-healing, in contrast to the biomaterials of animals. Animals are also constantly breaking (at least on a micro level), and the body constantly repairs this. Bones subjected to higher loads, like the racket arm of a tennis player, get measurably thicker. Not only is the body self-repairing; recent innovative computer simulations of real neurons that generate basic rhythms like walking and chewing have shown that these neurons keep generating the rhythm despite big variations in their functioning and connections. These functions are so important to continued existence—the body’s version of too big to fail—that embedded within them are solutions to just about everything the world can throw at them.

This new work provides the fascinating and useful result that fashioning controllers that work through a sequence of body shapes mimicking those seen in evolution accelerates the learning of new movement tasks and increases robustness to all the hard knocks that life inevitably delivers. It suggests that without the sequence of body shapes that evolution and development bring about, we might have nervous systems that are much too finely tuned to our adult upright bipedal form. Instead of crawling to help after we twist our ankle in the woods, we’d be left with nothing but howling for help.

MORE ABOUT: embodiment, evolution

Comments (9)

  1. Can you make a passenger pigeon or an ivory-billed woodpecker or a sea mink or a Steller’s sea cow?

  2. Malcolm MacIver

    Unfortunately not at the moment. Nor electric fish, star-nosed moles, or pangolins. The paper only considers cubist snake-like, salamander-like, and dog-like forms. But, just wait for 2.0!

  3. John R Anderson

    Wonderful article! I’ve had a long-standing interest in AI, especially neural networks and genetic programming, and thoroughly enjoyed the article. Thanks!

  4. Malcolm MacIver

    John Anderson – thanks! It’s fun to explain really exciting new work buried in professional journals to interested people who might not otherwise get a chance to see it.

  5. Jillinthebox

    Can Josh Bongard’s simulated neural network controller be used to control a more complex robot? For example an Aibo? While the erector set robot in the video is an interesting demo of the simulation, it seems to me that one of the advantages of evolution in nature is elegance of movement. Nature is robust and at the same time graceful. Will we see elegance in v2.0?

  6. Malcolm MacIver

    @Jillinthebox, good question. A good first pass measure on the complexity of the motion problem for a robot is how many “degrees of freedom” the robot has – how many independently controllable joints. Aibo had 20; Josh’s robots had 10 or 16 (10 for four legs, 16 for six). That’s pretty close. Lack of grace in the physical models can come from a bunch of things, including really severe limits we have on what kind of motors we can build and how responsive they can be while remaining compact. Thus, the lack of grace could also be from the non-controller aspects of the physical robots. If I recall from the movies though, the simulated robots were not all that elegant either – if this is lack of controller sophistication, it may be due to not enough generations of evolution, or limits to the genetic algorithm approach, or limits to how the physics was modeled.

  7. Adriaan

    Fascinating article, thanks!

    Just curious as to how the body shape progressions were chosen, as that seems to be the one element here that was ‘imposed’ rather than evolved. Of course, that doesn’t detract from the current finding, but perhaps how the shapes progress affects the results? In line with the thought that the more complicated shapes have the simpler controllers embedded in them, maybe a shape progression where the basic forms are more similar to a disabled final form would be even more stable?

    On a separate tangent, I’m just curious – do you know how the relative success of each controller was judged? Was it merely the absolute distance, or were there factors for, say, average movement speed in any direction, or similar?

  8. Malcolm MacIver

    I suspect it was chosen as an approximation to what we see through evolutionary history in terms of body form progression. I would expect the choice of these shapes would affect the results. For example, if the sequence had been from dog-like to snake like, probably one of the key results would not have been found (that going through shape changes increases the rate of learning). Your suggestion regarding an alternative sequence of shapes (disabled final form) is definitely one that would be encouraged by the paper — design the morphology sequence to maximize final robustness. I don’t think the point is specific to the (evolution-like) sequence.

    Fitness was assessed by the robot’s ability to approach the light. I can’t see a specification of whether this was something like continually decreasing distance to the light for a certain amount of time, or arrival at the light, or velocity to the light.

    UPDATE: Bongard stated in an email that the fitness function was the average of several light sensors. If the average exceeded a pre-set threshold, the controller was considered successful. The higher the threshold, the closer the robot has to get to the light to count as successful. The threshold was fixed at a level at which, most of the time, the robots got about halfway to the light.

  9. p.udhaya shankar

    I want to know how to make a robot like a dog, with the steps; please send it to me.

