None of our machines can do what a cuttlefish or octopus can do with its skin: change its pattern, colour, and texture to blend perfectly into its surroundings, in a matter of milliseconds. Take a look at this classic video of an octopus revealing itself.
But Stephen Morin from Harvard University has been trying to duplicate this natural quick-change ability with a soft-bodied, colour-changing robot. For the moment, it comes nowhere near its natural counterparts – its camouflage is far from perfect, it is permanently tethered to cumbersome wires, and its changing colours have to be controlled by an operator. But it’s certainly a cool (and squishy) step in the right direction.
The camo-bot is an upgraded version of a soft-bodied machine that strode out of George Whitesides’ laboratory at Harvard University last year. That white, translucent machine ambled about on four legs, swapping hard motors and hydraulics for inflatable pockets of air. Now, Morin has fitted the robot’s back with a sheet of silicone containing a network of tiny tubes, each less than half a millimetre wide. By pumping coloured liquids through these “microfluidic” channels, he can change the robot’s colour in about 30 seconds.
One minute, a cockroach is running headfirst off a ledge. The next minute, it’s gone, apparently having plummeted to its doom. But wait! It’s actually clinging to the underside of the ledge! This cockroach has watched one too many action movies.
The roach executes its death-defying manoeuvre by turning its hind legs into grappling hooks and its body into a pendulum. Just as it is about to fall, it grabs the edge of the ledge with the claws of its hind legs, swings underneath the ledge and hangs upside-down. In the wild, this disappearing act allows it to avoid falls and escape from predators. And in Robert Full’s lab at the University of California, Berkeley, the roach’s trick is inspiring the design of agile robots.
Full studies how animals move, but his team discovered the cockroach’s behaviour by accident. “We were testing the animal’s athleticism in crossing gaps using their antennae, and were surprised to find the insect gone,” says Full. “After searching, we discovered it upside-down under the ledge. To our knowledge, this is a new behavior, and certainly the first time it has been quantified.”
Thomas Libby filmed rainbow agamas – a beautiful species with the no-frills scientific name of Agama agama – as they leapt from a horizontal platform onto a vertical wall. Before they jumped, they first had to vault onto a small platform. If the platform was covered in sandpaper, which provided a good grip, the agama could angle its body perfectly. In slow motion, it looks like an arrow, launching from platform to wall in a smooth arc (below, left).
If the platform was covered in a slippery piece of card, the agama lost its footing and it leapt at the wrong angle. It ought to have face-planted into the wall, but Libby found that it used its long, slender tail to correct itself (below, right). If its nose was pointing down, the agama could tilt it back up by swinging its tail upwards.
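The tail trick is a matter of conserving angular momentum: in mid-air the lizard carries essentially zero net spin, so rotating the tail one way forces the body to rotate the other. A minimal sketch, with invented moments of inertia (real agama values would need measuring):

```python
# Hedged sketch of zero-angular-momentum tail correction (numbers hypothetical).
# In free fall total angular momentum is conserved, so sweeping the tail tip
# upward pitches the body nose-up, scaled by the ratio of moments of inertia.

def nose_up_correction(tail_up_swing_deg, I_tail, I_body):
    """Degrees of nose-up body pitch produced by sweeping the tail tip upward
    through `tail_up_swing_deg` degrees, from I_body*dA_body = -I_tail*dA_tail
    (tail and body rotate in opposite senses about the pitch axis)."""
    return I_tail / I_body * tail_up_swing_deg

# A tail a quarter as "heavy" rotationally as the body: a 60-degree tail sweep
# buys the lizard 15 degrees of pitch correction.
pitch = nose_up_correction(tail_up_swing_deg=60.0, I_tail=0.5, I_body=2.0)
```

The same budget explains why long, slender tails help: more inertia in the tail means more correction per degree of sweep.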
This is where we are now: at Duke University, a monkey controls a virtual arm using only its thoughts. Miguel Nicolelis had fitted the animal with a headset of electrodes that translates its brain activity into movements. It can grab virtual objects without using its arms. It can also feel the objects without its hands, because the headset stimulates its brain to create the sense of different textures. Monkey think, monkey do, monkey feel – all without moving a muscle.
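Translating brain activity into movement often comes down to a decoder. A common approach in the field – not necessarily the exact method used here – is a linear map from neural firing rates to movement commands. A toy sketch with invented weights:

```python
# Hypothetical sketch of a linear brain-machine decoder: firing rates from a
# handful of neurons are mapped to a 2-D cursor velocity by a weight matrix.
# The weights below are invented; real decoders are fitted to recorded data.

def decode_velocity(firing_rates, weights):
    """velocity[i] = sum_j weights[i][j] * rate[j] -- a plain linear map."""
    return [sum(w * r for w, r in zip(row, firing_rates)) for row in weights]

weights = [[0.1, -0.2, 0.05],   # each neuron's contribution to x-velocity
           [0.0,  0.3, -0.1]]   # each neuron's contribution to y-velocity
rates = [10.0, 5.0, 20.0]       # spikes per second from three neurons
vx, vy = decode_velocity(rates, weights)
```

Real systems record from hundreds of neurons and refit the weights continually, but the core idea – a learned mapping from rates to motion – is the same.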
And this is where Nicolelis wants to be in three years: a young quadriplegic Brazilian man strolls confidently into a massive stadium. He controls his four prosthetic limbs with his thoughts, and they in turn send tactile information straight to his brain. The technology melds so fluidly with his mind that he confidently runs up and delivers the opening kick of the 2014 World Cup.
This sounds like a far-fetched dream, but Nicolelis – a big soccer fan – is talking to the Brazilian government to make it a reality. He has created an international consortium called the Walk Again Project, consisting of non-profit research institutions in the United States, Brazil, Germany and Switzerland. Their goal is to create a “high performance brain-controlled prosthetic device that enables patients to finally leave the wheelchair behind.”
Two spiders are walking along a track – a seemingly ordinary scene, but these are no ordinary spiders. They are molecular robots and they, like the tracks they stride over, are fashioned from DNA. One of them has four legs and marches over its DNA landscape, turning and stopping with no controls from its human creators. The other has four legs and three arms – it walks along a miniature assembly line, picking up three pieces of cargo from loading machines (also made of DNA) and attaching them to itself. All of this is happening at the nanometre scale, far beyond what the naked eye can discern. Welcome to the exciting future of nanotechnology.
The two robots are the stars of two new papers that describe the latest advances in making independent, programmable nano-scale robots out of individual molecules. Such creations have featured in science-fiction stories for decades, from Michael Crichton’s Prey to Red Dwarf, but in reality, there are many barriers to creating such machines. For a start, big robots can be loaded with masses of software that guides their actions – no such luck at the nano-level.
The two new studies have solved this problem by programming the robots’ actions into their environment rather than their bodies. Standing on the shoulders of giants, both studies fuse two of the most interesting advances in nanotechnology: the design of DNA machines, fashioned from life’s essential double helix and possessing the ability to walk about; and the invention of DNA origami, where sets of specially constructed DNA molecules can be fused together into beautiful sheets and sculptures. Combine the two and you get a robot walker and a track for it to walk upon.
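The "program the environment, not the robot" idea can be caricatured in a few lines: the walker is a dumb reader, and every instruction lives in the track it walks on. This encoding is entirely invented for illustration – the real machines work through DNA strand chemistry, not symbols:

```python
# Toy model of the key idea: the program lives in the track, not the walker.
# Each site on this hypothetical "origami" track encodes the walker's next
# action; the walker itself just reads and obeys until it reaches a stop site.

def run_walker(track):
    position, cargo = 0, []
    while track[position] != "STOP":
        action = track[position]
        if action.startswith("PICK:"):
            cargo.append(action.split(":")[1])  # collect cargo named by the site
        position += 1                           # every non-stop site means "step on"
    return position, cargo

track = ["STEP", "PICK:gold", "STEP", "PICK:dye", "STOP"]
final_pos, collected = run_walker(track)  # walker halts at the STOP site
```

Change the track and you change the behaviour, with no change to the walker – which is exactly what makes the DNA version programmable.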
In a Swiss laboratory, a group of ten robots is competing for food. Prowling around a small arena, the machines are part of an innovative study looking at the evolution of communication, from engineers Sara Mitri and Dario Floreano and evolutionary biologist Laurent Keller.
They programmed robots with the task of finding a “food source” indicated by a light-coloured ring at one end of the arena, which they could “see” at close range with downward-facing sensors. The other end of the arena, labelled with a darker ring, was “poisoned”. The bots get points based on how much time they spend near food or poison, which indicates how successful they are at their artificial lives.
They can also talk to one another. Each robot can produce a blue light that the others detect with their cameras. Because successful robots congregate near the food, their flashing lights can give away its position. In short, the blue light carries information, and within a few generations, the robots evolved the ability to conceal that information and deceive one another.
Their evolution was possible because each robot was powered by an artificial neural network governed by a binary “genome”. The network consisted of 11 neurons that were connected to the robot’s sensors and 3 that controlled its two tracks and its blue light. The neurons were linked via 33 connections – synapses – and the strength of each connection was controlled by a single 8-bit gene. In total, each robot’s 264-bit genome determines how it reacts to information gleaned from its senses.
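The genome-to-network mapping described above can be sketched directly: 33 synapses, one 8-bit gene per synapse, 264 bits in all. The exact decoding used in the study isn't given here, so mapping each byte onto a weight in [−1, 1] is an assumption:

```python
import random

# Hedged sketch of decoding a 264-bit genome into 33 synapse weights.
# The byte -> weight scaling is an invented assumption, not the study's scheme.

N_SYNAPSES, BITS_PER_GENE = 33, 8

def decode_genome(genome_bits):
    assert len(genome_bits) == N_SYNAPSES * BITS_PER_GENE  # 264 bits
    weights = []
    for i in range(N_SYNAPSES):
        gene = genome_bits[i * BITS_PER_GENE:(i + 1) * BITS_PER_GENE]
        value = int("".join(map(str, gene)), 2)  # interpret 8 bits as 0..255
        weights.append(value / 127.5 - 1.0)      # scale to the range [-1, 1]
    return weights

genome = [random.randint(0, 1) for _ in range(264)]
weights = decode_genome(genome)
```

With the weights fixed by the genome, the robot's whole sensorimotor repertoire is heritable, which is what lets selection act on it.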
In the experiment, each round consisted of 100 groups of 10 robots, each competing for food in a separate arena. The 200 robots with the highest scores – the fittest of the population – “survived” to the next round. Their 33 genes were randomly mutated (with a 1 in 100 chance that any given bit would change) and the robots were “mated” with each other to shuffle their genomes. The result was a new generation of robots, whose behaviour was inherited from the most successful representatives of the previous cohort.
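That selection scheme is a classic genetic algorithm, and a toy version is short: keep the top 200 of 1,000 genomes, flip each bit with probability 1 in 100, and recombine pairs to refill the population. The fitness function here is a stand-in (count of 1-bits), not the robots' real food-versus-poison score:

```python
import random

# Toy genetic algorithm mirroring the rules above. Fitness is a placeholder.
POP, SURVIVORS, GENOME_LEN, MUT_RATE = 1000, 200, 264, 0.01

def next_generation(population, fitness):
    # Selection: the 200 highest-scoring genomes "survive".
    ranked = sorted(population, key=fitness, reverse=True)[:SURVIVORS]
    children = []
    while len(children) < POP:
        a, b = random.sample(ranked, 2)
        cut = random.randrange(GENOME_LEN)            # one-point crossover
        child = a[:cut] + b[cut:]
        # Mutation: each bit flips with probability 1/100.
        child = [bit ^ (random.random() < MUT_RATE) for bit in child]
        children.append(child)
    return children

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP)]
population = next_generation(population, fitness=sum)
```

Run for a few hundred generations with the real arena scores as fitness, and behaviours like congregating – or deceiving – can emerge without anyone programming them.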
In a laboratory at Aberystwyth University, Wales, a scientist called Adam is doing some experiments. He is trying to find the genes responsible for producing some important enzymes in yeast, and he is going about it in a very familiar way. Based on existing knowledge, Adam is coming up with new hypotheses and designing experiments to test them. He carries them out, records and evaluates the results, and comes up with new questions. All of this is part and parcel of a typical scientist’s life but there is one important difference that sets Adam apart – he’s a robot.
Adam is the brainchild of Ross King and colleagues at Aberystwyth, who have described it as a “Robot Scientist”. The name is “almost an acronym” for “A Discovery Machine” and it also references Scottish economist Adam Smith and the obvious Biblical character. It has been loaded with equipment and software that allows it to independently design and carry out genetics experiments without any human intervention. And it has already begun to contribute to our scientific knowledge.
In a space the size of a small van, Adam contains a library of yeast strains in a freezer, two incubators, three pipettes for transferring liquid (one of which can manage 96 channels at once), three robot arms, a washer, a centrifuge, several cameras and sensors, and no fewer than four computers controlling the whole lot. All of this kit allows Adam to carry out his own research and to do it tirelessly – carrying out over 1,000 experiments and making over 200,000 observations every day. All a technician needs to do is to keep Adam stocked up with fresh ingredients, take away waste and run the occasional clean.
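The cycle Adam automates – hypothesise, pick the most informative experiment, run it, record the result, repeat – is a closed loop with no human inside it. A loose sketch, in which the hypotheses, the "experiment" and the scoring are all invented placeholders:

```python
# Hypothetical sketch of an automated hypothesise-test loop. Every name and
# value below is a placeholder; only the loop structure reflects the idea.

def run_cycle(hypotheses, experiment, rounds=3):
    knowledge = []
    for _ in range(rounds):
        # Choose the hypothesis whose test is expected to teach us the most.
        best = max(hypotheses, key=lambda h: h["expected_information"])
        result = experiment(best)                  # the robot runs it physically
        knowledge.append((best["claim"], result))  # record and evaluate
        best["expected_information"] = 0           # asked and answered
    return knowledge

hypotheses = [
    {"claim": "gene A encodes the enzyme", "expected_information": 0.9},
    {"claim": "gene B encodes the enzyme", "expected_information": 0.6},
    {"claim": "gene C encodes the enzyme", "expected_information": 0.3},
]
log = run_cycle(hypotheses, experiment=lambda h: "growth restored")
```

The real system's hypothesis generation and experiment selection are far more sophisticated, but the point is the loop: results feed back into the next round of questions automatically.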
The fast and prolific nature of robotic research assistants like Adam will undoubtedly become more and more important. Even now, science finds itself in the odd position of having more data than it knows what to do with. Experimental technology is becoming quicker, cheaper and more powerful and it’s generating a wealth of data that needs to be analysed – think of the flood of information coming in from genome sequencing projects alone. Data are being produced faster than they can be examined, but computers like Adam can play a significant role in coping with this glut.
Moving robots are becoming more and more advanced, from Honda’s astronaut-like Asimo to the dancing Robo Sapien, a perennial favourite of Christmas stockings. But these advances are still fairly superficial. Most robots still move using pre-defined programmes, and making a single robot switch between very different movements, such as walking or swimming, is very difficult. Each movement type would require significant programming effort.
Robotics engineers are now looking to nature for inspiration. Animals, of course, are capable of a multitude of different styles of movement. They have been smoothly switching from swimming to walking for hundreds of millions of years, when our distant ancestors first invaded the land from the sea.
This ancient pioneer probably looked a fair bit like the salamanders of today’s rivers and ponds. On land, modern salamanders walk by stepping forward with diagonally opposite pairs of legs, while their bodies sway about the hips and shoulders. In the water, they use a different tactic: their limbs fold back and they swim by rapidly sending S-like waves down their bodies.
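Both gaits can be caricatured as a chain of body segments driven by sine oscillators: swimming sends a travelling S-wave down the body (a phase lag per segment), while walking uses a standing sway. A hedged sketch – the amplitudes, frequency and phase choices are illustrative, not those of any real salamander or robot:

```python
import math

# Hedged sketch of swimming vs walking body waves using sine oscillators.
# All parameters are invented for illustration.

def body_wave(n_segments, t, phase_lag, amplitude=30.0, freq=1.0):
    """Bend angle (degrees) of each body segment at time t."""
    return [amplitude * math.sin(2 * math.pi * freq * t - i * phase_lag)
            for i in range(n_segments)]

# Swimming: a small phase lag per segment makes an S-wave travel down the body.
swim = body_wave(8, t=0.25, phase_lag=2 * math.pi / 8)
# Walking: adjacent segments half a cycle apart -- a side-to-side standing sway.
walk = body_wave(8, t=0.25, phase_lag=math.pi)
```

Switching gait then amounts to changing one parameter, which is close in spirit to how central pattern generators let animals (and salamander-inspired robots) move smoothly between water and land.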
I am walking strangely. About a week ago, I pulled something in my left ankle, which now hurts during the part of each step just before the foot leaves the ground. As a result, my other muscles are compensating for this to minimise the pain and my gait has shifted to something subtly different from the norm. In similar ways, all animal brains can compensate for injuries by computing new ways of moving that are often very different. This isn’t a conscious process and as such, we often take it for granted.
But we can get a sense of how hard it actually is by trying to program a robot to do the same thing. It’s far from straightforward. Robots have been used for years to perform structured, repetitive tasks and as engineering has advanced, their movements have become more life-like and more stable. But they still have severe limitations, not the least of which is inflexibility in the face of injury or changes to their body shape. If a robot’s leg falls off, it becomes as useful as so much scrap metal.
So for robots, adaptiveness is a desirable virtue, especially if they are to be used in the field. Modern bots can independently develop complex behaviours without any previous programming, but usually this requires trial and error and lots of time. But not always. Josh Bongard and colleagues at Cornell University have developed an adaptable bot that’s programmed to continuously assess its body structure and develop new ways of moving if anything changes.
It differs from other models in that it has no built-in redundancy plans, no strategies for dealing with anticipated problems. It’s simply programmed to examine itself and adapt accordingly. The concept of a robot that can adapt to new situations is often the precursor to nightmare scenarios in many a science-fiction film. So it is fortunate that Bongard’s robot isn’t armed or threatening, but instead looks more like a four-armed starfish.
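The continuous self-assessment idea can be reduced to a loop: keep an internal model of your own body, compare it with what the sensors actually report, and replan when the two disagree. Everything in this sketch is invented – the real robot builds its self-model from motion data, not a leg count – but the loop structure is the point:

```python
# Hypothetical sketch of self-modelling adaptation: plan for the body the
# sensors report, not the body the robot remembers having.

def self_check(model_legs, sensed_legs, gaits):
    """Compare self-model to sensed reality; update the model and replan."""
    if sensed_legs != model_legs:
        model_legs = sensed_legs          # revise the internal self-model
    return model_legs, gaits[model_legs]  # choose a gait for the revised body

# Invented gait table for a four-limbed starfish-like robot.
gaits = {4: "trot with all four limbs", 3: "crawl favouring remaining limbs"}

# The robot believes it has four legs, but its sensors report only three.
model, plan = self_check(model_legs=4, sensed_legs=3, gaits=gaits)
```

Because nothing here anticipates a specific failure, the same loop copes with any change it can sense – which is what distinguishes this approach from built-in redundancy plans.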
With their latest film WALL-E, Pixar Studios have struck cinematic gold again, with a protagonist who may be the cutest thing to have ever been committed to celluloid. Despite being a blocky chunk of computer-generated metal, it’s amazing how real, emotive and characterful WALL-E can be. In fact, the film’s second act introduces an entire swarm of intelligent, subservient robots, brimming with personality.
Whether or not you buy into Pixar’s particular vision of humanity’s future, there’s no denying that both robotics and artificial intelligence are becoming ever more advanced. Ever since Deep Blue trounced Garry Kasparov at chess in 1997, it’s been almost inevitable that we will find ourselves interacting with increasingly intelligent robots. And that brings the study of artificial intelligence into the realm of psychologists as well as computer scientists.
Jianqiao Ge and Shihui Han from Peking University are two such psychologists and they are interested in the way our brains cope with artificial intelligence. Do we treat it as we would human intelligence, or is it processed differently? The duo used brain-scanning technology to answer this question, and found that there are indeed key differences. Watching human intelligence at work triggers parts of the brain that help us to understand someone else’s perspective – areas that don’t light up when we respond to artificial intelligence.