It must be nice to have a car like KITT that can, amongst his many other handy abilities, transform. Sure, it’s useful for crime fighting and all, but being able to turn into a van or a truck means Michael Knight never needs to rent a moving truck or worry about delivery when there’s a big Ikea sale. More to the point, KITT’s ability to rearrange himself at the molecular level means he can transform into any number of car-like shapes, even ones he’s never experienced before. And that means that he, along with his deceased creator Dr. Graiman, has solved the problem of getting an artificial intelligence to use newly added parts. Typically a robot needs a whole new set of code to handle a new tool or sensor. Sure, most computers can handle plug-and-play attachments these days, but they still require pre-written code to drive the newly added part. Artificial intelligence designers want the robot to be able to write that code itself.
At Robert Gordon University in Aberdeen, Scotland, researchers have adapted a technique using artificial neural networks that can help a robot actively evolve an understanding of its own body. A neural net tries to mimic the human brain by using discrete processing centers, known as neurones, and letting them link themselves to accomplish programming goals. Sethuraman Muthuraman, in Aberdeen, wanted to make a robot that could teach itself how to walk, regardless of the configuration of its legs. He started with a torso that had two unjointed legs. The robot used a neural net to evolve the means to walk from one point to another by testing different sets of neurone connections and killing them off if they failed. When the robot solved that task, he attached another leg segment to the robot, essentially giving it two-sectioned legs with knees. The robot used the original neural net program it had already devised and then added additional neurones to solve the problem of the newly jointed legs. In this way, the robot taught itself to walk with its newly enhanced body.
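The evolve-then-grow idea can be sketched in a few lines. This is only an illustration of the general technique, not Muthuraman’s actual code: the `fitness` function here is a toy stand-in for a walking simulation (closer weights to a hidden target “gait” means the robot walked farther), and the genome sizes are arbitrary. The key moves are the same, though: keep the winners, kill off the failures, and when the body gains a joint, keep the evolved genes and bolt new random ones onto them.

```python
import random

random.seed(42)

def fitness(genome, target):
    # Toy stand-in for "distance walked": the closer the connection weights
    # are to a hidden target gait, the farther the robot gets.
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def evolve(population, target, generations=200, sigma=0.1):
    for _ in range(generations):
        ranked = sorted(population, key=lambda g: fitness(g, target), reverse=True)
        survivors = ranked[: len(ranked) // 2]        # kill off the failures
        # Refill the population with mutated copies of random survivors.
        children = [[w + random.gauss(0, sigma) for w in random.choice(survivors)]
                    for _ in range(len(ranked) - len(survivors))]
        population = survivors + children
    return max(population, key=lambda g: fitness(g, target))

# Stage 1: a torso with simple unjointed legs -- a small set of connections.
stage1_gait = [0.5, -0.3, 0.8]
population = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(20)]
best = evolve(population, stage1_gait)

# Stage 2: knees are bolted on. Grow each genome with fresh random genes for
# the new joints, keep the already-evolved ones, and evolve again.
stage2_gait = stage1_gait + [0.2, -0.6]
population = [best + [random.uniform(-1, 1) for _ in range(2)] for _ in range(20)]
best = evolve(population, stage2_gait)
```

The point of the two-stage structure is that stage 2 never starts from scratch: the walking knowledge evolved for the simple body survives the upgrade, and evolution only has to solve the new joints.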
The whole movement toward self-programming machines is both exciting and a bit unnerving to anyone raised on the original Battlestar Galactica or Terminator movies. Hans Moravec, one of robotics’ pioneers and leading thinkers, argues that artificial intelligence evolution is mirroring the evolution of life, only far faster. In a 2003 talk available online he writes:
I see a strong parallel between the evolution of robot intelligence and the biological intelligence that preceded it. The largest nervous systems doubled in size about every fifteen million years since the Cambrian explosion 550 million years ago. Robot controllers double in complexity (processing power) every year or two. They are now barely at the lower range of vertebrate complexity, but should catch up with us within a half century.
He includes an excellent chart comparing the two rates of growth. He argues that the standard-issue G3 Macintosh had the same computing power as a lizard brain, but that it will only be another 20 years before computers with the same computational power as the human brain appear on the market. This is not the same as saying there will be human-intelligent robots at that time, since there’s still a long way to go on the programming and theory of AI. Those, he says, won’t come until 2050.
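The compounding behind Moravec’s comparison is easy to check with only the figures from the quoted passage: one biological doubling every fifteen million years over 550 million years, versus one robot doubling every two years (the conservative end of his range) over the half century he allows for catching up.

```python
# Biological nervous systems: one doubling every 15 million years since the
# Cambrian explosion, roughly 550 million years ago.
bio_doublings = 550e6 / 15e6
print(round(bio_doublings, 1))   # about 36.7 doublings in half a billion years

# Robot controllers: one doubling every two years gives 25 doublings in
# fifty years.
robot_growth = 2 ** (50 / 2)
print(f"{robot_growth:,.0f}")    # a roughly 33-million-fold increase
```

In other words, machines are packing nearly as many doublings into fifty years as biology managed in half a billion, which is the whole force of his “only far faster” point.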