The unparalleled motion and manipulation abilities of soft-bodied animals such as the octopus have intrigued biologists for many years. How can an animal that has no bones transform its tentacles from a soft state to one stiff enough to catch and even kill prey?
A group of scientists and engineers has attempted to answer this question in order to replicate the abilities of an octopus tentacle in a robotic surgical tool. Last week, members of this EU-funded project known as STIFF-FLOP (STIFFness controllable Flexible and Learnable manipulator for surgical OPerations) unveiled the group’s latest efforts.
Technology enhanced with artificial intelligence is all around us. You might have a robot vacuum cleaner ready to leap into action to clean up your kitchen floor. Maybe you asked Siri or Google—two apps using decent examples of artificial intelligence technology—for some help already today. The continual enhancement of AI and its increased presence in our world speak to achievements in science and engineering that have tremendous potential to improve our lives.
Or destroy us.
At least, that’s the central theme in the new Avengers: Age of Ultron movie, with its headliner Ultron serving as an exemplar of AI gone bad. It’s a timely theme, given some high-profile AI concerns lately. But is it something we should be worried about?
Andrea Kuszewski is a behavior therapist and consultant, science writer, and robopsychologist at Syntience in San Francisco. She is interested in creativity, intelligence, and learning, in both humans and machines. Find her on Twitter at @AndreaKuszewski.
Before you read this post, please see “I, Robopsychologist, Part 1: Why Robots Need Psychologists.”
A current trend in AI research involves attempts to replicate a human learning system at the neuronal level—beginning with a single functioning synapse, then an entire neuron, the ultimate goal being a complete replication of the human brain. This is basically the traditional reductionist perspective: break the problem down into small pieces and analyze them, and then build a model of the whole as a combination of many small pieces. There are neuroscientists working on these AI problems—replicating and studying one neuron under one condition—and that is useful for some things. But to replicate a single neuron and its function at one snapshot in time is not helping us understand or replicate human learning on a broad scale for use in the natural environment.
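To make that reductionist starting point concrete, a single artificial neuron really can be written in a few lines. This is a hedged illustration of the textbook abstraction (weighted inputs, bias, nonlinear activation), not any particular lab's biophysical model, which would be far richer:

```python
import math

def neuron(inputs, weights, bias):
    # The reductionist caricature: a weighted sum of inputs plus a bias...
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # ...squashed through a sigmoid, yielding a "firing rate" in (0, 1).
    return 1.0 / (1.0 + math.exp(-total))

# One neuron, one condition, one snapshot in time.
print(neuron([1.0, 0.5], [0.8, -0.4], 0.1))
```

The gap the article points at is visible even here: nothing in this function captures context, development, or learning over a lifespan; those only emerge from how populations of such units are trained and situated.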
We are quite some ways off from reaching the goal of building something structurally similar to the human brain, and even further from building one that actually thinks like a brain. Which leads me to the obvious question: What’s the purpose of pouring all that effort into replicating a human-like brain in a machine, if it doesn’t ultimately function like a real brain?
If we’re trying to create AI that mimics humans, both in behavior and learning, then when we teach these machines we need to consider how humans actually learn—specifically, how they learn best. Therefore, it would make sense that you’d want people on your team who are experts in human behavior and learning. So in this way, the field of psychology is pretty important to the successful development of strong AI, or AGI (artificial general intelligence): intelligence systems that think and act the way humans do. (I will be using the term AI, but I am generally referring to strong AI.)
Basing an AI system on the function of a single neuron is like designing an entire highway system based on the function of a car engine, rather than the behavior of a population of cars and their drivers in the context of a city. Psychologists are experts at the context. They study how the brain works in practice—in multiple environments, over variable conditions, and how it develops and changes over a lifespan.
The brain is actually not like a computer; it doesn’t always follow the rules. Sometimes not following the rules is the best course of action, given a specific context. The brain can act in unpredictable, yet ultimately serendipitous ways. Sometimes the brain develops “mental shortcuts,” or automated patterns of behavior, or makes intuitive leaps of reason. Human brain processes often involve error, which also happens to be a very necessary element of creativity, innovation, and human learning in general. Take away the errors, remove serendipitous learning, discount intuition, and you remove any chance of true creative cognition. In essence, when it gets too rule-driven and perfect, it ceases to function like a real human brain.
To get a computer that thinks like a person, we have to consider some of the key strengths of human thinking and use psychology to figure out how to foster similar thinking in computers.
“My brain is not like a computer.”
The day those words were spoken to me marked a significant milestone for both me and the 6-year-old who uttered them. The words themselves may not seem that profound (and some may actually disagree), but that simple sentence represented months of therapy, hours upon hours of teaching, all for the hope that someday, a phrase like that would be spoken at precisely the right time. When he said that to me, he was showing me that the light had been turned on, the fire ignited. And he was letting me know that he realized this fact himself. Why was this a big deal?
I began my career as a behavior therapist, treating children on the autism spectrum. My specialty was Asperger syndrome, or high-functioning autism. This 6-year-old boy, whom I’ll call David, was a client I’d been treating for about a year at that time. His mom had read a book that had recently come out, The Curious Incident of the Dog in the Night-Time, and told me how much David resembled the main character in the book (who had autism) in his thinking and processing style. The main character said, “My brain is like a computer.”
David heard his mom telling me this, and that quickly became one of his favorite memes. He would say things like “I need input” or “Answer not in the database” or simply “You have reached an error,” when he didn’t know the answer to a question. He truly did think like a computer at that point in time—he memorized questions, formulas, and the subsequent list of acceptable responses. He had developed some extensive social algorithms for human interactions, and when they failed, he went into a complete emotional meltdown.
My job was to change this. To make him less like a computer, to break him out of that rigid mindset. He operated purely on an input-output framework, and if a situation presented itself that wasn’t in the database of his brain, it was rejected, returning a 404 error.
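The input-output framework described above can be caricatured as a fixed lookup table that either returns a canned answer or fails hard. This is a hypothetical sketch for illustration only, with made-up entries; it makes no clinical claim about how any child's mind actually works:

```python
# A caricature of rigid, rule-driven cognition: memorized questions
# mapped to memorized answers, with no generalization in between.
responses = {
    "what is 2 + 2?": "4",
    "what color is the sky?": "blue",
}

def rigid_answer(question):
    # Exact match or error: anything not already in the "database"
    # is rejected outright, just like David's 404.
    return responses.get(question.lower(), "404: answer not in the database")

print(rigid_answer("What is 2 + 2?"))        # memorized, so it works
print(rigid_answer("Why is the sky blue?"))  # novel phrasing fails hard
```

The brittleness is the point: a slight rewording of a known question produces a total failure rather than a graceful, approximate response, which mirrors the meltdown the text describes when David's social algorithms broke down.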