I, Robopsychologist, Part 2: Where Human Brains Far Surpass Computers

By Andrea Kuszewski | February 9, 2012 10:08 am

Andrea Kuszewski is a behavior therapist and consultant, science writer, and robopsychologist at Syntience in San Francisco. She is interested in creativity, intelligence, and learning, in both humans and machines. Find her on Twitter at @AndreaKuszewski.

Before you read this post, please see “I, Robopsychologist, Part 1: Why Robots Need Psychologists.”

A current trend in AI research involves attempts to replicate a human learning system at the neuronal level—beginning with a single functioning synapse, then an entire neuron, the ultimate goal being a complete replication of the human brain. This is basically the traditional reductionist perspective: break the problem down into small pieces, analyze them, and then build a model of the whole as a combination of many small pieces. There are neuroscientists working on these AI problems—replicating and studying one neuron under one condition—and that work is useful for some purposes. But replicating a single neuron and its function at one snapshot in time does little to help us understand or replicate human learning on the broad scale needed in the natural environment.

We are quite some ways off from reaching the goal of building something structurally similar to the human brain, and even further from having one that actually thinks like one. Which leads me to the obvious question: What’s the purpose of pouring all that effort into replicating a human-like brain in a machine, if it doesn’t ultimately function like a real brain?

If we’re trying to create AI that mimics humans, in both behavior and learning, then we need to consider how humans actually learn—and specifically, how they learn best—when we set out to teach machines. It would therefore make sense to want people on your team who are experts in human behavior and learning. In this way, the field of psychology is pretty important to the successful development of strong AI, or AGI (artificial general intelligence): intelligence systems that think and act the way humans do. (I will be using the term AI, but I am generally referring to strong AI.)

Basing an AI system on the function of a single neuron is like designing an entire highway system based on the function of a car engine, rather than the behavior of a population of cars and their drivers in the context of a city. Psychologists are experts in that context. They study how the brain works in practice, across multiple environments and under variable conditions, and how it develops and changes over a lifespan.

The brain is actually not like a computer; it doesn’t always follow the rules. Sometimes not following the rules is the best course of action, given a specific context. The brain can act in unpredictable, yet ultimately serendipitous ways. Sometimes the brain develops “mental shortcuts,” or automated patterns of behavior, or makes intuitive leaps of reason. Human brain processes often involve error, which also happens to be a necessary element of creativity, innovation, and human learning in general. Take away the errors, remove serendipitous learning, discount intuition, and you remove any chance of any true creative cognition. In essence, when a system gets too rule-driven and perfect, it ceases to function like a real human brain.

To get a computer that thinks like a person, we have to consider some of the key strengths of human thinking and use psychology to figure out how to foster similar thinking in computers.

Why Is a Human-like Brain So Desirable?

One of the great strengths of the human brain is its impressive efficiency. There are two types of systems for thinking or knowledge representation: implicit and explicit, sometimes described as “System 1” and “System 2” thinking.

System 1, or the implicit system [PDF], is the automated and unconscious system, based in heuristics, emotion, and intuition. This is the system used for generating the mental shortcuts I mentioned earlier. System 2, or the explicit system, is the conscious, logic- and information-based system, and the type of knowledge representation most AI researchers use. These are the step-by-step instructions, the system that stores every possible answer and has it readily available for computation and matching.
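
To make that contrast concrete, here is a deliberately tiny sketch in Python. It is purely my own illustration (every name and function in it is made up for this post, and it is not drawn from any of the research I discuss), but it captures the flavor of the two styles: an explicit knowledge store that can only answer questions it has already been given, next to an implicit heuristic that generalizes, imperfectly, from examples.

```python
# Toy contrast between explicit and implicit knowledge representation.
# Purely illustrative; not based on any system discussed in this post.

# Explicit ("System 2"-style): every answer is stored in advance and matched directly.
explicit_knowledge = {
    "2 + 2": "4",
    "capital of France": "Paris",
    # ...every question we ever expect to see must be enumerated here.
}

def explicit_answer(question):
    """Return a stored answer, or fail if the question was never programmed in."""
    return explicit_knowledge.get(question, "no stored answer")

# Implicit ("System 1"-style): a rough hunch generalized from past examples,
# with no rule that spells out the answer for any particular input.
positive_words = {"great", "good", "wonderful"}
negative_words = {"awful", "bad", "terrible"}

def implicit_hunch(word):
    """Crude heuristic: guess the feel of a new word from words seen before."""
    score = sum(seen in word for seen in positive_words)
    score -= sum(seen in word for seen in negative_words)
    if score > 0:
        return "feels positive"
    if score < 0:
        return "feels negative"
    return "no strong feeling"

print(explicit_answer("capital of France"))  # exact match -> "Paris"
print(explicit_answer("capital of Spain"))   # never stored -> "no stored answer"
print(implicit_hunch("goodness"))            # never seen, but "feels positive"
```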

There are advantages to both systems, depending on the task. When accuracy is paramount and you need to consciously think your way through a detailed problem, the explicit system is more useful. But sometimes being conscious of every single move and thought in the process of completing a task makes it less efficient, or even downright impossible.

Consider a simple human action, such as standing up and walking across the room. Pretty effortless, right? Now imagine if you were conscious (explicit system) of every single muscle activation, shift of balance, and movement, and had to judge every distance and calculate every amount of force along the way. You would be mentally exhausted by the time you crossed half the distance. I’m exhausted just thinking about it. You wouldn’t be doing it very gracefully, either. When actually walking, the brain’s implicit system takes over, and you stand up and walk with barely a thought as to how your body is making that happen on a physiological level.

Now imagine programming AI to stand up and walk across the room. You need to instruct it to do every single motion and action that it takes to complete that task. There is a reason why it is so difficult to get robots to move as humans do: the implicit system is just better at it. The explicit system is a resource hog—especially in tasks that ask machines to replicate actions that are automated in humans.

Now consider the act of thinking, or generating an answer to a problem. Say you were handed every possible answer in a list, but to answer the question you had to go through that entire list, no matter how long it was, comparing each candidate to the question until you came upon the correct solution. You would probably be quite accurate with your answers using this method, but it would take a very long time. Your brain intuitively knows there’s a better way—sometimes you may just have a hunch that turns out to be correct, or figure it out after trying only a couple of potential solutions rather than all of them.
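
As a rough illustration of that difference, consider this toy Python sketch. It is hypothetical, written only for this post and not modeled on any particular AI system: both functions find the same answer, but one checks every candidate in order while the other lets a hunch decide what to try first.

```python
# Toy illustration of exhaustive matching vs. hunch-guided search.
# Hypothetical example for this post; not anyone's actual system.

candidates = ["zebra", "yacht", "walnut", "violin", "apple"]

def matches(question, answer):
    """Stand-in test for 'is this candidate the correct solution?'"""
    return answer.startswith(question)

def exhaustive_search(question):
    """'Explicit' style: check every candidate in order, however long the list."""
    checks = 0
    for answer in candidates:
        checks += 1
        if matches(question, answer):
            return answer, checks
    return None, checks

def hunch_first_search(question, hunch):
    """'Implicit' style: try the most promising candidates first."""
    checks = 0
    for answer in sorted(candidates, key=hunch):
        checks += 1
        if matches(question, answer):
            return answer, checks
    return None, checks

question = "app"
# Both find "apple"; the hunch (alphabetical closeness to the question)
# just gets there after far fewer comparisons.
print(exhaustive_search(question))
print(hunch_first_search(question,
                         hunch=lambda answer: abs(ord(answer[0]) - ord(question[0]))))
```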

AI systems that use the explicit system of computation get around this time issue with sheer speed. The reasoning goes: if the AI can run through all those possible solutions fast enough, it can match the speed of human thought processing and thus make the machine more human-like. The biggest problems with these systems are the resource cost and the fact that this is simply not how humans think, so their application is limited.

But what if you could teach AI to operate using the implicit system, based on intuition, rather than having to run through endless computations to come up with a single solution?

AI: Artificial Intuition

Getting AI to use intuition-based thinking would truly bring us closer to real human-like machines. In fact, some researchers are working on this technology right now. Monica Anderson, founder, CEO, and lead researcher at Syntience Labs, has been working on an AI learning process called artificial intuition, which aims to teach machines how to think like humans. This system is learning-based—no pre-programmed knowledge or rules. It receives novel information, processes it, extracts the relevant bits, and uses that knowledge to build toward the next solution. For example, artificial intuition is currently being used to understand semantics in language. The computer has no dictionary of words to compare the text to, just the text itself—it derives the meaning from the context, understanding the language. By going about things in this fashion, it has the ability to learn, not just to compute faster.
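
Anderson’s approach is proprietary, and I am not going to try to reproduce its internals here. But to give a rough flavor of what “deriving meaning from context, with no dictionary” can look like in general, here is a minimal sketch of a standard distributional (co-occurrence) technique in Python. Everything in it is my own illustrative assumption; it is not Syntience’s artificial intuition.

```python
# A minimal sketch of relating words by the company they keep, using nothing
# but raw text (no dictionary). This is a generic distributional technique,
# NOT Syntience's artificial intuition, whose actual method is not described here.
from collections import Counter, defaultdict
from math import sqrt

text = ("the cat sat on the mat the dog sat on the rug "
        "the cat chased the dog the dog chased the cat").split()

# Count which words appear near each other within a small window.
window = 2
contexts = defaultdict(Counter)
for i, word in enumerate(text):
    for j in range(max(0, i - window), min(len(text), i + window + 1)):
        if j != i:
            contexts[word][text[j]] += 1

def similarity(a, b):
    """Cosine similarity between two words' context profiles."""
    va, vb = contexts[a], contexts[b]
    dot = sum(count * vb[w] for w, count in va.items())
    norm_a = sqrt(sum(c * c for c in va.values()))
    norm_b = sqrt(sum(c * c for c in vb.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# "cat" and "dog" come out as similar because they occur in similar contexts,
# even though the program was never told what either word means.
print(round(similarity("cat", "dog"), 2))
print(round(similarity("cat", "on"), 2))
```

Real context-based systems are far more sophisticated than this, but the principle is the same: the relationships come out of the text itself rather than out of pre-programmed rules.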

Future Directions

Artificial intuition is very different in theory from most of the other AI research being done. Because it mimics the way humans actually learn, it can be used to develop the types of thinking systems that humans are currently better at, ones that use the implicit system. No one had been able to do this successfully until now. What did Monica Anderson do differently? She was one of the few forward-thinking researchers who recognized from the very beginning the importance of psychology to developing human-like AI, and she has always had psychologists both on her board of advisors and on staff.

An AI with the ability to think intuitively opens the door to all kinds of new developments in replicating human-like abilities, but the one I’m most excited about is one that has eluded AI researchers for years: creativity. Many have attempted to engineer creative behaviors, such as getting a computer to paint or write music, but no one has figured out how to successfully engineer creative cognition. I think we are now rapidly approaching this possibility.

Will we ever have AI that is truly intelligent—learning, thinking, and feeling, just as humans do? Possibly. The sentient, human-like robots and machines we see in the movies are still a ways off from reality. When that time comes, the field of robopsychology will likely expand and focus more on ethics and morality, as well as learning and emotion. But while we aren’t quite there yet, I do know that the fields of robopsychology and AI psychology will play a critical role in a future that includes truly intelligent, human-like machines.

 

References:

Implicit Learning as an Ability by Kaufman, DeYoung, Gray, Jimenez, Brown, and Mackintosh
The Creativity of Dual Process “System 1” Thinking by Scott Barry Kaufman, ScientificAmerican.com
Reduction Considered Harmful by Monica Anderson, Hplusmagazine.com
The Reticular Activating Hypofrontality (RAH) Model of Acute Exercise by Dietrich and Audiffren

  • Anonymous

    Interesting developments, thanks for the article. In your opinion, what year will we have a human-like artificial brain?

  • Al Cibiades

    While from the perspective of evolutionary biology much of this (and the notes on the Syntience site) is assumptive without basis, there is an enticing hint of a different kind of understanding. Most of what I have seen here seems to start with the assumption of, if not the positronic brain itself, then the comprehension of consciousness as a deliverable entity from which one works backward to explain the proprietary IP of Syntience.
    Since Monica Anderson started Syntience on a shoestring, some PR hype is to be expected to develop funding sources, so I am lenient on the exact content thus far delivered. However, as yet I cannot tell if these are the ideas of Tesla or Fleischmann–Pons.
    I welcome the interaction and insight of Greg Fish to this discussion.

  • http://daedalus2u.blogspot.com/ Dave Whitlock

    I can tell; they are Tesla-like.

    We already know that assemblies of matter can generate biological intelligence (so the analogy to Fleischmann–Pons is not apt). There is no datum that indicates that there is anything special about brains that cannot be replicated in other substrates.

  • Gordon Wells

    I recently read “Inside Jokes” by Hurley and co. If I understood it right, everything is actually system 1, but for the heavy lifting of modes of thinking represented by system 2, the brain is driven by “epistemic” emotions. One of the book’s predictions, which seems to resonate with this post, is that true AI will have to be driven by emotions, not merely simulate them.

  • Brian Too

    Re: “I think we are now rapidly approaching this possibility.”

    Sure we are. And in 20 years we will still be rapidly approaching, and 20 years after that, and 20 years after that.

    AI researchers have no shortage of hubris and this shows the pattern continuing. Anderson is going to deploy “Artificial intuition” without any firm definition, with no noticeable methodology, and with no reportable results. Ah, it’s because it’s “intuition”, so I guess we don’t need any of those things! We can intuit them into the machine!

    Let’s talk about something substantive. IBM’s Watson is one of the most successful AI projects in years and is truly impressive. And yet, I submit, it barely resembles human/sentient thinking methods and strategies at all. It is an invented technology, created out of whole cloth. It was done without reference to a biological model.

    The reason is simple. We still have a laughably simple understanding of consciousness and intelligence.

    We will understand one day, I’m sure. That day is not close though.

  • Al Cibiades

    >Dave W – I can see by your blog, Daedalus, that you would share a commonality of thinking with the presentation of this thesis.
    But “assemblies of matter can generate biological intelligence” means what? Biological assemblies (organisms) can be intelligent? I take that as self-evident.
    Nothing unique in the brain which cannot be replicated in other substrates? I thought one of the themes of Monica Anderson (Syntience) was that the whole CAN be greater than the sum of its parts. Suffocate the organism and it dies – how to re-animate it, as it is still the same substrate? Yet NOT the same.
    The program ELIZA demonstrated that what we THINK of as intelligence can be played well by artifice. The point being that “intelligence” is a slippery term, indeed.

  • Daniel

    Computers will need the ability to have varying degrees of memory. Important, repeated and/or recent things are strong, and some things are forgotten.
    To be more human-like, a computer will need to be able to grow. The bad news is that it can’t. Brains change over time as connections are broken or formed, but a computer can’t do this. AI in machines will never happen. Only real intelligence in a biological organism can be achieved, through genetic engineering to create brains similar to our own in animals.

  • Pingback: the amazing intuitive, understanding a.i.? » weird things

  • http://worldofweirdthings.com Greg Fish

    A current trend in AI research involves attempts to replicate a human learning system at the neuronal level—beginning with a single functioning synapse, then an entire neuron, the ultimate goal being a complete replication of the human brain.

    Not at all. Certainly neurons are an important thing to study for any AI system because they make up the brain so studying them to understand how the brain can come together is as important to AI research as studying subatomic components is to a particle physicist. But we don’t actually want to replicate the brain neuron by neuron because then we’d be dealing with hundreds of billions of them without any guarantee of how well they’ll actually come together. Each person’s brain is wired differently and so should each synthetic cognitive entity’s. When dealing with large enough artificial neural networks designed to tackle a complex enough problem, you’re going to see marked differences in the weights between neurons across multiple systems designed to tackle the same problem. And we’re fine with that despite what Anderson supposes.

    Basing an AI system on the function of a single neuron is like designing an entire highway system based on the function of a car engine, rather than the behavior of a population of cars and their drivers in the context of a city.

    And that’s why no one actually does that. Code only makes sense in context. How to tackle a problem only makes sense in a context. You’re doing the equivalent of saying that plumbers just lay pipe with no concern for the house in which they’re doing it while right behind you the very same plumbers are trying to figure out how to best fit the pipes between beams and studs to keep it out of the homeowner’s way. Neurons are just components. We plug them into a system, give it a problem to tackle and see what happens as it learns. That’s been the standard approach for decades.

    Take away the errors, remove serendipitous learning, discount intuition, and you remove any chance of any true creative cognition.

    Define errors. When we talk about errors in the AI world, we mean that a machine meant to solve a maze is stuck and can’t get out. We talk about a device that calculates 2 + 2 as “lightly smoked haddock.” There have been numerous experiments in which robots arrived at novel solutions to problems such as chasing another robot or developing a strategy to recharge themselves. Those are not considered errors. They’re actually considered to be significant breakthroughs, appear in papers, and cause “oohs” and “ahhs” in grad school comp sci classes.

    Now imagine programming AI to stand up and walk across the room. You need to instruct it to do every single motion and action that it takes to complete that task.

    No you don’t. You set up the basic environmental inputs as variables and goad it into standing up and walking across the room step by step, building on every successful trial. Considering that you’ll have to calculate the weight of motors, the power of the actuators, how the machine will behave when walking, etc., it’s just easier to have the machine do its own calculating. Again, this is now fast becoming the norm, and there’s been a small explosion of papers since 2005 trying to figure out the best ways to teach robots and robot swarms how to move. You may want to familiarize yourself with the work of Josh Bongard for starters. There are also three or four ongoing projects on self-learning robot locomotion running since 2002, which have generated more than 100 papers on the subject combined.

    But what if you could teach AI to operate using the implicit system, based on intuition, rather than having to run through endless computations to come up with a single solution?

    And again, single-solution AIs are generally used in high-level undergraduate AI courses to help students get the hang of how to use and program ANNs. Most AI labs are focused on using that reductionist model to induce emergent behavior, such as being able to tell which group of objects is bigger than another while trying instead to learn counting (Stoianov, Zorzi, doi:10.1038/nn.2996), or successfully finding an unexpected way for the robot to move efficiently (Bongard, Zykov, Lipson, DOI: 10.1126/science.1133687). If you confine the domain space to a single solution, of course you’ll lose emergence. But yet again, we don’t do that. We broaden the domain to give interesting things a chance to happen.

    What Anderson is doing is hacking away at a strawman, presenting AI researchers as stuck with their noses in code, programming robots like they’d program an enterprise system. But in reality, what she calls “intuitive” AI has been the standard research model for more than a decade now, built and programmed by those reductionist, linear coders. I don’t know if she’s doing this to promote her lab, or if she really thinks that AI people have no clue what they’re doing while she arrived at conclusions similar to theirs about 12 to 15 years after they started implementing them and, because she hasn’t kept up with the literature, doesn’t know it.

  • Kirk

    A reductionist argument is existential — as soon as I have working AI I will be able to reverse engineer it (and it may be made of a million tiny robots). My crystal ball works as well as yours.

  • http://daedalus2u.blogspot.com/ Dave Whitlock

    Al Cibiades, living flesh and dead flesh are very different substrates.

    It isn’t that intelligent biological entities are “self-evident”; that they exist is “data”, an example.

    My understanding is that what Monica Anderson is trying to achieve is emergent properties of assemblies of elements without those properties. Sort of like how an object can be built out of Legos. The properties of the object are not contained in the Legos; they are an emergent property of how the Legos are put together.

  • Ramona Patrick

    Brain research is absolutely fascinating to me. As an educator who has been in the field for over 30 years, I am always interested in what is going on inside the heads of my students.
    At the same time, however, I am constantly amazed by their never-ending strategies for problem solving and task avoidance. I really had a high opinion of myself during my high school years, but I would have much to learn to keep pace with today’s scholar. I believe that is the essence of where this AI will fail. AI may be able to do many things, but at its soul level, it will still be a machine. I really wish that all AI monies expended on creating the one thing humans are achieving quite successfully without science (creating human brains) would alter their focus and join efforts with those who are diligently seeking to rid this world of the horrible maladies of the brain that strip meaningful life from humans who are already here.

  • Al Cibiades

    >Dave – Not to put too fine a point on it but “dead” is a relative term — The cardiac catheter patient “dead” for 90 seconds but who regains consciousness/life; the child frozen in a lake for 30 minutes – drowned “dead”, but resuscitated… Not sure where on the continuum this reaches (and I am not defending spiritualism), but “while there is more than is dreamt of” in my philosophies, implied catchphrases do not an edifice make. Intelligence in humans is “self-evident” simply because that is how “we” define it. Are dogs, cats, whales, chimpanzees, spiders, planaria, rats, etc…intelligent? Now the definition reprises itself.

  • http://daedalus2u.blogspot.com/ Dave Whitlock

    Al Cibiades, dead is not a relative term. ATP levels below what is necessary to sustain viability is irreversible death.

    You say intelligence is self-evident. If you can’t tell about rats and spiders, then what about intelligence is “self-evident”?

    I say verification of the existence of intelligence in biological entities is a matter of data, which requires instruments and measurement. It is important to try and avoid artifice, which is why calling things “self-evident” is poor form. Many things are in the eye of the beholder. If you want to study something and recreate it, subjective definitions are less useful than objective definitions.

  • Bill Ries

    Let’s not forget about the required human psychology. When machines can think and feel as humans do, it becomes a moral requirement to treat them like humans. It is difficult to imagine the average person thinking of a robot as being more than a machine and deserving of human rights. I can imagine most people caring more about the treatment of a lower-order mammal than that of a high-level machine, even if the robot has far more advanced emotions. Do we really want them to think and feel as humans do? It is the emotional state of a being that makes them human, not flesh and blood. At what emotional level does “Robota” go from the Czech word for slave to actual slavery?
