I, Robopsychologist, Part 1: Why Robots Need Psychologists

By Andrea Kuszewski | February 7, 2012 1:38 pm

Andrea Kuszewski is a behavior therapist and consultant, science writer, and robopsychologist at Syntience in San Francisco. She is interested in creativity, intelligence, and learning, in both humans and machines. Find her on Twitter at @AndreaKuszewski.

“My brain is not like a computer.”

The day those words were spoken to me marked a significant milestone for both me and the 6-year-old who uttered them. The words themselves may not seem that profound (and some may actually disagree), but that simple sentence represented months of therapy, hours upon hours of teaching, all for the hope that someday, a phrase like that would be spoken at precisely the right time. When he said that to me, he was showing me that the light had been turned on, the fire ignited. And he was letting me know that he realized this fact himself. Why was this a big deal?

I began my career as a behavior therapist, treating children on the autism spectrum. My specialty was Asperger syndrome, or high-functioning autism. This 6-year-old boy, whom I’ll call David, was a client of mine that I’d been treating for about a year at that time. His mom had read a book that had recently come out, The Curious Incident of the Dog in the Night-Time, and told me how much David resembled the main character in the book (who had autism), in regards to his thinking and processing style. The main character said, “My brain is like a computer.”

David heard his mom telling me this, and that quickly became one of his favorite memes. He would say things like “I need input” or “Answer not in the database” or simply “You have reached an error,” when he didn’t know the answer to a question. He truly did think like a computer at that point in time—he memorized questions, formulas, and the subsequent list of acceptable responses. He had developed some extensive social algorithms for human interactions, and when they failed, he went into a complete emotional meltdown.

My job was to change this. To make him less like a computer, to break him out of that rigid mindset. He operated purely on an input-output framework, and if a situation presented itself that wasn’t in the database of his brain, it was rejected, returning a 404 error.

In the course of therapy, David opened himself up to new ways of looking at problems and deriving solutions. When I asked him a question he had never been exposed to, it forced him to think about the content of my question itself, not just search the database of his brain for the correct response that he had memorized. When he failed to come up with an appropriate answer to a problem (which happened quite a bit initially), we discussed the reasons why it was wrong. Then, after a series of inappropriate attempts at a solution, he started to see the patterns that made up “possible correct responses” and the varying degrees of correctness, as well as the “always incorrect answers” and the “sometimes correct, sometimes incorrect, but depends on context”.

He was no longer operating on a pure input-output or match-to-sample framework; he was learning how to think. So the day he gave me a completely novel, creative, and very appropriate response to one of my questions, followed by those simple words, “My brain is not like a computer,” it was pure joy. He was learning how to think creatively. Not only that, but he knew the answer was an appropriate, creative response, and that right there—the self-awareness of his mental shift from purely logical to creative—was a very big deal.

My experience teaching children with autism to think more creatively led me to reverse-engineer the learning process itself, recognizing all the components necessary for both creativity and increased cognitive ability. There is a difference between memorizing question-answer sets and really learning how to solve a novel problem. There are times when memorization or a linear approach to problem-solving is appropriate, but there are other times when creative problem-solving and problem-finding are needed. And in order to teach a linear circuit how to think creatively, you absolutely must have a good understanding of the learning process itself, as well as how to reach those ends successfully.

The time I spent making humans “less like robots” made me start thinking about how this learning paradigm could be applied to actual robots and thinking machines. In order to create artificial intelligence (AI) that can actually think like a human, you need to teach it to learn like humans do. That brought me to my current interest—and job—in robopsychology, or AI psychology.

 

Robopsychology: Bridging Humanity and Technology

Robopsychology, loosely defined as the study of the personality and behavior of intelligent machines, was first made popular by sci-fi writer Isaac Asimov in his Robot series of short stories. Susan Calvin—the very first robopsychologist—was a character who appeared in several of Asimov’s works. Dr. Calvin was the expert on human nature, the nature of machines, and their ultimate intersection: artificial intelligence.

Robots and human-like machines are gaining popularity in many diverse fields, for a wide variety of uses. The more they resemble actual human thinking and behavior, the more useful they can be. They are being used for teaching, companionship, therapy, and even entertainment. For example, Heather Knight, a social roboticist, is teaching her robot Data how to do stand-up comedy, and the pair routinely performs in public. In order for a robot to be successful at this very human-like task, understanding human behavior (and humor) is extremely helpful.

What Does a Robopsychologist Actually Do?

Similar to the way we have a variety of psychology professionals who deal with the spectrum of human behavior, there is a range of specialties and duties for robopsychologists as well. Unfortunately, not every robopsychologist is a modern-day Susan Calvin (although it does sound pretty sweet, and could make for fun conversation at parties). But in reality, depending on the type of machines being developed and for what purpose, the duties and skills of robopsychologists could vary quite a bit, just as those of practitioners in human psychology do.

Some examples of the potential responsibilities of a robopsychologist:

  • Assisting in the design of cognitive architectures
  • Developing appropriate lesson plans for teaching the AI targeted skills
  • Creating guides to help the AI through the learning process
  • Addressing any maladaptive machine behaviors
  • Researching the nature of ethics and how it can be taught and/or reinforced
  • Creating new and innovative therapy approaches for the domain of computer-based intelligences

In the work I do, there is a constant back-and-forth between robopsychology and human psychology—a mutually beneficial relationship of teaching and learning from each other. For example, at Syntience, the AI research lab that I’ve recently joined, we are working on developing AI that can understand the semantics of natural language. In order to do this, the AI must first be taught—much in the same way humans are taught.

A baby is born without a database of facts. It is in some ways a blank slate, but also (don’t worry, Steven Pinker fans) has a genetic code that acts as a set of instructions on how to learn when exposed to new things. In the same way, our AI is born completely empty of knowledge, a blank slate. We give it an algorithm for learning, then expose it to the material it needs to learn (in this case, books to read) and track progress. If children are left to learn without any assistance or monitoring for progress, over time, they can run into problems that need correcting. Because our AI learns in the same fashion, it can run into the same kinds of problems. When we notice that learning slows, or the AI starts making errors—the robopsychologist will step in, evaluate the situation, and determine where the learning process broke down, then make necessary changes to the AI lesson plan in order to get learning back on track.
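To make the monitoring loop concrete, here is a minimal toy sketch of the idea—track error rates lesson by lesson and flag the point where learning stalls so a human can revise the lesson plan. It is purely illustrative: `ToyLearner`, the error threshold, and the stalling rule are all invented for this example and have nothing to do with Syntience's actual system.

```python
class ToyLearner:
    """Stand-in for the AI: its per-lesson error rate is scripted, purely for illustration."""
    def __init__(self, error_rates):
        self._rates = iter(error_rates)

    def study(self, lesson):
        return next(self._rates)  # fraction of errors made on this lesson


def lessons_needing_review(learner, lessons, error_threshold=0.2, patience=3):
    """Flag a lesson when the last `patience` error rates all exceed the threshold."""
    flagged, history = [], []
    for lesson in lessons:
        history.append(learner.study(lesson))
        stalled = len(history) >= patience and all(
            e > error_threshold for e in history[-patience:]
        )
        if stalled:
            flagged.append(lesson)  # hand this lesson back for human review
            history.clear()         # restart tracking after the intervention
    return flagged
```

The interesting decision is not the bookkeeping but what happens at the flag: a human looks at *why* learning broke down and changes the curriculum, which no error counter can do on its own.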

Likewise, we can also use the AI to develop and test various teaching models for human learning tasks. Let’s say we wanted to test a series of different human teaching paradigms for learning a foreign language. We could create a different learning algorithm based on each teaching model, program one into each AI, then test for efficiency, speed, retention, generalization, etc.
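A comparison harness of the kind described above might look like the following sketch. The two "teaching models" (massed drilling vs. spaced interleaving) and the toy memory model are stand-ins I made up for illustration, not real learning algorithms or anything the author describes using.

```python
def massed(items, passes=3):
    """Teaching model A: drill each item repeatedly before moving on."""
    return [item for item in items for _ in range(passes)]

def spaced(items, passes=3):
    """Teaching model B: interleave the items across repeated passes."""
    return [item for _ in range(passes) for item in items]

def retention(schedule, fade=0.05):
    """Toy memory model: studying an item boosts it; unstudied items fade each step."""
    strength = {}
    for item in schedule:
        for known in strength:
            if known != item:
                strength[known] *= (1 - fade)
        strength[item] = strength.get(item, 0.0) + 1.0
    return sum(strength.values()) / len(strength)

# Run each teaching model through the same learner and compare retention scores.
vocab = ["hola", "gato", "libro"]
scores = {model.__name__: retention(model(vocab)) for model in (massed, spaced)}
```

The same harness shape extends to the other measures mentioned (speed, generalization): each becomes another scoring function run over each model's schedule.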

In I, Robot, the blockbuster movie adapted from Asimov’s robot stories, Calvin sums up her job description by saying, “I make robots seem more human.” If I had to sum up my main goal as a robopsychologist, it would be “to make machines think and learn like humans,” and ultimately, replicate creative cognition in AI. Lofty goal? Perhaps. Possible? I believe it is. I’ll be honest, though—I haven’t always thought this way.

The main reason for my past disbelief is because most of the people working on AI discounted the input of psychology. They erroneously thought they could replicate humanity in a machine without actually understanding human psychology. Seems like a no-brainer to me: If you want to replicate human-like thinking, collaborate with someone who understands human thinking on a fundamental and psychological level, and knows how to create a lesson plan to teach it. But things are changing. The field of AI is finally, slowly starting to appreciate the important role psychology needs to play in their research.

Robopsychology may have started out as a fantasy career in the pages of a sci-fi novel, but it illustrated a very smart and useful purpose. In the rapidly advancing and expanding field of artificial intelligence, the most forward-thinking research labs are beginning to recognize the important—some even say critical—role psychology plays in the quest to engineer human-like machines.

 

Andrea Kuszewski’s exploration of robopsychology will soon continue in “I, Robopsychologist, Part 2: Why We Want Robots That Think Like Humans.”

 

References:

Implicit Learning as an Ability by Kaufman, DeYoung, Gray, Jiménez, Brown, and Mackintosh
The Creativity of Dual Process “System 1” Thinking by Scott Barry Kaufman, ScientificAmerican.com
Reduction Considered Harmful by Monica Anderson, HPlusMagazine.com
The Reticular Activating Hypofrontality (RAH) Model of Acute Exercise by Dietrich and Audiffren

  • araraazul

    Does the author regret trying to alter the child’s way of thinking so that he conforms to expected behavior? I don’t see anything wrong with a child being fascinated by the computer inside him.

  • Eduardo Corona

    Fascinating article: I am especially drawn to two topics:

    The first, how we can gain understanding about creativity, thinking and consciousness via two fronts: studying the human brain and making computers learn and behave in human-like ways.

    The second (and perhaps more in tune with my field of expertise) is that in computer science and applied mathematics, “learning” is often achieved through statistical algorithms that incorporate training with data and interacting with the world. I sense that in general the original paradigm was to automate this process completely: what you indicate as “leaving the child/robot unsupervised.” However, I think in vision and other areas this is shifting towards including human feedback (crowdsourcing, or in your case an expert’s analysis).

    @araraazul – from my reading of the story, the problem wasn’t that the child was fascinated by the “computer inside him”, but that he was stunting his ability to think creatively, to function in ways *other* than those of a simple input-output machine.

  • Sneeral

    Then there is the real possibility that this bright young boy learned that the proper response to his therapist in that situation was “My brain is not a computer” and simply added it to his database.

  • IW

    “Andrea Kuszewski’s exploration of robopsychology will soon continue in “I, Robopsychologist, Part 2″

    You mean robots come in parts and we have to assemble them? Isn’t there a robot that can do that?

  • http://worldofweirdthings.com Greg Fish

    Maybe this is because I program computers (and yes, I worked with artificial neural networks), but I found the description of a robopsychologist to be extremely confusing. Here’s why…

    “A baby is born without a database of facts.”

    If its “database” is a blank slate, how does it know to cry when it’s hungry or unhappy or just wants your attention? Why does it want to pick things up, put them in its mouth, and play with them when it gets the strength to do so? Why does it then try to crawl and walk? On the flip side, cognitive computing doesn’t work on a database of facts. It works by using a set of probabilistic connections to calculate the proper response to an input rather than reaching into a database of facts (though I’m trying to find out whether a hybrid model of ANN/DB will work with some experiments), which brings us to…

    “When we notice that learning slows, or the AI starts making errors—the robopsychologist will step in, evaluate the situation, and determine where the learning process broke down, then make necessary changes to the AI lesson plan in order to get learning back on track.”

    Why would we call in a robopsychologist when we have a mathematical formula that does the exact same job? In ANNs we use backpropagation to adjust which artificial neurons fire until we get the right response. True, there is the local minima problem, but we just reset the seed values for the inputs and try again until we get the error rate down to acceptable levels. Feedforward networks are even easier because the formula to correct them doesn’t involve Sigmoid functions, it’s just a statistical balancing act between the artificial neurons.

    “The main reason for my past disbelief is because most of the people working on AI discounted the input of psychology. They erroneously thought they could replicate humanity in a machine without actually understanding human psychology.”

    I know of no one currently in the AI field who seriously wants to replicate humans in machine form, much less someone who thinks it can be done and that it can be done with no assistance from someone who understands how human minds and brains work. That doesn’t mean that such people don’t exist of course, but that sentence makes it sound that the AI field thinks it can spawn human minds on computers while very few researchers would harbor such illusions after 8 or 12 years of education on the subject.

    And here’s the big problem with the entire premise of robopsychology for me. We need to have a psychologist for humans because we don’t really know how our cognitive processes work to a tee, and the sheer amount of what’s going on in our minds, and how our hormones and neurotransmitters affect our behavior as their levels vary, adds an immense level of complexity to our thoughts and actions. We can’t just look into a human brain and know exactly what’s going on at any given time. However, computers have no hormones or emotions, and we can look at exactly what’s going on inside their minds on a neuron-by-neuron level.

    Debugging artificial neural networks in AI would be like a therapist being able to pause a human’s cognition at will and look at how every neuron fires, where, why, and what effect it has at her own pace, with access to every bit of information she can think of to help her find the problem. With that level of control, why would I need someone to behaviorally train my AI if I can just tell it what to do, and again, why would I want a machine to think like I do if I need it to solve problems I don’t have the time or the capacity to solve in a timely and accurate manner? But since it will be your next post, I may be jumping the gun there…

  • Kirk

    I, for one, welcome our well-adjusted, self-actualized robot overlords.

  • Andrea Kuszewski

    @ Sneeral: I didn’t get into too much background detail for this short piece, but while I understand what you mean by the possibility of him “adding the phrase to his database”, that isn’t what happened in this case. Over the course of his therapy, during times that he couldn’t come up with the answer and would begin getting upset, I explained to him that his brain was _not_ like a computer, and that there are other ways to find out answers. Because to his mind, computers always followed explicit instructions and had the answers given to them in some way, and I explained to him that’s not how humans work. Sometimes we need to be a little creative, and it’s ok to come up with a new answer, as long as it fits the parameters of the question.

    You see, I _wanted_ him to “break the rules” or to give me a completely new answer, because that would show me he was truly learning. The day he thought up a creative answer and said to me “My brain is not a computer”, it was his way of telling me, “Hey, did you notice? This is a new, creative answer, and I thought of it on my own!” The phrase itself may have been a little ‘robotic’, but the meaning was clear.

  • Andrea Kuszewski

    @araraazul: No, I did not regret teaching David to think differently. In this case, his rigidity and inflexible thinking style was impeding his social functioning and his learning process in general. There is a spectrum of convergent (narrow, inflexible) and divergent (broad, flexible) thinking, and he was WAY over on the very end of the convergent side. I was happy with him moving just a little closer to center, not necessarily all the way on the other end. He was probably always going to be a ‘more inflexible than average’ person, but by opening his mind a little bit to creative thinking, it made all the difference in his learning, as well as his ability to form social relationships.

  • Andrea Kuszewski

    @Greg: Your entire argument is based on a reductionist model of AI. The AI algorithms we work on are radically different than the ones you are used to seeing, so your resistance is a typical response. Our AI systems learn in a different way, but I describe this more in Part 2.

    The purpose of psychology in AI (especially AGI) is to understand how a human learns and replicate the human process in a machine. Some things humans are just better at for good reason, one of them being creative thinking and problem solving. Replicating a human learning process is difficult to do if you don’t fundamentally understand human learning in the first place. But in all honesty, many people in the field of psychology itself don’t fully understand the human learning process or get it right, so I’m not surprised that the entire community of AI researchers doesn’t, either.

    I discuss this concept in more detail in the next part, also. If you still have additional questions after Part 2 gets published, I’ll address them at that time, but I don’t want to jump the gun and spoil all the surprise. ;)

  • Andrea Kuszewski

    @Greg: Also, regarding the baby knowing how to cry, etc.. that’s why I mentioned the genetic code that acts as the set of instructions on how to behave/react in response to stimuli. That is all included.

  • Wendy Langer

    Great article – a fascinating read!

  • http://daedalus2u.blogspot.com/ Dave Whitlock

    @araraazul: Human brains are fundamentally non-computer-like. Attempting to emulate a computer with a human brain is very difficult, does not work very well, and causes great problems in relating with other humans.

    There is an article on the major difference.

    http://www.technologyreview.com/computing/39669/

    Computers use “memory” that can store either data or programs. Brains do not. There is no “program” in a nervous system, there is only “hardware”, but that hardware can self-modify on a sub-second time scale. I think that looking at genetics as something that “programs” a brain is not a good way of looking at it.

    Genes code for processes which in response to the environment generate a phenotype.

    Humans are social animals. Humans need to relate and socialize with other humans. A very large part of “growing up” is learning how to do that. To relate to another human, you need to be able to understand their thinking the way that they understand themselves. Many people cannot do this, even when both people are neurologically typical. Constricting your thinking to only computer-like thinking modes is to be unable to communicate.

    I suggest that people read the references. I think that Monica Anderson does have a good approach, I think it doesn’t go quite far enough. I think that in addition to “robot psychology” that strong AI will need (metaphorically) “robot physiology” and also (metaphorically) “robot nitric oxide” (my own favorite signaling molecule in physiology). ;)

  • http://worldofweirdthings.com Greg Fish

    Your entire argument is based on a reductionist model of AI.

    Beg your pardon? What kind of model of AI am I supposed to use? After all, it’s code running bits and bytes unless you hooked up a living brain to computers and rely on it for pattern and object recognition. And even then there’s a spark of what’s known as “vitalism” in this statement.

    The AI algorithms we work on are radically different than the ones you are used to seeing, so your resistance is a typical response.

    Hmm… I wonder where I heard this refrain before from Silicon Valley. Oh yeah, all the time. Please do tell me what kind of algorithms you use that are so radically different from the state of the art neural networks deployed today for virtually every system meant to learn, from cognitive computer chips to advanced robotics. I’m serious. I’d like to see a paper on the subject.

    But in all honesty, many people in the field of psychology itself don’t fully understand the human learning process or get it right, so I’m not surprised that the entire community of AI researchers don’t, either.

    A computer is not a human. To expect that you can make it behave like one is simply not a realistic goal. You have to program it to have attachments, emotions, goals, motivations, and so on. On top of that, they don’t work like humans which became very clear after several decades of attempts to build computers to think more like we do. To declare that the entire AI community working on creating mathematical models of cognition for more than half a century is wrong and your company has somehow found the magic key to machine learning, so much so that it needs a behavioral therapist to treat it like a human, is suspect to say the least.

    Then again, maybe you have found something huge. I’ll hold my final judgement until part two but again, the secretive revolutionary algorithms that are apparently way over the head of the entire AI community are not a good underpinning for your argument. You’re basically saying that you can rebut all my counterpoints with whatever you have in a mystery box, but you can’t open the mystery box and let me look inside.

    … regarding the baby knowing how to cry, etc.. that’s why I mentioned the genetic code that acts as the set of instructions on how to behave/react in response to stimuli.

    Behavior is not a function of the genetic code. Brain development is governed by many complex biological factors and DNA is just one player in an entire symphony of chemical signals and environmental pressures. You do not have a “cry” gene or gene combination. You have a brain that wired itself to respond this way because of a certain common sequence of chemical signals during development.

  • Andrea Kuszewski

    @Greg: Your thinking about the subject is so linear, you aren’t allowing yourself to imagine other possibilities. New ways do exist, and they are working, right now. There is still work to be done in order to perfect it, but it works. I guess the proof will be in the product, which you are sure to happen upon eventually.

    Regarding the baby, it was an analogy, not meant to be taken literally. Again, inflexible, too-literal thinking hindered your imagination, and the message was lost in the medium.

    Sometimes in order to make progress, we need to take some creative risks—dream the unthinkable and attempt the impossible—and sometimes, it just might work out. In order to break new ground, old ‘rules’ must be broken, and sometimes paradigms need to be shifted or redefined. That doesn’t mean it’s impossible, it just means it’s new. That’s the nature of creativity. :)

  • http://worldofweirdthings.com Greg Fish

    Your thinking about the subject is so linear, you aren’t allowing yourself to imagine other possibilities. New ways do exist, and they are working, right now.

    Obfuscation and condescension an argument do not make. Certainly I don’t know everything about computing and AI, but telling me “new ways exist! new ways exist!” without telling me what those new ways are is highly suspect. As for the product, when will it be ready? Will I have to pay for it to see if it works? Is there published academic literature on which it’s based?

    Sometimes in order to make progress, we need to take some creative risks—dream the unthinkable and attempt the impossible—and sometimes, it just might work out. In order to break new ground, old ‘rules’ must be broken, and sometimes paradigms need to be shifted or redefined. That doesn’t mean it’s impossible, it just means it’s new.

    Yes, yes, I know, I know. We must spread our wings and fly, dream the dream, live the life, seize the day, the night, and bathe in the fountain of discovery while twisting paradigms into a cheese pretzel, or whatever.

    But seriously, paper? Working website? Product demo videos? Open source APIs? Wikipedia entry? Anything? If my thinking is so linear and limited, give me something new to learn.

  • Andrea Kuszewski

    @Greg: It’s so cute when you try and play Mr Skeptic on me. I’m immune to those tactics, so you might as well drop the performance and act in a respectful, non-shouty manner, or I’ll simply ignore your comments from this point on.

    And you should really check out the links I provided.

    Also, if you think I’m going to give trade secrets away because you demand them, you are out of your mind.

    With that said, some of this will be discussed in the next part (which I already mentioned).

  • http://worldofweirdthings.com Greg Fish

    It’s so cute when you try and play Mr Skeptic on me.

    Wow, and I’m the disrespectful one in this exchange? I detailed some of my objections with as much detail as seemed reasonable, and your response was to basically demean them as stuck in the past and discard the entire AI community just for good measure, then say that it’s “cute” when I ask for something more than vague PR about shifting paradigms.

    And you should really check out the links I provided.

    All right, three Wikipedia entries with vocabulary definitions and three summarizing Asimov’s wonderful body of literary work (growing up reading his novels I have nothing but admiration for the man and his writing), two books on Amazon, and one actually interesting project teaching robots how to respond to an audience using the very same neural networks I described in some detail and you decried as being “too linear and reductionist” to process the input from said audience. Which link is the one that’s supposed to at least point me in the right direction?

    Also, if you think I’m going to give trade secrets away because you demand them, you are out of your mind.

    Ah, so it’s a trade secret. So what exactly are you here to do then? You can’t tell us what you do in anything more than general terms or risk violating an NDA. You can’t tell us how you do it because again, NDA. After both parts, all we’ll know is that there’s a company in Silicon Valley that hired a therapist to analyze its AI prototypes, and that AI thinking like a human will be great for the reasons you’ll outline, and that’s it.

    Now, funny thing is that a lot of companies will publish white papers that show their general ideas on a particular topic. Symantec won’t tell you exactly how it catches viruses and spyware, but it will tell you that it targets suspicious actions such as unauthorized access to the web or modifying deep folder structures, or trying to hide itself in a system folder with an awfully similar file name to a critical system file like rundll32.exe on Windows. That’s what I was talking about, not spilling out your source code.

    Certainly it’s your right to ignore my comments, but as far as I see it, I asked some questions, pressed for detail when I felt they were sidestepped, and when I pushed one more time, I was met with derision and an author who promised to ignore me with a huff and a puff. How other readers see this is, of course, up to them…

  • Jotaf

    Interesting read, can’t wait for the next part!

    As a researcher in pattern recognition I’m intrigued by the idea of collaborating with someone who is dedicated to designing the system’s training procedures — they can be as critical to good performance as the algorithms themselves :)

  • James Naranjo

    On a separate issue, the fact that all children can accurately say, “My brain is not like a computer,” and given that, “There is a difference between memorizing question-answer sets and really learning how to solve a novel problem,” explains why 30 years of governmental effort to reform and improve our schools have resulted in the steady decline of American education. The dependence upon Japanese-style standardized-test-driven competition has eliminated creative problem solving and replaced it with memorizing question-answer sets. Ironically, the Japanese were well aware of this weakness in their system of education as early as the 1970s. At that time they tried to make their schools more like ours. At the same time, we began our crusade to adopt the worst of their system. We have succeeded, making our children’s educational experience more and more tailored to programming computers and less and less effective in teaching human children to find novel solutions to novel problems.

    I should note that I am a retired school principal.

  • John

    I think the psychologist is properly needed to temper the ego of what is created in the robot. So many times I’ve seen just how the message of a creation is diverted just for the sake of the personal in a project. This psychology on the other hand helps keep the message to the point, and allows for the dynamics to be more fully explored, and grown to. This is how people can better understand the subject is alive, as long as one stays relevant to its topics. To overemphasize its details makes the computer the more relevant – whereas one should note the human was the initial detailer, and should decide for her or himself where that stops, or goes on…

  • Wesley

    “and that quickly became one his favorite memes”, That should be one OF his favourite memes. Oh wait.. did I just check grammer and spelling and miss the point of the article? I guess we all have a little computer in us… or maybe it’s just me. :P

    Ed: Fixed
