I, Robopsychologist, Part 1: Why Robots Need Psychologists

By Andrea Kuszewski | February 7, 2012 1:38 pm

Andrea Kuszewski is a behavior therapist and consultant, science writer, and robopsychologist at Syntience in San Francisco. She is interested in creativity, intelligence, and learning, in both humans and machines. Find her on Twitter at @AndreaKuszewski.

“My brain is not like a computer.”

The day those words were spoken to me marked a significant milestone for both me and the 6-year-old who uttered them. The words themselves may not seem that profound (and some may actually disagree), but that simple sentence represented months of therapy, hours upon hours of teaching, all for the hope that someday, a phrase like that would be spoken at precisely the right time. When he said that to me, he was showing me that the light had been turned on, the fire ignited. And he was letting me know that he realized this fact himself. Why was this a big deal?

I began my career as a behavior therapist, treating children on the autism spectrum. My specialty was Asperger syndrome, or high-functioning autism. This 6-year-old boy, whom I’ll call David, was a client of mine that I’d been treating for about a year at that time. His mom had read a book that had recently come out, The Curious Incident of the Dog in the Night-Time, and told me how much David resembled the main character in the book (who had autism), in regards to his thinking and processing style. The main character said, “My brain is like a computer.”

David heard his mom telling me this, and that quickly became one of his favorite memes. He would say things like “I need input” or “Answer not in the database” or simply “You have reached an error,” when he didn’t know the answer to a question. He truly did think like a computer at that point in time—he memorized questions, formulas, and the subsequent list of acceptable responses. He had developed some extensive social algorithms for human interactions, and when they failed, he went into a complete emotional meltdown.

My job was to change this. To make him less like a computer, to break him out of that rigid mindset. He operated purely on an input-output framework, and if a situation presented itself that wasn’t in the database of his brain, it was rejected, returning a 404 error.
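That rigid input-output mode of thinking can be caricatured in a few lines of code: a pure lookup table that either matches a memorized question exactly or fails outright. (This is only an illustrative sketch of the metaphor, of course, not a model of anything David's brain literally does; the questions and error string are invented.)

```python
# A caricature of pure input-output thinking: a memorized
# question -> answer table with no ability to generalize.
MEMORIZED = {
    "What is 2 + 2?": "4",
    "What color is the sky?": "Blue",
}

def rigid_answer(question: str) -> str:
    # Exact match only: even a novel phrasing of a known question fails.
    if question in MEMORIZED:
        return MEMORIZED[question]
    return "404: Answer not in the database"
```

Rephrase a known question even slightly and the lookup fails, which is exactly the brittleness the therapy was meant to break.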

In the course of therapy, David opened himself up to new ways of looking at problems and deriving solutions. When I asked him a question he had never been exposed to, it forced him to think about the content of my question itself, not just search the database of his brain for the correct response he had memorized. When he failed to come up with an appropriate answer to a problem (which happened quite a bit initially), we discussed the reasons why it was wrong. Then, after a series of inappropriate attempts at a solution, he started to see the patterns that made up “possible correct responses” and their varying degrees of correctness, as well as the “always incorrect” answers and the ones that were “sometimes correct, sometimes incorrect, depending on context.”

He was no longer operating on a pure input-output or match-to-sample framework; he was learning how to think. So the day he gave me a completely novel, creative, and very appropriate response to one of my questions, followed by those simple words, “My brain is not like a computer,” it was pure joy. He was learning how to think creatively. Not only that, but he knew the answer was an appropriate, creative response, and that right there—the self-awareness of his mental shift from purely logical to creative—was a very big deal.

My experience teaching children with autism to think more creatively led me to reverse-engineer the learning process itself, recognizing all the components necessary for both creativity and increased cognitive ability. There is a difference between memorizing question-answer sets and really learning how to solve a novel problem. There are times when memorization or a linear approach to problem-solving is appropriate, but there are other times when creative problem-solving and problem-finding are needed. And in order to teach a linear circuit how to think creatively, you absolutely must have a good understanding of the learning process itself, as well as know how to reach those ends successfully.

The time I spent making humans “less like robots” made me start thinking about how this learning paradigm could be applied to actual robots and thinking machines. In order to create artificial intelligence (AI) that can actually think like a human, you need to teach it to learn like humans do. That brought me to my current interest—and job—in robopsychology, or AI psychology.

 

Robopsychology: Bridging Humanity and Technology

Robopsychology, loosely defined as the study of the personality and behavior of intelligent machines, was first popularized by sci-fi writer Isaac Asimov in his Robot series of short stories. Susan Calvin—the very first robopsychologist—was a character who appeared in several of Asimov’s works. Dr. Calvin was the expert on human nature, the nature of machines, and their ultimate intersection: artificial intelligence.

Robots and human-like machines are gaining popularity in many diverse fields, for a wide variety of uses. The more they resemble actual human thinking and behavior, the more useful they can be. They are being used for teaching, companionship, therapy, and even entertainment. For example, Heather Knight, a social roboticist, is teaching her robot Data how to do stand-up comedy, and the two routinely perform in public. In order for a robot to be successful at this very human-like task, understanding human behavior (and humor) is extremely helpful.

What Does a Robopsychologist Actually Do?

Similar to the way we have a variety of psychology professionals who deal with the spectrum of human behavior, there is a range of specialties and duties for robopsychologists as well. Unfortunately, not every robopsychologist is a modern-day Susan Calvin (although it does sound pretty sweet, and could make for fun conversation at parties). In reality, depending on the type of machines being developed and for what purpose, the duties and skills of robopsychologists could vary quite a bit, just as they do for practitioners in human psychology.

Some examples of the potential responsibilities of a robopsychologist:

  • Assisting in the design of cognitive architectures
  • Developing appropriate lesson plans for teaching the AI targeted skills
  • Creating guides to help the AI through the learning process
  • Addressing any maladaptive machine behaviors
  • Researching the nature of ethics and how it can be taught and/or reinforced
  • Creating new and innovative therapy approaches for the domain of computer-based intelligences

In the work I do, there is a constant back-and-forth between robopsychology and human psychology—a mutually beneficial relationship of teaching and learning from each other. For example, at Syntience, the AI research lab that I’ve recently joined, we are working on developing AI that can understand the semantics of natural language. In order to be able to do this, it must first be taught—much in the same way humans are taught.

A baby is born without a database of facts. It is in some ways a blank slate, but it also (don’t worry, Steven Pinker fans) has a genetic code that acts as a set of instructions on how to learn when exposed to new things. In the same way, our AI is born completely empty of knowledge, a blank slate. We give it an algorithm for learning, then expose it to the material it needs to learn (in this case, books to read) and track its progress. If children are left to learn without any assistance or monitoring of progress, over time they can run into problems that need correcting. Because our AI learns in the same fashion, it can run into the same kinds of problems. When we notice that learning slows or the AI starts making errors, the robopsychologist steps in, evaluates the situation, determines where the learning process broke down, and then makes the necessary changes to the AI lesson plan in order to get learning back on track.
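The monitor-and-intervene cycle described above can be sketched as a simple loop: feed lessons to a learner, watch a progress metric, and revise the remaining lesson plan whenever progress plateaus. (Syntience's actual system is not public; the learner, the progress metric, and `revise_lesson_plan` here are all invented placeholders.)

```python
def revise_lesson_plan(remaining, history):
    """Toy 'robopsychologist' intervention: reorder the remaining
    lessons, shortest (presumably easiest) first."""
    return sorted(remaining, key=len)

def train_with_monitoring(learn, lessons, stall_threshold=0.01):
    """Feed lessons to a learner, intervening when progress stalls.

    `learn(lesson)` is assumed to return a cumulative progress score.
    """
    remaining = list(lessons)
    history = []
    while remaining:
        lesson = remaining.pop(0)
        history.append(learn(lesson))  # record progress after each lesson
        # If the last lesson produced almost no gain, learning has
        # plateaued: revise the plan before continuing.
        if len(history) >= 2 and history[-1] - history[-2] < stall_threshold:
            remaining = revise_lesson_plan(remaining, history)
    return history
```

The interesting design point is that the intervention operates on the lesson plan, not on the learner's internals, which mirrors how a human teacher adjusts curriculum rather than rewiring the student.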

Likewise, we can also use the AI to develop and test various teaching models for human learning tasks. Let’s say we wanted to test a series of different human teaching paradigms for learning a foreign language. We could create a different learning algorithm based on each teaching model, program one into each AI, then test for efficiency, speed, retention, generalization, etc.
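An experiment like that could be run as a straightforward comparison harness: train one learner per teaching paradigm on the same material, then score each on the metrics mentioned above. (A minimal sketch under invented assumptions; the paradigm names, the trainer interface, and the retention/generalization split are placeholders, not anything Syntience has published.)

```python
import time

def accuracy(predict, pairs):
    """Fraction of (question, answer) pairs the predictor gets right."""
    if not pairs:
        return None
    return sum(predict(q) == a for q, a in pairs) / len(pairs)

def compare_teaching_models(models, material, test_set):
    """Train each teaching paradigm on the same material and tabulate results.

    `models` maps a paradigm name to a train(material) function that
    returns a predictor; `material` maps seen questions to answers.
    """
    results = {}
    for name, train in models.items():
        start = time.perf_counter()
        predict = train(material)
        results[name] = {
            "speed_s": time.perf_counter() - start,
            # Retention: accuracy on items drawn from the training material.
            "retention": accuracy(
                predict, [p for p in test_set if p[0] in material]),
            # Generalization: accuracy on items never seen in training.
            "generalization": accuracy(
                predict, [p for p in test_set if p[0] not in material]),
        }
    return results
```

A pure-memorization paradigm, for instance, would score perfectly on retention and near zero on generalization, which is precisely the contrast the article draws between memorizing answer sets and real learning.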

In I, Robot, the blockbuster movie adapted from Asimov’s robot stories, Calvin sums up her job description by saying, “I make robots seem more human.” If I had to sum up my main goal as a robopsychologist, it would be “to make machines think and learn like humans,” and ultimately, replicate creative cognition in AI. Lofty goal? Perhaps. Possible? I believe it is. I’ll be honest, though—I haven’t always thought this way.

The main reason for my past disbelief was that most of the people working on AI discounted the input of psychology. They erroneously thought they could replicate humanity in a machine without actually understanding human psychology. Seems like a no-brainer to me: If you want to replicate human-like thinking, collaborate with someone who understands human thinking on a fundamental and psychological level, and knows how to create a lesson plan to teach it. But things are changing. The field of AI is finally, slowly starting to appreciate the important role psychology needs to play in its research.

Robopsychology may have started out as a fantasy career in the pages of a sci-fi novel, but it illustrated a very smart and useful purpose. In the rapidly advancing and expanding field of artificial intelligence, the most forward-thinking research labs are beginning to recognize the important—some even say critical—role psychology plays in the quest to engineer human-like machines.

 

Andrea Kuszewski’s exploration of robopsychology will soon continue in “I, Robopsychologist, Part 2: Why We Want Robots That Think Like Humans.”

