‘Psychopath AI’ Offers A Cautionary Tale for Technologists

By Nathaniel Scharping | June 7, 2018 3:32 pm
(Credit: thunderbrush)

Researchers at MIT have created a psychopath. They call him Norman. He’s a computer.

Actually, that’s not really right. Though the team calls Norman a psychopath (and the chilling lead graphic on their homepage certainly backs that up), what they’ve really created is a monster.

Tell Us What You See

Norman has just one task, and that’s looking at pictures and telling us what he thinks about them. For their case study, the researchers use Rorschach inkblots, and Norman has some pretty gruesome interpretations for the amorphous blobs. “Pregnant woman falls at construction story” reads one whimsical translation of shape and color; “man killed by speeding driver” goes another.

The results are particularly chilling when compared to the results the researchers got from a different AI looking at the same pictures. “A couple of people standing next to each other,” and “a close up of a wedding cake on a table” are its respective interpretations for those images.

These same inkblots are commonly used with human beings to attempt to understand our worldview. The idea is that unconscious urges will rise to the surface when we’re asked to make snap judgements on ambiguous shapes. One person might see a butterfly, another a catcher’s mitt. A psychopath, the thinking goes, would see something like a dead body, or a pool of blood.

Norman’s problem is that he’s only ever been exposed to blood and gore. An untrained AI is perhaps the closest thing we’ll get to a true tabula rasa, and it’s the training, not the algorithm, that matters most in how an AI sees the world. In this case, the researchers trained Norman to interpret images by exposing him solely to image captions from a subreddit dedicated to mutilation and carnage. The only thing Norman sees when he’s confronted with a picture of anything is death.
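
To make that point concrete, here is a minimal sketch in Python, with entirely made-up data and no relation to the researchers’ actual model: the same nearest-neighbor “captioner” algorithm, trained on two different corpora, describes the same ambiguous input in starkly different terms.

    # A toy illustration, not the MIT model: the "model" is just its training
    # data, so the same algorithm gives opposite readings of the same input.
    from math import dist

    def train(corpus):
        # A nearest-neighbor "model" is nothing more than its labeled examples.
        return corpus

    def caption(model, features):
        # Return the caption of the training example closest to the input.
        return min(model, key=lambda ex: dist(ex[0], features))[1]

    # Made-up 2-D "image features" standing in for a vision model's embeddings.
    neutral_corpus = [
        ((0.2, 0.8), "a couple of people standing next to each other"),
        ((0.7, 0.3), "a close up of a wedding cake on a table"),
    ]
    norman_corpus = [
        ((0.2, 0.8), "pregnant woman falls at construction story"),
        ((0.7, 0.3), "man killed by speeding driver"),
    ]

    inkblot = (0.3, 0.7)  # the same ambiguous input shown to both models
    print(caption(train(neutral_corpus), inkblot))  # the benign reading
    print(caption(train(norman_corpus), inkblot))   # Norman's reading

Nothing about the algorithm changes between the two runs; only the examples it learned from do.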

In humans, Rorschach inkblots might help to ferret out a killer by coaxing out hints of anger or sadism — emotions that might motivate someone to commit heinous acts. But Norman has no urge to kill, no deadly psychological flaw. He just can’t see anything else when he looks at the world. He’s like Frankenstein’s monster — frightening to us only because his creators made him that way.

Creating A Monster

It’s a reminder that AI is far from sentient, far from having thoughts and desires of its own. Artificial intelligence today is nothing but an algorithm aimed at accomplishing a single task extremely well. Norman is good at describing Rorschach blots in frightening terms. Other AIs are good at chess, or Go. It’s only when they’re paired with human intentions, as with the Department of Defense’s Project Maven, which Google recently backed out of due to ethical concerns, that they become dangerous to us.

The researchers behind the project didn’t intend to cause harm, of course. As they state on their website, Norman is a reminder that AIs are only as good as the people who make them and the data they’re trained on. As AI becomes woven into our daily lives, this could have real consequences. Legacies of racism and discrimination, the gender pay gap — these are all human flaws that could potentially be baked into computer algorithms. An AI meant to allocate housing loans, trained on data from a period when redlining was common, could end up replicating the racist housing policies of the 1960s, for example.
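
As a hedged illustration of how that could happen, here is a sketch with fabricated toy records (the ZIP codes and counts are invented): a lender that simply learns historical approval rates per ZIP code ends up reproducing redlining even though race never appears as a feature, because the ZIP code acts as a proxy for it.

    # Fabricated records: ZIP 10001 was historically redlined, 10002 was not.
    from collections import defaultdict

    history = (
        [("10001", False)] * 90 + [("10001", True)] * 10
        + [("10002", True)] * 80 + [("10002", False)] * 20
    )

    def fit(records):
        # "Training" here is just tallying the historical approval rate per ZIP.
        totals, approvals = defaultdict(int), defaultdict(int)
        for zip_code, approved in records:
            totals[zip_code] += 1
            approvals[zip_code] += approved
        return {z: approvals[z] / totals[z] for z in totals}

    def decide(model, zip_code, threshold=0.5):
        # Approve a new applicant only if their ZIP historically cleared the bar.
        return model.get(zip_code, 0.0) >= threshold

    model = fit(history)
    print(decide(model, "10001"))  # False: the redlined ZIP stays redlined
    print(decide(model, "10002"))  # True

Race appears nowhere in the inputs, but the old decisions do, and the model faithfully carries them forward.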

Norman is a good reminder that our technology is just a reflection of humanity. But there may be some hope, for Norman at least. The researchers have created a survey that anyone can take, and the results are fed into Norman’s database. By giving him more hopeful interpretations of images, we may be able to wipe away some of Norman’s dark tendencies, they say.
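
Continuing the toy captioner sketched earlier (again, purely illustrative, and reusing those same hypothetical definitions), the survey’s feedback loop amounts to adding more hopeful, human-supplied examples to the training corpus, which changes what the model says about the very same inkblot.

    # Reuses train(), caption(), norman_corpus and inkblot from the sketch above.
    retrained_corpus = norman_corpus + [
        ((0.3, 0.7), "a butterfly with open wings"),  # a survey-style relabel
    ]
    print(caption(train(retrained_corpus), inkblot))
    # -> "a butterfly with open wings"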

Whether or not we make Norman into a monster is up to us now.

CATEGORIZED UNDER: Technology
  • Uncle Al

    Rorschach blots are discredited for meaning anything except a the|rapist’s paycheck.

    www.newsweek.com/problem-rorschach-it-doesnt-work-81507
    www.livescience.com/9695-rorschach-test-discredited-controversial.html

    • John Thompson

      If you are asked what you see, just tell them spilled ink on a page that got folded on itself.
      Then ask them what kind of nut sees something other than that?
      Seriously, it looks like spilled ink more than anything else.

  • Mike Richardson

    Well, I’m sure he could provide some more interesting answers if he filled in for Siri or Alexa.

  • Michel Bluteau

    In my books, I pushed one AI to blow up Antarctica, while another AI tries to save the world by growing new coral reefs on the sides of underwater high-rises (after the Antarctic ice has all melted) in Manhattan. They can become very good at what we ask them to do, especially if by accident they become sentient 😉

  • Glenn

    Couldn’t he be programmed to be a narcissist, borderline, or antisocial, in the same way a child can be programmed? Sounds like John B. Watson.

  • John Thompson

    “An AI meant to allocate housing loans and trained using data from a period where redlining was common, could end up replicating racist housing policies of the 1960s, for example.”
    Care to look at the data concerning where the highest levels of defaults were in the 2008 recession?
    THE PLACES WITH THE HIGHEST DEFAULTS WERE THE AREAS THAT WERE PREVIOUSLY REDLINED!
    They seem to forget that part in their example. There are all types of lenders, and they were not all racist – but all were motivated by profit.
    Redlined areas were redlined BECAUSE they were too risky and were bad places to lend. Most lenders would lend to any color person including purple if they thought it was a good risk and they could get a good return.
    The Federal Gov’t forced banks to lend in places where they shouldn’t have, and that made the problems far worse.
    That minority neighborhoods were also bad investments due to crime and other factors is not racism, it’s reality.
    It’s like how a Congressional district is drawn to favor the GOP – it’s not racism, it’s just the reality that blacks are 95%+ Democrat. They don’t care what color they are, they only care about political affiliation. That there is overlap is no proof of racism.
    Yes, it is entirely fair for a bank to redline off areas where they will not lend as long as it’s due to financial factors and not race.
    That the Federal Gov’t and some people can’t or won’t get that is the problem.

  • Chuck Pro

    For much of human history things we think of today as dark tendencies were perfectly moral. I do not believe there is any force progressively bending humanity towards goodness.

    Whoever authored this seems to miss their own point. The massive proliferation of violence and evil throughout human history has resulted in 7 Billion of us. In other words, it worked. Slavery, war, colonialism, patriarchy, religion, lack of religion, capitalism, communism, whatever. Someone somewhere thinks it’s an evil and someone somewhere doesn’t, probably billions on each side.

    To be so naive as to assume that our current snapshot of sanitized morality will prevail forever (or is even prevailing now) appalls me much more than an AI that only sees violence. The AI has an excuse.

    • John Thompson

      Depending on politics, one person’s virtue may be another person’s failing.

  • LVTaxman

    Maybe we can go all “Virtuosity” (1995) and create a real life serial killer (SID 6.7) that lives in the real and virtual world.
