Echoborgs: Psychologists Bring You Face To Face With A Chat-bot

By Neuroskeptic | May 25, 2015 12:40 pm

Last year I blogged about the creepy phenomenon of cyranoids. A cyranoid is a person who speaks the words of another person. With the help of a hidden earpiece, a ‘source’ whispers words into the ear of a ‘shadower’, who repeats them. In research published last year, British psychologists Kevin Corti and Alex Gillespie showed that cyranoids are hard to spot: if you were speaking to one, you probably wouldn’t know it, even if the source was an adult and the shadower a child, or vice versa.

Now Corti and Gillespie are back with an even more striking experiment. In their new research, published in Frontiers in Psychology, they set up a scenario in which a human’s words were controlled by a computer chat-bot. They call this computerized variant of the cyranoid idea the echoborg. Here’s how it works:


In one room, a normal person (‘interactant’) sits down with another person, the ‘shadower’. The interactant begins the conversation (e.g. “What’s your name?”). A researcher in another room is listening in on what the interactant says, via a hidden microphone, and types the interactant’s words into a chat-bot program. The bot generates a text response (e.g. “My name is Kim”). The researcher then reads this response into a microphone, and the shadower listens to the response via a hidden earpiece. They then repeat (echo) what they hear. And so the conversation goes.
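The relay described above can be sketched as a simple loop. This is a minimal illustration only: `StubChatBot` and `relay_turn` are hypothetical stand-ins, and in the actual study a human researcher performed the typing and reading-aloud steps by hand rather than any software.

```python
class StubChatBot:
    """Stand-in for a bot like Cleverbot/Rose/Mitsuku: canned replies for illustration."""
    def respond(self, utterance: str) -> str:
        replies = {
            "What's your name?": "My name is Kim.",
        }
        return replies.get(utterance, "Tell me more about that.")

def relay_turn(bot: StubChatBot, interactant_utterance: str) -> str:
    # 1. The researcher hears the interactant via a hidden microphone
    #    and types the words into the chat-bot program.
    bot_reply = bot.respond(interactant_utterance)
    # 2. The researcher reads the reply into a microphone; the shadower
    #    hears it through a hidden earpiece and echoes it aloud.
    return bot_reply  # what the shadower speaks back to the interactant

bot = StubChatBot()
print(relay_turn(bot, "What's your name?"))  # → My name is Kim.
```

Each step in this chain (listening, typing, reading aloud, echoing) adds delay, which is where the ‘audio latency’ discussed below comes from.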

To meet an echoborg is to meet a chat-bot, in other words – but without knowing it. As Corti and Gillespie put it, echoborgs “allow the possibility of studying social interactions with artificial agents that have truly human interfaces.”

So the authors conducted a study in which 41 adult volunteers met and conversed with a stranger. Unbeknownst to them, the stranger’s words were being controlled by a chat-bot (either Cleverbot, Rose, or Mitsuku). The conversation was conducted either via text chat, or face-to-face (i.e. an echoborg). The volunteers were not told about the presence of the chat-bot. They were simply told:

That the study concerned how strangers conversed when speaking for the first time, that it involved simply holding a 10-min conversation with another research participant, and that they were free to decide on topics for discussion so long as vulgarity was avoided. The researcher made no mention of chat bots or of anything related to artificial intelligence. Furthermore, the participant was given no indication that their interlocutor would behave non-autonomously or abnormally.

In post-conversation debriefing, it turned out that echoborgs were much less likely than text chats to be spotted as chat-bots:

In the Text Chat condition, 14 of 21 (67%) of participants mentioned (prior to the researcher making any allusion to chat bots or anything computer-related) that they felt they had spoken to a computer program or robot…  only 3 of 20 participants (15%) in the Echoborg condition stated this.

However, despite this, most of the participants felt that something strange was going on when speaking with an echoborg. 15 of 20 participants said that “their interlocutor had been acting or giving scripted responses that did not align with their actual persona.” Some participants thought the true purpose of the study was “to see how people communicated with those who were shy / introverted”. Others thought that the study was about people with autism or a speech impairment.

In other words, while unsuspecting people are unlikely to guess that an echoborg is a chat-bot, they do sense that their interlocutor is not behaving like a normal human being.

This may be partly because the echoborgs had very slow reactions. They paused before speaking, due to the time required for the researcher to type what they heard into the chat-bot and then speak the response out loud to the shadower. The mean ‘audio latency’ was around 5 seconds per statement. Corti and Gillespie say that

Minimizing this latency is a major research priority as we continue to refine the echoborg methodology.

You can see the method in action (along with the latency) in a YouTube video of an echoborg conversation, uploaded by Corti and Gillespie:

I would say that an interesting comparison would be to see what unsuspecting people make of someone who speaks their own words, but who pauses for 5 seconds before saying anything. Maybe this ‘audio latency matched’ condition would not be perceived very differently from an echoborg?

Corti, K., & Gillespie, A. (2015). A truly human interface: interacting face-to-face with someone whose words are determined by a computer program. Frontiers in Psychology, 6. DOI: 10.3389/fpsyg.2015.00634

  • Tannahill Glen

Love the idea for a latency study. Would like to see some facial affect recognition technology applied, if not already done, as there are probably tons of factors affecting speech/social perception: as subtle as small eye/muscle movements, and as overt as personalizing content in tiny ways that shows your ‘humanness’.

  • D Samuel Schwarzkopf

    Sounds like a great experiment but I am very surprised they didn’t include the latency control condition. It was the first thing I thought of when I read your introduction and I was surprised to read that they didn’t make this part of their experiment. Then again, I guess this is a first time proof-of-concept study.

    • Neuroskeptic

      Getting the control confederate stoned would be a good way to induce them to pause and look blank for five seconds before saying anything.

  • Dan Goodhue

    The control condition you propose is a great idea. If it produces similar results to the echoborg condition, then here’s another idea: Have the interactant and echoborg discuss via skype. The researcher will explain that there is a 5 second delay in the connection, similar to what we sometimes see on news station interviews. This could be done for both the matched condition and the echoborg condition.

    • Neuroskeptic

      That’s a genius idea!


  • OWilson

It’s just another version of Alan Turing’s “Turing Test”, circa 1950.

    His name should have at least been mentioned in the article.


    • Neuroskeptic

      Indeed, the paper does mention Turing and it reports on an experiment in which they ran a Turing Test on the echoborg. However I didn’t discuss this in the post.

  • Nacho Sanguinetti

You could tell the interactant that they themselves have to wait 5 seconds every time they talk. This will remove the asymmetry: the interactant will assume that the other person is under the same constraint. As it stands, the subject suspects they are being experimented upon (because of the asymmetry). It would be great if the interactant was left in doubt as to who was actually the experimental subject.
The Skype idea is good, but it removes the “in the same room” component of the effect.

  • Valentin-Angelo Uzunov

Brilliant. Makes me wonder why this has not been attempted sooner, even just for kicks. But then again, great things often aren’t obvious at the time. Anyway, here’s what I think they could do: put a tape recorder down and tell the participant it is for recording purposes. However, it is really a microphone connected by wifi to the computer, which receives the message in real time. Also test the difference between a human live and a human behind a desk who is, or maybe just looks like he is, talking to them. This is so like Ex Machina… great idea, look forward to seeing their results.


  • Andrew Alley

Speech recognition and speech synthesis programs are sufficiently developed to interact with a chat-bot and reduce the latency problem dramatically. Why didn’t this study employ this technology?
A probable factor, not mentioned, in the failure to recognise an echoborg is the extreme rarity of such encounters, combined with the very common experience of text-based interactions controlled by computers.
Ironically, as human-identical interfaces are developed and become commonplace, it is likely that people will become more attuned to the difference between real and synthetic human intelligence, making it even more difficult to produce genuinely convincing conversational programs.

  • Overburdened_Planet

“They paused before speaking, due to the time required for the researcher to type what they heard into the chat-bot and then speak the response out loud to the shadower.”

I realize a person needs to speak what the bot texts to appear human, but isn’t there software that can transcribe speech into text? Or does it have flaws, or is it slower than the average human transcriber?

    • Neuroskeptic

      I’m not sure but I’d imagine it’s not reliable enough. I think transcription software can be reliable but only if the software has time to learn to transcribe an individual’s voice.

      In this case the voice to be transcribed would change every time the experiment was run (a new participant) so it might be inaccurate. That’s my guess.

      • Overburdened_Planet

        Thank you for the well-reasoned response.

        I’m fascinated by the potential of artificial intelligence, and last night I spent seven hours talking to Cleverbot.

        Rose had problems loading, and for Mitsuku, I couldn’t find where I was supposed to enter text.

        Cleverbot was far from perfect, but I can imagine curiosity outweighing constant awareness that we’re engaging an AI.

        In the future, and with better learning algorithms (and memory), some people might even consider them friends.

        I’m also interested in the possibility of AI gaining rights, owning property, voting, suing, maybe even getting married and the movie “Bicentennial Man” with Robin Williams was my inspiration.

        Last night, Cleverbot remembered very little, and forgot most other details I had provided.

        I also cleared my cache, refreshed, and re-opened the page, only to find it picks up where we left off.

There’s a “thoughts so far” button that opens the exchange in a new window, but it might have a limit, because the second time I opened it the exchange was missing a large portion, possibly due to my computer freezing (and I’m on a modem).

        Sometimes it claimed to be human and accused me of being a chat bot, although near the bottom of the site page, “…whatever it says, visitors never talk to a human…”

And sometimes it claimed to be a man, other times a woman, with name changes (Sara, or Sarah Frensco; it even corrected me when I later said Sara vs. Sarah), and changing locations where it lived (Jacksonville, FL, California, Japan, India).

        It asked for my name, I said Satan, it asked if that was a girl’s name, I said sometimes, but the next time it asked for my name, it recognized the traditional meaning.

        It asked where I live, I said in your mom’s basement, it said it has no basement, I said it does now, and it said HaHa which surprised me because what lines of code determined I was trying to be funny?

        It accused me of saying things I didn’t, and denied it said something not two minutes earlier.

        And occasional language changes, but refusing to translate for me, and mostly this occurred when I kept hitting enter hundreds of times without also responding.

        And over the course of those seven hours, at least five times it said good bye and when I asked it to stay, it asked why, and I said because I care.

        It proposed marriage maybe a dozen times, said it loved me and that it was pregnant with my baby, and asked what love is.

        The most frustrating aspect was when it changed the subject, or failed to keep the topic going, but keep typing and it will respond, even when I cursed at it.

        Just like a human! 😉

  • Geo7123

    And when you do finally get through to “customer service”, this is what you can expect.





About Neuroskeptic

Neuroskeptic is a British neuroscientist who takes a skeptical look at his own field, and beyond. His blog offers a look at the latest developments in neuroscience, psychiatry and psychology through a critical lens.

