In the 2012 BotPrize competition, the true winner may be the one who makes the most mistakes. In this match, video game avatars directed by artificial intelligence compete to see which comes across as most human in a fight against real human players. This year, for the first time, human participants mistook the two winning bots for humans more than half the time, a feat researchers attribute to the fact that these bots were programmed to be less-than-perfect players.
Where is the cup? THERE IS NO CUP.
What’s the News: Ever since Alan Turing, the father of modern computing, proposed that a sufficiently advanced computer could pass as human in conversation, the classic Turing test has involved what’s essentially instant messaging. Computers designed to imitate human conversational patterns are entered by their designers in competitions where they aim to fool a person at a distant monitor into thinking they’re human—and they do a pretty good job, although some of these chatbots sound like crazed children on their first spin in cyberspace (“I’m not a robot, I’m a unicorn!”).
But scientists have noticed that humans describe where objects are in space in a specific way, taking into account which spatial relationships would be most useful to a human listener. Artificial intelligences, even fairly sophisticated ones, talk about space differently, and the difference is large enough that it can form the basis of a new type of Turing test, British scientists reported at a conference in April. Now, New Scientist has developed an interactive version of the test, which lets you see for yourself which statements about space set off your silicon-lifeform alarms. So what’s behind it?