In the 2012 Bot Prize competition, the true winner may be the one who makes the most mistakes. In this match, video game avatars directed by artificial intelligence compete to see which comes across as most human in a fight against real human players. This year, for the first time, human participants mistook the two bots for humans more than half the time, a feat researchers attribute to the fact that these bots were programmed to be less-than-perfect players.
A robot has learned a handful of simple words in the same general way that infants do: by listening to the speech, and feedback, of human adults.
Human teachers—who ran the gamut in terms of age, occupation, and experience with kids—worked with a humanoid, toddler-sized robot, describing the colors and shapes on a toy block, as seen in the video above and described in a new study in PLoS ONE. The robot babbled back, learning which combinations of sounds are correct based both on what it had heard and on how the human responded, much like babies do when learning to speak. Giving the robot a childlike form, the researchers suggest, let people interact with it more like they would an actual baby, helping it better model language learning than having people talk to a screen or a box.
It’s pretty cool that the robots could pick up words from human-like interactions. But it’s important to keep in mind that we can only build robots to imitate what it looks like when babies learn, because we don’t know exactly what’s going on in babies’ brains when they learn language—and we certainly don’t understand it well enough to build a program that would work just the same way.
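Still, the feedback loop the researchers describe (babble a candidate word, listen for the teacher's reaction, reinforce accordingly) is simple enough to sketch. Everything below, including the utterance list, the weights, and the doubling rule, is a hypothetical toy for illustration, not the study's actual algorithm:

```python
import random

class BabblingLearner:
    """Toy learner: reinforce sound combinations that earn approval."""
    def __init__(self, utterances):
        # Every candidate utterance starts with the same weight.
        self.weights = {u: 1.0 for u in utterances}

    def babble(self, rng):
        # Sample an utterance proportionally to its learned weight.
        total = sum(self.weights.values())
        r = rng.uniform(0, total)
        for utterance, w in self.weights.items():
            r -= w
            if r <= 0:
                return utterance
        return utterance  # guard against float round-off

    def hear_feedback(self, utterance, approved):
        # Strengthen approved utterances, weaken rejected ones.
        self.weights[utterance] *= 2.0 if approved else 0.5

# The "teacher" approves only the correct color word for a red block.
rng = random.Random(0)
learner = BabblingLearner(["red", "blue", "square", "bababa"])
for _ in range(200):
    utterance = learner.babble(rng)
    learner.hear_feedback(utterance, approved=(utterance == "red"))

learned_word = max(learner.weights, key=learner.weights.get)
```

After a couple hundred exchanges the approved word dominates the sampling, which is the broad shape of the babble-and-reinforce dynamic, if nothing like its neural detail.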
[via Wired Science]
Where is the cup? THERE IS NO CUP.
What’s the News: Ever since Alan Turing, the father of modern computers, proposed that sufficiently advanced computers could pass as human in a conversation, the classic Turing test has involved what’s essentially instant messaging. Computers designed to imitate human conversational patterns are often entered by their designers in competitions where they aim to fool people in front of a distant monitor into thinking they’re human—and they do a pretty good job, although some human mimics, like chatbots, sound like crazed children on their first spin in cyberspace (“I’m not a robot, I’m a unicorn!”).
But scientists have noticed that humans describe where objects are in space in a specific way, taking into account what spatial relationships would be most useful for a human listener. Artificial intelligences, even fairly sophisticated ones, talk about space differently, and the difference is large enough that it can form the basis of a new type of Turing test, British scientists reported at a conference in April. Now, New Scientist has developed an interactive version of the test, which lets you see for yourself what statements about space set off your silicon-lifeform alarms. So what’s behind it?
NOTE: Before tonight’s big match begins, check out our feature, “Who’s Smarter, a Human or a Computer? Round 9: Jeopardy,” on the other human games that AI programmers have tried to perfect—and the ones where humans maintain the advantage.
I can already hear the Jeopardy theme music (which isn’t my ringtone, I swear!). Tonight, one of the highest-profile man-versus-machine contests in years begins, as Jeopardy will air the matches pitting former flesh-and-blood champions Ken Jennings and Brad Rutter against Watson, IBM’s question-answering supercomputer.
Since traveling to IBM’s research center for the practice/demonstration match (which Watson led when play stopped after 15 clues), we here at DISCOVER have been simultaneously excited for the match and anxious about the prospects for our species’ chosen representatives to come out on top. Jennings apparently feels the same way:
Live, from IBM’s Thomas J. Watson Research Center: This is Jeopardy!
Today, IBM rolled out its Jeopardy-playing computer, a whiz machine named Watson that was four years in the works. In today’s demonstration match for the media, Watson played against Brad Rutter and Ken Jennings, the two great (human) Jeopardy champions who will provide opposition for Watson in a two-day exhibition match. That man-versus-machine faceoff will air in February, and carries a prize of a million dollars. Bad news, humans: In today’s exhibition of about 15 questions, Watson tallied $4,400, compared to $3,400 for Jennings and $1,200 for Rutter.
On stage, Watson was represented by a screen displaying its avatar (pictured) behind a podium, where a human player’s torso would be. Its avatar is an animated graphic of the Earth with aurora-like lines swirling around it. When Watson was confident in its answer, those swirls glowed green; when it wasn’t, they turned orange. The questions were fed in plain text to Watson, but it had to wait the same amount of time to ring in as the human players did. To make the game fair, it also had to trigger a mechanical signaling button. Watson spoke in a stilted computerized voice, and it was almost never wrong.
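That green-or-orange display reflects a thresholding decision: ring in only when the top answer's estimated confidence is high enough to be worth the risk. A minimal sketch of that gate, with made-up candidate answers and an illustrative 0.5 cutoff that is not IBM's actual logic:

```python
def should_buzz(candidates, threshold=0.5):
    """Return the top answer only if its estimated confidence clears a
    threshold. `candidates` is a list of (answer, confidence) pairs;
    both the pairs and the 0.5 cutoff are illustrative assumptions."""
    if not candidates:
        return None
    answer, confidence = max(candidates, key=lambda c: c[1])
    return answer if confidence >= threshold else None

# High confidence: buzz in.
confident = should_buzz([("Jericho", 0.93), ("Ur", 0.04)])
# Low confidence: stay silent rather than risk a wrong answer.
hesitant = should_buzz([("Crete", 0.31), ("Cyprus", 0.28)])
```

Staying silent on low-confidence clues is what keeps a machine that is "almost never wrong" from bleeding money on the clues it can't parse.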
The machine started off on a roll in the category “Chicks Dig Me,” about women and archaeology. Jericho. Agatha Christie. Mary Leakey. Crete. Watson fired off the answers so quickly it looked like it might blow its puny human competition off the stage. Fortunately for our species’ pride, Ken and Brad recovered with some right answers of their own.
Is the human brain in final jeopardy?
Last April IBM announced its newest plan to crush humans in the gaming sphere: After besting us at chess, it would conquer us at “Jeopardy!” Since then the game-show-playing computer, Watson, has been in development (cue ’80s training montage featuring computer programmers). J-Day approaches, and this fall the battle should commence.
In a lengthy New York Times Magazine feature on Watson this week, some of the details of the match became clear. It will take place this fall on national television.
Watson will not appear as a contestant on the regular show; instead, “Jeopardy!” will hold a special match pitting Watson against one or more famous winners from the past. If the contest includes Ken Jennings — the best player in “Jeopardy!” history, who won 74 games in a row in 2004 — Watson will lose if its performance doesn’t improve. It’s pretty far up in the winner’s cloud, but it’s not yet at Jennings’s level… The show’s executive producer, Harry Friedman, will not say whom it is picking to play against Watson, but he refused to let Jennings be interviewed for this story, which is suggestive [The New York Times].
If you’ve always yearned for “Jeopardy!” to feature the same kind of obsessive game-planning and secrecy as professional sports, this is your time. While the show’s side won’t reveal the human contestants, IBM is busy testing Watson against humanity in mock “Jeopardy!” games. They even found a fake Alex Trebek (and no, it’s not Will Ferrell).
I.B.M.’s scientists began holding live matches last winter. They mocked up a conference room to resemble the actual “Jeopardy!” set, including buzzers and stations for the human contestants, brought in former contestants from the show and even hired a host for the occasion: Todd Alan Crain, who plays a newscaster on the satirical Onion News Network [The New York Times].
An artificial brain as powerful as a human’s remains a distant goal, but scientists are inching closer. This week IBM announced that, using a brain-simulation algorithm called BlueMatter, researchers have run a cortical simulation that exceeds the scale of a cat’s cerebral cortex.
Researchers used an IBM supercomputer at the Lawrence Livermore Lab to model the movement of data through a structure with 1 billion neurons and 10 trillion synapses, which allowed them to see how information “percolates” through a system that’s comparable to a feline cerebral cortex [San Jose Mercury News]. The team’s previous effort two years ago, modeled after a rat brain, simulated only about 55 million neurons.
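To get a feel for what "percolating" activity means, here is a minimal leaky integrate-and-fire sketch: a toy network of 100 neurons rather than a billion, in which a spike deposits charge on the neurons it connects to, charge leaks away over time, and a neuron fires when it crosses a threshold. All the constants (connection probability, leak, threshold, input strength) are illustrative choices, not BlueMatter's:

```python
import random

def simulate(n_neurons, n_steps, p_connect=0.1, threshold=1.0, leak=0.9, seed=0):
    """Toy leaky integrate-and-fire network; returns spike counts per step."""
    rng = random.Random(seed)
    # synapses[i] lists the neurons that neuron i projects to.
    synapses = [[j for j in range(n_neurons) if j != i and rng.random() < p_connect]
                for i in range(n_neurons)]
    potential = [0.0] * n_neurons
    spikes = {0}  # kick the network with one initial spike
    history = []
    for _ in range(n_steps):
        new_spikes = set()
        for i in range(n_neurons):
            potential[i] *= leak  # membrane charge leaks toward rest
            # Add input from every neuron that spiked last step and targets i.
            potential[i] += 0.4 * sum(1 for j in spikes if i in synapses[j])
            if potential[i] >= threshold:
                new_spikes.add(i)
                potential[i] = 0.0  # reset after firing
        spikes = new_spikes
        history.append(len(spikes))
    return history

activity = simulate(n_neurons=100, n_steps=20)
```

Even at this scale you can watch a wave of activity spread, echo, or die out step by step, which is the qualitative phenomenon the Lawrence Livermore runs studied at a billion-neuron scale.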
The staggering surge in computing power has engineers like IBM’s Dharmendra Modha drooling over the possibilities for more brain-like computers. By reverse engineering [the] cortical structure, Modha says, researchers could give machines the ability to interpret biological senses such as sight, hearing and touch. And artificial machine brains could process, intelligently, senses that don’t currently exist in the natural world, such as radar and laser range-finding [Popular Mechanics].
It should come as no surprise that the design suggests such military applications, as DARPA provided much of the funding. But like the Internet and other technologies originally developed for the military, BlueMatter’s abilities could lead in a multitude of directions. “As our digital and physical worlds collide, there is a tsunami of information,” Modha said. “There is a need for a new kind of intelligence that can sort through, prioritize and extract the most important information, much like how the brain deals with sight, sounds, tastes, touch and smell” [San Jose Mercury News].
80beats: Watson, an IBM Supercomputer, Could be the Next “Jeopardy!” Champion
80beats: At the New Singularity University, Ray Kurzweil Will Train Young Futurists
80beats: Computers Take the Turing Test for Artificial Intelligence, But Fall Short
Image: IBM Almaden research lab, Stanford University
A computer is being prepared to compete in the quiz show Jeopardy, and if its developers at IBM have their way, it could well become the next great contestant to beat. The computer, called Watson, will have to interpret the question, process puns and other word games, search through its database and determine the correct answer, all within less than a second—the reaction time of “Jeopardy” players [PCMag.com].
Its developers are aiming not at a true thinking machine but at a new class of software that can “understand” human questions and respond to them correctly. Such a program would have enormous economic implications [The New York Times]. Watson will not be connected to the Internet, and instead will have to rely on its own content database, just as a human contestant must rely on her own store of knowledge.
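The "interpret the question, search the database, pick an answer" pipeline can be caricatured with a single keyword-overlap scorer. The tiny database and the scoring rule below are hypothetical stand-ins; the real Watson combines hundreds of scorers over a far larger store:

```python
# Tiny offline store standing in for Watson's content database (title -> text).
# Entries are invented for illustration.
DATABASE = {
    "Jericho": "ancient city excavated by archaeologist Kathleen Kenyon",
    "Agatha Christie": "mystery writer who joined archaeological digs in Syria",
    "Mary Leakey": "archaeologist who discovered early hominid fossils",
}

def answer(clue):
    """Score each stored entry by word overlap with the clue; return the best."""
    clue_words = set(clue.lower().split())
    def score(item):
        title, text = item
        return len(clue_words & set(text.lower().split()))
    best_title, _ = max(DATABASE.items(), key=score)
    return best_title

response = answer("This mystery writer accompanied digs in Syria")
```

Everything hard about the real problem (puns, category tricks, doing this across millions of documents in under a second) lives in the gap between this toy and Watson.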
Researchers have built a robot that doesn’t just perform pre-programmed tasks like a factory worker, but instead is capable of generating its own hypotheses and then running experiments to test them–like a scientist. The robot, named Adam, was set to work investigating the genetics of brewer’s yeast, and made 12 small discoveries. Lead researcher Ross King says that Adam’s results were modest, but real. “It’s certainly a contribution to knowledge. It would be publishable,” he says [New Scientist].
Adam isn’t a humanoid robot; it consists of a sophisticated software program running on four computers and a roomful of lab equipment that carries out its commands. The researchers gave Adam a freezer full of yeast strains and a database containing information about the yeast’s genes and enzymes, and asked Adam to determine which genes code for specific enzymes. The robot came up with hypotheses, devised experiments to test them, ran the experiments, and interpreted the results. In all, Adam formulated and tested 20 hypotheses about genes coding for 13 enzymes. Twelve hypotheses were confirmed. For instance, Adam correctly hypothesised that three genes it identified encode an enzyme important in producing the amino acid lysine. The researchers confirmed Adam’s work with their own experiments [New Scientist].
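Adam's generate-and-test loop can be sketched in miniature. The gene and enzyme names, the ground-truth mapping, and the simulated assay below are all invented for illustration; in the real study the hypotheses came from a genome database and the "experiments" were robotic wet-lab assays:

```python
import itertools

# Invented ground truth so the simulated "experiment" can return a result.
TRUE_MAPPING = {"geneA": "enzyme1", "geneB": "enzyme2", "geneC": "enzyme1"}

def run_experiment(gene, enzyme):
    """Simulated assay: does this gene actually encode this enzyme?"""
    return TRUE_MAPPING.get(gene) == enzyme

def adam_loop(genes, enzymes):
    """Generate-and-test loop: propose every gene -> enzyme hypothesis,
    'run' an experiment for each, and keep the confirmed ones."""
    confirmed = []
    for hypothesis in itertools.product(genes, enzymes):
        if run_experiment(*hypothesis):
            confirmed.append(hypothesis)
    return confirmed

results = adam_loop(["geneA", "geneB", "geneC"], ["enzyme1", "enzyme2"])
```

The structure mirrors the article's description: a pool of candidate hypotheses, an experiment per hypothesis, and a shortlist of confirmed findings, with the hard parts (choosing informative experiments, interpreting noisy results) elided.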