Will Artificial Intelligence Take Down a Master Go Player?

By Carl Engelking | March 8, 2016 5:51 pm

(Credit: yanugkelid/Shutterstock)

Match #1 Update: AlphaGo has won its first match against Lee Sedol. Through most of the game, Sedol appeared to hold the advantage, but AlphaGo took the lead in the final 20 minutes. Sedol ultimately resigned, handing AlphaGo the first victory of the five-match tournament. Google DeepMind’s Demis Hassabis shared the news early Wednesday morning:


From chess to bowling, man-versus-machine showdowns haven’t favored flesh-and-blood lately. Open a Google search and type the word “machine” or “computer” before the names Garry Kasparov, Ken Jennings, David Boys or Chris Barnes to see how humans have fared.

Come March 14, will Lee Sedol, a 33-year-old world champion Go player, join that list? Go is a ridiculously difficult, 3,000-year-old Chinese board game played with circular black and white stones on a 19-by-19 grid. It’s a game of complexity, depth and nuance, and computers weren’t expected to master it for at least another decade. It appears we’re ahead of schedule.

Sedol will attempt to take down Google DeepMind’s AlphaGo algorithm, which utilizes artificial intelligence and deep learning to play the complicated game. Sedol and AlphaGo will square off in a five-match Go tournament March 8-14 in Seoul, South Korea. One match will be played each day starting at 11 p.m. EST. Each match is expected to last about four or five hours, and you can watch Sedol battle silicon live throughout the week right here.

Playing Go

Go was considered the “holy grail” of artificial intelligence because of the staggering number of possible moves available on any given turn. In chess, a player weighs roughly 35 possible moves per turn; in Go, that number can exceed 300.
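To get a feel for why that difference matters, here’s a rough back-of-envelope calculation in Python. The figures used below (about 35 moves per turn over roughly 80 plies for chess, about 250 moves over roughly 150 plies for Go) are commonly cited approximations, not exact values:

```python
import math

def tree_size_digits(branching: int, depth: int) -> int:
    """Number of decimal digits in branching**depth, i.e. the order of
    magnitude of a game tree with that branching factor and depth."""
    return math.floor(depth * math.log10(branching)) + 1

# chess: ~35 legal moves per turn, games of ~80 plies
chess_digits = tree_size_digits(35, 80)
# Go: ~250 legal moves per turn, games of ~150 plies
go_digits = tree_size_digits(250, 150)

print(f"chess search space: ~10^{chess_digits - 1}")  # ~10^123
print(f"go search space:    ~10^{go_digits - 1}")     # ~10^359
```

The point isn’t the exact exponents; it’s that Go’s tree is hundreds of orders of magnitude larger than chess’s, which is why brute-force search alone was never going to work.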

To conquer Go, DeepMind’s Demis Hassabis and David Silver combined deep learning with tree search capabilities to pare down the amount of information AlphaGo sifts through. Deep learning algorithms rely on artificial neural networks that operate similarly to the connections in our brain, and they allow computers to identify patterns from mounds of data at a speed humans could never obtain.

Hassabis and Silver fed AlphaGo a collection of 30 million moves from games played by skilled human Go players, until it could correctly predict a player’s next move 57 percent of the time; the previous record was 44 percent. Then AlphaGo played thousands of games against its own neural networks to improve its skills through trial and error.

AlphaGo’s strength lies in its combination of two networks: a policy network and a value network. The policy network cuts down the number of candidate moves AlphaGo needs to examine at any point. The value network lets AlphaGo cut short the depth of the search by evaluating a board position in isolation and estimating who’s winning.
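The division of labor between the two networks can be sketched in a few lines of Python. This is an illustrative toy, not DeepMind’s actual Monte Carlo tree search: `policy` and `value` are stand-in functions, and the search is a simple depth-limited lookahead rather than AlphaGo’s real algorithm:

```python
def search(state, depth, policy, value, legal_moves, play,
           top_k=5, max_depth=3):
    """Negamax-style lookahead, pruned in breadth by a policy function
    and cut off in depth by a value function (both are stand-ins here)."""
    moves = legal_moves(state)
    if depth >= max_depth or not moves:
        # value network's job: estimate who's winning without searching deeper
        return value(state)
    # policy network's job: keep only the few most promising moves
    candidates = sorted(moves, key=lambda m: policy(state, m),
                        reverse=True)[:top_k]
    # examine only those candidates; sign flip = opponent's perspective
    return max(-search(play(state, m), depth + 1, policy, value,
                       legal_moves, play, top_k, max_depth)
               for m in candidates)
```

Without the policy pruning, the loop would fan out over every legal move; without the value cutoff, the recursion would have to run to the end of the game. Cutting both breadth and depth is what makes the search tractable.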

Tale of the Tape

In October 2015, AlphaGo handily defeated three-time European Go champion Fan Hui five games to zero. Based on Go’s ranking system, Sedol is a far more formidable opponent than Fan. Over the past five months, AlphaGo has been playing millions of games against itself to sharpen its play. Still, AlphaGo is ultimately only as good as the professional games its training was built upon.

If AlphaGo wins, it’s another feather in the cap for DeepMind’s artificial intelligence research. If the DeepMind team proves that computers can master Go, it’s further evidence that deep-learning algorithms may be ready to tackle other problems requiring long, complex calculations — drug discovery or self-driving vehicles, for example.

The winner of the match will receive $1 million, and if AlphaGo wins, the money will be donated to charity. But in the end, everyone wins when humans match wits with machines. Should Sedol win, it’s a win for gray matter. Should AlphaGo win, it’s a testament to human engineering.

  • OWilson

    Evaluating “what if” scenarios is what AI (a frequently misunderstood misnomer for computers) does best.

    Chess, whatever, no contest. There are a mathematically limited number of options. It is a “closed” system, some 64 squares.

    But you’ll still need human help to figure out the power interruption and push that loose plug back in during a match :)

    • Don Huntington

      Right. It will be a red flag, I think, when the computer can push the loose plug in.

  • boonteetan

    Even if the human player is of equal or higher standard, the computer would win in the end. The difference lies in human mental fatigue, especially after long hours of intense concentration, whereas the computer stays fresh practically all the time — unless someone keeps switching its power on and off. Try and see.
