AlphaGo
AlphaGo is a computer program developed by Google DeepMind in London to play the board game Go.[1] In October 2015, it became the first computer Go program to beat a professional human Go player without handicaps on a full-sized 19×19 board.[2][3] In March 2016, it beat Lee Sedol in the first three games of a five-game match, the first time a computer Go program had beaten a 9-dan professional without handicaps.[4] However, it lost to Lee Sedol in the fourth game.
AlphaGo's algorithm uses a combination of machine learning and tree search techniques, combined with extensive training, both from human and computer play.
History and competitions
Go is considered much more difficult for computers to master than other games such as chess, because its much larger branching factor makes traditional AI methods such as alpha–beta pruning, tree traversal and heuristic search prohibitively expensive.[2][5]
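The scale of the problem can be illustrated with a back-of-the-envelope calculation. Using commonly cited approximate figures (an average branching factor of about 35 and game length of about 80 plies for chess, versus roughly 250 and 150 for Go; exact values vary by position and source), the game trees differ by hundreds of orders of magnitude:

```python
# Rough illustration of why exhaustive search is infeasible in Go.
# The branching factors and game lengths are commonly cited
# approximations, not exact values.
import math

chess_branching, chess_depth = 35, 80
go_branching, go_depth = 250, 150

# log10 of the number of leaf positions in a naive full-width search
chess_tree = chess_depth * math.log10(chess_branching)
go_tree = go_depth * math.log10(go_branching)

print(f"Chess game tree: ~10^{chess_tree:.0f} positions")
print(f"Go game tree:    ~10^{go_tree:.0f} positions")
```

This yields roughly 10^123 positions for chess against roughly 10^360 for Go, which is why full-width search techniques that work for chess do not transfer directly.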
Almost two decades after IBM's computer Deep Blue beat world chess champion Garry Kasparov in their 1997 match, the strongest Go programs using artificial intelligence techniques had only reached about amateur 5-dan level,[6] and still could not beat a professional Go player without handicaps.[2][3][7] In 2012, the software program Zen, running on a four-PC cluster, beat Masaki Takemiya (9p) twice, at five- and four-stone handicaps.[8] In 2013, Crazy Stone beat Yoshio Ishida (9p) at a four-stone handicap.[9]
AlphaGo represents a significant improvement over previous Go programs. In 500 games against other available Go programs, including Crazy Stone and Zen,[10] AlphaGo running on a single computer won all but one.[11] In a similar matchup, AlphaGo running on multiple computers won all 500 games played against other Go programs, and 77% of games played against AlphaGo running on a single computer. The distributed version in October 2015 was using 1,202 CPUs and 176 GPUs.[6]
Match against Fan Hui
In October 2015, the distributed version of AlphaGo defeated the European Go champion Fan Hui,[12] a 2-dan (out of 9 dan possible) professional, five to zero.[3][13] This was the first time a computer Go program had beaten a professional human player on a full-sized board without handicap.[14] The announcement was delayed until 27 January 2016 to coincide with the publication of a paper in the journal Nature[6] describing the algorithms used.[3]
Match against Lee Sedol
AlphaGo is currently challenging South Korean professional Go player Lee Sedol, who is ranked 9 dan,[7][needs update] with five games taking place at the Four Seasons Hotel in Seoul, South Korea on 9, 10, 12, 13, and 15 March 2016,[15][16] which will be streamed live on video.[17] Aja Huang, a DeepMind team member and amateur 6-dan Go player, will place stones on the Go board for AlphaGo, which will be running through Google's cloud computing with its servers located in the United States.[18] The match will adopt Chinese rules with a 7.5-point komi, and each side will have two hours of thinking time plus three 60-second byoyomi periods.[19] The version of AlphaGo playing against Lee uses 1,920 CPUs and 280 GPUs.[20]
Four games of the match have been played so far, the first three of which were won by AlphaGo following resignations by Lee Sedol.[21][22] Lee Sedol managed to beat AlphaGo in the fourth game, winning by resignation at move 180.
The winner's prize is US$1 million. Since AlphaGo won the first three games, and with them the series, the prize will be donated to charities, including UNICEF.[23] Lee Sedol will receive at least $150,000 for participating in all five games and an additional $20,000 for each win.[19]
Hardware
AlphaGo was tested on hardware with various numbers of CPUs and GPUs, running in asynchronous or distributed mode. Two seconds of thinking time is given to each move. The resulting Elo ratings are listed below.[6]
Configuration | Search threads | No. of CPUs | No. of GPUs | Elo rating |
---|---|---|---|---|
Asynchronous | 40 | 48 | 1 | 2,151 |
Asynchronous | 40 | 48 | 2 | 2,738 |
Asynchronous | 40 | 48 | 4 | 2,850 |
Asynchronous | 40 | 48 | 8 | 2,890 |
Distributed | 12 | 428 | 64 | 2,937 |
Distributed | 24 | 764 | 112 | 3,079 |
Distributed | 40 | 1,202 | 176 | 3,140 |
Distributed | 64 | 1,920 | 280 | 3,168 |
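Elo differences translate into expected win rates via the standard logistic Elo formula (this is generic Elo arithmetic, not anything specific to AlphaGo's evaluation methodology). As a sketch, comparing the strongest distributed configuration with the strongest single-machine one from the table above:

```python
# Expected win probability implied by an Elo rating gap, using the
# standard logistic Elo formula.
def elo_expected_score(rating_a, rating_b):
    """Probability that player A beats player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# Ratings from the table above: strongest distributed (3,168) vs.
# strongest single-machine asynchronous configuration (2,890).
p = elo_expected_score(3168, 2890)
print(f"Expected win rate of distributed vs. single-machine: {p:.0%}")
```

The resulting figure (about 83%) is broadly in line with the 77% win rate reported above for the distributed version against the single-computer version; an exact match is not expected, since the Elo ratings are fitted jointly across all matchups.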
Algorithm
AlphaGo's algorithm uses a combination of machine learning and tree search techniques, combined with extensive training, both from human and computer play. It uses Monte Carlo tree search, guided by a "value network" and a "policy network," both implemented using deep neural network technology.[2][6] A limited amount of game-specific feature detection pre-processing is used to generate the inputs to the neural networks.[6]
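The interaction between the policy network and the tree search can be sketched in miniature. The following is an illustrative simplification of a selection rule for a Monte Carlo tree search guided by a policy prior and accumulated value estimates, in the spirit of the approach described above; all names, the exploration constant, and the exact formula are assumptions for illustration, not AlphaGo's actual implementation.

```python
# Toy sketch: MCTS child selection guided by a policy prior.
# Q(s, a) is the mean value-network estimate over visits; U(s, a) is an
# exploration bonus favoring moves the policy network rates highly but
# the search has visited rarely.
import math

class Node:
    def __init__(self, prior):
        self.prior = prior        # P(s, a) from the policy network
        self.visit_count = 0      # N(s, a)
        self.total_value = 0.0    # W(s, a), accumulated value estimates
        self.children = {}        # move -> Node

    def q(self):
        return self.total_value / self.visit_count if self.visit_count else 0.0

def select_child(node, c_puct=1.0):
    """Pick the child maximizing Q + U."""
    total_visits = sum(ch.visit_count for ch in node.children.values())
    def score(ch):
        u = c_puct * ch.prior * math.sqrt(total_visits) / (1 + ch.visit_count)
        return ch.q() + u
    return max(node.children.items(), key=lambda kv: score(kv[1]))

# Example: an unvisited move with a high prior outranks a visited,
# moderately good move until the search has explored it.
root = Node(prior=1.0)
root.children = {"D4": Node(prior=0.8), "Q16": Node(prior=0.2)}
root.children["Q16"].visit_count = 10
root.children["Q16"].total_value = 5.0
move, _ = select_child(root)
print(move)  # the high-prior, unexplored move "D4" wins the selection
```

The key design idea this illustrates is that the policy network prunes the search implicitly: moves with low prior probability receive little exploration bonus and are rarely expanded, which tames Go's large branching factor without hard-coded rules.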
The system's neural networks were initially bootstrapped from human gameplay expertise. AlphaGo was initially trained to mimic human play by attempting to match the moves of expert players from recorded historical games, using a database of around 30 million moves.[12] Once it had reached a certain degree of proficiency, it was trained further by being set to play large numbers of games against other instances of itself, using reinforcement learning to improve its play.[2]
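The two-stage pipeline above can be sketched schematically. The "policy" here is a bare lookup table and the self-play game is a stub; this illustrates only the data flow (supervised imitation, then reinforcement from self-play outcomes), not DeepMind's actual architecture or training code.

```python
# Schematic of the two-stage training pipeline: supervised imitation of
# expert moves, then self-play reinforcement learning. All names and
# the stub game are illustrative assumptions.

def supervised_step(policy, position, expert_move):
    """Nudge the policy toward the move a human expert played."""
    policy[(position, expert_move)] = policy.get((position, expert_move), 0) + 1

def self_play_step(policy, play_game):
    """Play one game against a copy of the current policy and reinforce
    the winner's moves (REINFORCE-style update, heavily simplified)."""
    moves, winner = play_game(policy)
    for player, position, move in moves:
        sign = 1 if player == winner else -1
        policy[(position, move)] = policy.get((position, move), 0) + sign

# Stage 1: imitate recorded expert games (~30 million positions in the
# Nature paper; a toy handful here).
policy = {}
for position, move in [("pos1", "D4"), ("pos2", "Q16")]:
    supervised_step(policy, position, move)

# Stage 2: improve by self-play against another instance of itself.
def toy_game(p):
    return [("black", "pos1", "D4"), ("white", "pos2", "Q16")], "black"

self_play_step(policy, toy_game)
```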
Style of play
AlphaGo has been described by the 9-dan player Myungwan Kim as playing "like a human" in its games against Fan Hui.[24] The match referee Toby Manning has described the program's style as "conservative."[25]
Responses to 2016 victory
AI community
AlphaGo's March 2016 victory was a major milestone in artificial intelligence research.[26] Go had previously been regarded as a hard problem in machine learning that was expected to be out of reach for the technology of the time.[26][27][28] Most experts thought a Go program as powerful as AlphaGo was at least five years away;[29] some experts thought that it would take at least another decade before computers would beat Go champions.[30][31] Most observers at the beginning of the 2016 matches expected Lee to beat AlphaGo, and even the designers of AlphaGo were surprised by its success: Google DeepMind co-founder Demis Hassabis said he was "stunned and speechless".[26]
With games such as checkers, chess, and now Go won by computer players, victories at popular board games can no longer serve as major milestones for artificial intelligence in the way that they used to. Deep Blue's Murray Campbell called AlphaGo's victory "the end of an era... board games are more or less done and it's time to move on."[26]
When compared with Deep Blue or with Watson, AlphaGo's underlying algorithms are potentially more general-purpose, and may be evidence that the scientific community is making progress toward artificial general intelligence.[32] Some commentators believe AlphaGo's victory makes for a good opportunity for society to start discussing preparations for the possible future impact of machines with general purpose intelligence. (Note that AlphaGo itself only knows how to play Go, and doesn't possess general purpose intelligence: AlphaGo will not wake up one morning and decide it wants to learn how to use firearms.[26]) In March 2016, AI researcher Stuart Russell stated that "AI methods are progressing much faster than expected, (which) makes the question of the long-term outcome more urgent," adding that "in order to ensure that increasingly powerful AI systems remain completely under human control... there is a lot of work to do."[33] Some scholars, such as Stephen Hawking, warn that some future self-improving AI could gain actual general intelligence, leading to an unexpected AI takeover; other scholars disagree: AI expert Jean-Gabriel Ganascia believes that "Things like 'common sense'... may never be reproducible",[34] and says "I don't see why we would speak about fears. On the contrary, this raises hopes in many domains such as health and space exploration."[35] Computer scientist Richard Sutton said "I don't think people should be scared... but I do think people should be paying attention."[36]
Go community
Go is a popular game in South Korea, China and Japan, and the 2016 matches were watched and analyzed by millions of people worldwide.[26] Many top Go players characterized AlphaGo's unorthodox plays as seemingly questionable moves that initially befuddled onlookers, but made sense in hindsight:[30] "All but the very best Go players craft their style by imitating top players. AlphaGo seems to have totally original moves it creates itself."[26] AlphaGo appeared to have become unexpectedly much stronger, even when compared with its October 2015 match,[37] in which a computer had beaten a Go professional for the first time without the advantage of a handicap.[38] China's number one player, teenager Ke Jie, claims that he would be able to beat AlphaGo, but has so far refused to play against it for fear that it would "copy my style".[39]
Toby Manning, the referee of AlphaGo's match against Fan Hui, and Hajin Lee, secretary general of the International Go Federation, both reason that in the future, Go players will get help from computers to learn what they have done wrong in games and improve their skills.[38]
Lee apologized for his losses, stating that "I misjudged the capabilities of AlphaGo and felt powerless."[26] He emphasized that the defeat was "Lee Se-dol's defeat" and "not a defeat of (all) human beings," and pointed out that the program does have weaknesses. Lee said his eventual loss to a machine was "inevitable" but stated that "robots will never understand the beauty of the game the same way that we humans do."[34]
Similar systems
Facebook has also been working on its own Go-playing system, darkforest, likewise based on combining machine learning and tree search.[25][40] Although a strong player against other computer Go programs, as of early 2016 it had not yet defeated a professional human player.[41] darkforest has lost to Crazy Stone and Zen and is estimated to be of similar strength to them.[42]
Example game
AlphaGo (black) v. Fan Hui, Game 4 (8 October 2015), AlphaGo won by resignation.[6]
[Game diagram: first 99 moves (96 at 10)]
[Game diagram: moves 100–165]
See also
- Go and mathematics
- Deep Blue (chess computer)
- Chinook (draughts player), draughts playing program
- TD-Gammon, backgammon neural network
References
- ^ Artificial intelligence: Google's AlphaGo beats Go master Lee Se-dol
- ^ a b c d e "Research Blog: AlphaGo: Mastering the ancient game of Go with Machine Learning". Google Research Blog. 27 January 2016.
- ^ a b c d "Google achieves AI 'breakthrough' by beating Go champion". BBC News. 27 January 2016.
- ^ "Match 1 - Google DeepMind Challenge Match: Lee Sedol vs AlphaGo". 8 March 2016.
- ^ Schraudolph, Nicol N.; Dayan, Peter; Sejnowski, Terrence J., Temporal Difference Learning of Position Evaluation in the Game of Go (PDF)
- ^ a b c d e f g Silver, David; Huang, Aja; Maddison, Chris J.; Guez, Arthur; Sifre, Laurent; Driessche, George van den; Schrittwieser, Julian; Antonoglou, Ioannis; Panneershelvam, Veda (2016). "Mastering the game of Go with deep neural networks and tree search". Nature. 529 (7587): 484–489. doi:10.1038/nature16961. PMID 26819042.
- ^ a b "Computer scores big win against humans in ancient game of Go". CNN. 28 January 2016. Retrieved 28 January 2016.
- ^ "Zen computer Go program beats Takemiya Masaki with just 4 stones!". Go Game Guru. Retrieved 28 January 2016.
- ^ "「アマ六段の力。天才かも」囲碁棋士、コンピューターに敗れる 初の公式戦". MSN Sankei News. Retrieved 27 March 2013.
- ^ "Artificial intelligence breakthrough as Google's software beats grandmaster of Go, the 'most complex game ever devised'". Daily Mail. 27 January 2016. Retrieved 29 January 2016.
- ^ "Google AlphaGo AI clean sweeps European Go champion". ZDNet. 28 January 2016. Retrieved 28 January 2016.
- ^ a b Metz, Cade (2016-01-27). "In Major AI Breakthrough, Google System Secretly Beats Top Player at the Ancient Game of Go". WIRED. Retrieved 2016-02-01.
- ^ "Sepcial Computer Go insert covering the AlphaGo v Fan Hui match" (PDF). British Go Journal. Retrieved 2016-02-01.
- ^ "Première défaite d'un professionnel du go contre une intelligence artificielle" ["First defeat of a Go professional against an artificial intelligence"]. Le Monde (in French). 27 January 2016.
- ^ "Google's AI AlphaGo to take on world No 1 Lee Sedol in live broadcast". The Guardian. 5 February 2016. Retrieved 15 February 2016.
- ^ "Google DeepMind is going to take on the world's best Go player in a luxury 5-star hotel in South Korea". Business Insider. 22 February 2016. Retrieved 23 February 2016.
- ^ Novet, Jordan (February 4, 2016). "YouTube will livestream Google's AI playing Go superstar Lee Sedol in March". VentureBeat. Retrieved 2016-02-07.
- ^ "李世乭:即使Alpha Go得到升级也一样能赢" (in Chinese). JoongAng Ilbo. 23 February 2016. Retrieved 24 February 2016.
- ^ a b "이세돌 vs 알파고, '구글 딥마인드 챌린지 매치' 기자회견 열려" ["Lee Sedol vs AlphaGo: press conference held for the 'Google DeepMind Challenge Match'"] (in Korean). Korea Baduk Association. 22 February 2016. Retrieved 22 February 2016.
- ^ "Showdown". The Economist. March 12, 2016.
- ^ "Google's AI beats world Go champion in first of five matches - BBC News". BBC Online. Retrieved 9 March 2016.
- ^ "Google AI wins second Go game against world champion - BBC News". BBC Online. Retrieved 10 March 2016.
- ^ "Human champion certain he'll beat AI at ancient Chinese game". AP News. 22 February 2016. Retrieved 22 February 2016.
- ^ David, Eric (February 1, 2016). "Google's AlphaGo "plays just like a human," says top ranked Go player". SiliconANGLE. Retrieved 2016-02-03.
- ^ a b Gibney, Elizabeth (27 January 2016). "Google AI algorithm masters ancient game of Go". Nature News & Comment. Retrieved 2016-02-03.
- ^ a b c d e f g h http://www.latimes.com/world/asia/la-fg-korea-alphago-20160312-story.html
- ^ Connor, Steve (27 January 2016). "A computer has beaten a professional at the world's most complex board game". The Independent. Retrieved 28 January 2016.
- ^ "Google's AI beats human champion at Go". CBC News. 27 January 2016. Retrieved 28 January 2016.
- ^ http://www.popsci.com/googles-alphago-beats-world-champion-in-third-match-to-win-entire-series
- ^ a b http://www.cbc.ca/news/technology/go-google-alphago-lee-sedol-deepmind-1.3488913
- ^ http://money.cnn.com/2016/03/12/technology/google-deepmind-alphago-wins/
- ^ http://www.abc.net.au/news/2016-03-08/google-artificial-intelligence-to-face-board-game-champion/7231192
- ^ http://phys.org/news/2016-03-machines-eye-ai-experts.html
- ^ a b http://phys.org/news/2016-03-game-ai-human-smarts.html
- ^ http://phys.org/news/2016-03-machines-eye-ai-experts.html
- ^ http://www.businessinsider.com/what-does-googles-deepmind-victory-mean-for-ai-2016-3
- ^ http://www.pcworld.com/article/3043211/big-win-for-ai-as-google-alphago-program-trounces-korean-player-in-go-tournament.html
- ^ a b Gibney, Elizabeth (2016). "Go players react to computer defeat". Nature. doi:10.1038/nature.2016.19255.
- ^ http://www.telegraph.co.uk/news/worldnews/asia/china/12190917/Google-AlphaGo-cant-beat-me-says-China-Go-grandmaster.html
- ^ Tian, Yuandong; Zhu, Yan (2015). "Better Computer Go Player with Neural Network and Long-term Prediction". arXiv:1511.06410v1 [cs.LG].
- ^ HAL 90210 (2016-01-28). "No Go: Facebook fails to spoil Google's big AI day". The Guardian. ISSN 0261-3077. Retrieved 2016-02-01.
- ^ http://livestream.com/oxuni/StracheyLectureDrDemisHassabis