AlphaGo Zero is a version of DeepMind's Go software AlphaGo. The AlphaGo team published an article in the journal Nature on 19 October 2017 introducing AlphaGo Zero, a version created without using data from human games and stronger than any previous version. By playing games against itself, AlphaGo Zero surpassed the strength of AlphaGo Lee in three days, winning 100 games to 0; it reached the level of AlphaGo Master in 21 days and exceeded all previous versions in 40 days.
Training artificial intelligence (AI) without datasets derived from human experts has significant implications for the development of AI with superhuman skills because expert data is "often expensive, unreliable or simply unavailable." Demis Hassabis, the co-founder and CEO of DeepMind, said that AlphaGo Zero was so powerful because it was "no longer constrained by the limits of human knowledge". David Silver, one of the first authors of DeepMind's papers published in Nature on AlphaGo, said that it is possible to have generalised AI algorithms by removing the need to learn from humans.
In December 2017, a generalized version of AlphaGo Zero, named AlphaZero, beat the 3-day version of AlphaGo Zero 60 games to 40, and with 8 hours of training it outperformed AlphaGo Lee in Elo rating, as well as a top chess program (Stockfish) and a top shōgi program (Elmo).
AlphaGo Zero's neural network was trained using TensorFlow, with 64 GPU workers and 19 CPU parameter servers. Only four TPUs were used for inference. The neural network initially knew nothing about Go beyond the rules. Unlike earlier versions of AlphaGo, Zero only perceived the board's stones, rather than having some rare human-programmed edge cases to help recognize unusual Go board positions. The AI engaged in reinforcement learning, playing against itself until it could anticipate its own moves and how those moves would affect the game's outcome. In the first three days AlphaGo Zero played 4.9 million games against itself in quick succession. It appeared to develop the skills required to beat top humans within just a few days, whereas the earlier AlphaGo took months of training to achieve the same level.
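The Nature paper describes the training target behind this self-play loop: the network's move probabilities are pushed toward the probabilities produced by its search, and its position evaluation toward the eventual game outcome. The following is a minimal NumPy sketch of that combined loss; the function name, toy inputs, and regularisation constant are illustrative assumptions, not DeepMind's code.

```python
import numpy as np

def alphago_zero_loss(p_pred, v_pred, pi_target, z_outcome, theta, c=1e-4):
    """Combined AlphaGo Zero-style training loss: squared error on the
    predicted game outcome, cross-entropy between the search-derived
    move probabilities and the network's policy, plus L2 regularisation."""
    value_loss = (z_outcome - v_pred) ** 2                     # (z - v)^2
    policy_loss = -np.sum(pi_target * np.log(p_pred + 1e-8))   # -pi . log p
    l2_penalty = c * np.sum(theta ** 2)                        # c * ||theta||^2
    return value_loss + policy_loss + l2_penalty

# Toy example: a three-move position where search favoured the first move
# and the self-play game was eventually won (z = +1).
p_pred = np.array([0.6, 0.3, 0.1])     # network's move probabilities
pi_target = np.array([0.7, 0.2, 0.1])  # visit-count distribution from search
print(alphago_zero_loss(p_pred, v_pred=0.4, z_outcome=1.0,
                        theta=np.random.randn(10)))
```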
For comparison, the researchers also trained a version of AlphaGo Zero using human games, AlphaGo Master, and found that it learned more quickly, but actually performed more poorly in the long run. DeepMind submitted its initial findings in a paper to Nature in April 2017, which was then published in October 2017.
According to Hassabis, AlphaGo's algorithms are likely to be of the most benefit to domains that require an intelligent search through an enormous space of possibilities, such as protein folding or accurately simulating chemical reactions. AlphaGo's techniques are probably less useful in domains that are difficult to simulate, such as learning how to drive a car. DeepMind stated in October 2017 that it had already started active work on attempting to use AlphaGo Zero technology for protein folding, and stated it would soon publish new findings.
AlphaGo Zero was widely regarded as a significant advance, even when compared with its groundbreaking predecessor, AlphaGo. Oren Etzioni of the Allen Institute for Artificial Intelligence called AlphaGo Zero "a very impressive technical result" in "both their ability to do it—and their ability to train the system in 40 days, on four TPUs". The Guardian called it a "major breakthrough for artificial intelligence", citing Eleni Vasilaki of Sheffield University and Tom Mitchell of Carnegie Mellon University, who called it an impressive feat and an "outstanding engineering accomplishment", respectively. Mark Pesce of the University of Sydney called AlphaGo Zero "a big technological advance" taking us into "undiscovered territory".
Gary Marcus, a psychologist at New York University, has cautioned that for all we know, AlphaGo may contain "implicit knowledge that the programmers have about how to construct machines to play problems like Go" and will need to be tested in other domains before being sure that its base architecture is effective at much more than playing Go. In contrast, DeepMind is "confident that this approach is generalisable to a large number of domains".
In response to the reports, South Korean Go professional Lee Sedol said, "The previous version of AlphaGo wasn’t perfect, and I believe that’s why AlphaGo Zero was made." On the potential for AlphaGo's development, Lee said he would have to wait and see, but added that it would affect young Go players. Mok Jin-seok, who directs the South Korean national Go team, said the Go world has already been imitating the playing styles of previous versions of AlphaGo and creating new ideas from them, and he is hopeful that new ideas will come out of AlphaGo Zero. Mok also added that general trends in the Go world are now being influenced by AlphaGo’s playing style. "At first, it was hard to understand and I almost felt like I was playing against an alien. However, having had a great amount of experience, I’ve become used to it," Mok said. "We are now past the point where we debate the gap between the capability of AlphaGo and humans. It’s now between computers." Mok has reportedly already begun analyzing the playing style of AlphaGo Zero along with players from the national team. "Though having watched only a few matches, we received the impression that AlphaGo Zero plays more like a human than its predecessors," Mok said. Chinese Go professional Ke Jie commented on the remarkable accomplishments of the new program: "A pure self-learning AlphaGo is the strongest. Humans seem redundant in front of its self-improvement."
Comparison with predecessors
| Versions | Playing hardware | Elo rating | Matches |
| --- | --- | --- | --- |
| AlphaGo Fan | 176 GPUs, distributed | 3,144 | 5:0 against Fan Hui |
| AlphaGo Lee | 48 TPUs, distributed | 3,739 | 4:1 against Lee Sedol |
| AlphaGo Master | 4 TPUs, single machine | 4,858 | 60:0 against professional players |
| AlphaGo Zero (40 days) | 4 TPUs, single machine | 5,185 | 100:0 against AlphaGo Lee; 89:11 against AlphaGo Master |
| AlphaZero (34 hours) | 4 TPUs, single machine | 4,000 (est.) | 60:40 against a 3-day AlphaGo Zero |

Note: the table lists playing hardware; hardware used during training may be substantially more powerful.
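The Elo column can be read as an expected win probability. Under the standard logistic Elo model (a general rating convention, not something specific to DeepMind's evaluation), a rating gap maps to an expected score as in the short sketch below.

```python
def elo_expected_score(r_a, r_b):
    """Expected score of player A against player B under the
    standard logistic Elo model with a 400-point scale."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

# Example: the 40-day AlphaGo Zero (5,185) versus AlphaGo Master (4,858).
# A 327-point gap gives an expected score of about 0.87, broadly in line
# with the 89:11 match result in the table above.
print(round(elo_expected_score(5185, 4858), 2))  # 0.87
```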
On 5 December 2017, the DeepMind team released a preprint on arXiv introducing AlphaZero, a program that generalized AlphaGo Zero's approach and achieved, within 24 hours, a superhuman level of play in chess, shogi, and Go, defeating a world-champion program in each case (Stockfish, Elmo, and the 3-day version of AlphaGo Zero, respectively). The paper notes several differences between AlphaZero (AZ) and AlphaGo Zero (AGZ):
- AZ has hard-coded rules for setting search hyperparameters.
- The neural network is now updated continually.
- Go (unlike chess) is symmetric under certain reflections and rotations; AGZ was programmed to take advantage of these symmetries, whereas AZ is not (see the sketch after this list).
- Chess (unlike Go) can end in a draw; therefore AZ can take into account the possibility of a drawn game.
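To illustrate the symmetry point above: a Go board has eight equivalent orientations (four rotations, each with an optional reflection), which AGZ exploited during training and AZ does not. The NumPy sketch below simply enumerates those orientations for an arbitrary board array; it is an illustration, not DeepMind's implementation.

```python
import numpy as np

def board_symmetries(board):
    """Return the eight dihedral symmetries of a square board:
    four 90-degree rotations, each with and without a reflection."""
    boards = []
    for k in range(4):
        rotated = np.rot90(board, k)
        boards.append(rotated)
        boards.append(np.fliplr(rotated))
    return boards

# Toy 3x3 example with a single stone; all eight orientations are generated.
toy = np.zeros((3, 3), dtype=int)
toy[0, 1] = 1
print(len(board_symmetries(toy)))  # 8
```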