A general reinforcement learning algorithm that masters chess, shogi and Go through self-play

David Silver,1,2 Thomas Hubert,1 Julian Schrittwieser,1 Ioannis Antonoglou,1,2 Matthew Lai,1 Arthur Guez,1 Marc Lanctot,1 Laurent Sifre,1 Dharshan Kumaran,1,2 Thore Graepel,1,2 Timothy Lillicrap,1 Karen Simonyan,1 Demis Hassabis1

1 DeepMind, 6 Pancras Square, London N1C 4AG. 2 University College London, Gower Street, London WC1E 6BT. These authors contributed equally to this work.

Abstract

The game of chess is the longest-studied domain in the history of artificial intelligence. The strongest programs are based on a combination of sophisticated search techniques, domain-specific adaptations, and handcrafted evaluation functions that have been refined by human experts over several decades. By contrast, the AlphaGo Zero program recently achieved superhuman performance in the game of Go by reinforcement learning from self-play. In this paper, we generalize this approach into a single AlphaZero algorithm that can achieve superhuman performance in many challenging games. Starting from random play and given no domain knowledge except the game rules, AlphaZero convincingly defeated a world champion program in the games of chess and shogi (Japanese chess) as well as Go.

The study of computer chess is as old as computer science itself. Charles Babbage, Alan Turing, Claude Shannon, and John von Neumann devised hardware, algorithms and theory to analyse and play the game of chess. Chess subsequently became a grand challenge task for a generation of artificial intelligence researchers, culminating in high-performance computer chess programs that play at a super-human level (1, 2). However, these systems are highly tuned to their domain, and cannot be generalized to other games without substantial human effort, whereas general game-playing systems (3, 4) remain comparatively weak. A long-standing ambition of artificial intelligence has been to create programs that can instead learn for themselves from first principles (5, 6). Recently, the AlphaGo Zero algorithm achieved superhuman performance in the game of Go, by representing Go knowledge using deep convolutional neural networks (7, 8), trained solely by reinforcement learning from games

of self-play (9). In this paper, we introduce AlphaZero: a more generic version of the AlphaGo Zero algorithm that accommodates, without special-casing, a broader class of game rules. We apply AlphaZero to the games of chess and shogi as well as Go, using the same algorithm and network architecture for all three games. Our results demonstrate that a general-purpose reinforcement learning algorithm can learn superhuman performance across multiple challenging games tabula rasa, without domain-specific human knowledge or data, as evidenced by the same algorithm succeeding in multiple domains.

A landmark for artificial intelligence was achieved in 1997 when Deep Blue defeated the human world chess champion (1). Computer chess programs continued to progress steadily beyond human level in the following two decades. These programs evaluate positions using handcrafted features and carefully tuned weights, constructed by strong human players and programmers, combined with a high-performance alpha-beta search that expands a vast search tree using a large number of clever heuristics and domain-specific adaptations. In (10) we describe these augmentations, focusing on the 2016 Top Chess Engine Championship (TCEC) season 9 world-champion Stockfish (11); other strong chess programs, including Deep Blue, use very similar architectures (1, 12).

In terms of game tree complexity, shogi is a substantially harder game than chess (13, 14): it is played on a larger board with a wider variety of pieces; any captured opponent piece switches sides and may subsequently be dropped anywhere on the board. The strongest shogi programs, such as the 2017 Computer Shogi Association (CSA) world-champion Elmo, have only recently defeated human champions (15). These programs use an algorithm similar to those used by computer chess programs, again based on a highly optimized alpha-beta search engine with many domain-specific adaptations.

AlphaZero replaces the handcrafted knowledge and domain-specific augmentations used in traditional game-playing programs with deep neural networks, a general-purpose reinforcement learning algorithm, and a general-purpose tree search algorithm. Instead of a handcrafted evaluation function and move ordering heuristics, AlphaZero uses a deep neural network (p, v) = f_θ(s) with parameters θ. This neural network f_θ(s) takes the board position s as an input and outputs a vector of move probabilities p with components p_a = Pr(a | s) for each action a, and a scalar value v estimating the expected outcome z of the game from position s, v ≈ E[z | s]. AlphaZero learns these move probabilities and value estimates entirely from self-play; these are then used to guide its search in future games.

Instead of an alpha-beta search with domain-specific enhancements, AlphaZero uses a general-purpose Monte Carlo tree search (MCTS) algorithm. Each search consists of a series of simulated games of self-play that traverse a tree from root state s_root until a leaf state is reached. Each simulation proceeds by selecting in each state s a move a with low visit count (not previously frequently explored), high move probability and high value (averaged over the leaf states of simulations that selected a from s) according to the current neural network f_θ. The search returns a vector π representing a probability distribution over moves, π_a = Pr(a | s_root).

The parameters θ of the deep neural network in AlphaZero are trained by reinforcement learning from self-play games, starting from randomly initialized parameters θ.

Each game is played by running an MCTS search from the current position s_root = s_t at turn t, and then selecting a move, a_t ∼ π_t, either proportionally (for exploration) or greedily (for exploitation) with respect to the visit counts at the root state. At the end of the game, the terminal position s_T is scored according to the rules of the game to compute the game outcome z: −1 for a loss, 0 for a draw, and +1 for a win. The neural network parameters θ are updated to minimize the error between the predicted outcome v_t and the game outcome z, and to maximize the similarity of the policy vector p_t to the search probabilities π_t. Specifically, the parameters θ are adjusted by gradient descent on a loss function l that sums over mean-squared error and cross-entropy losses,

(p, v) = f_θ(s),    l = (z − v)^2 − π^T log p + c ||θ||^2,    (1)

where c is a parameter controlling the level of L2 weight regularization. The updated parameters are used in subsequent games of self-play.
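
To make Eq. 1 concrete, the following is a minimal sketch of the update step, written in PyTorch. It is not the authors' implementation: the model `net` (returning policy logits and a scalar value) and the `replay_buffer` of self-play positions are hypothetical stand-ins, and only the loss of Eq. 1 and a single gradient step are shown.

```python
import torch
import torch.nn.functional as F

def alphazero_loss(net, states, search_pis, outcomes, c=1e-4):
    """l = (z - v)^2 - pi^T log p + c * ||theta||^2, averaged over a batch."""
    logits, values = net(states)                     # policy logits and scalar values v
    value_loss = F.mse_loss(values.squeeze(-1), outcomes)          # (z - v)^2 term
    policy_loss = -(search_pis * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()
    l2 = sum((p ** 2).sum() for p in net.parameters())             # ||theta||^2 term
    return value_loss + policy_loss + c * l2

def training_step(net, optimizer, replay_buffer, batch_size=4096):
    """One gradient-descent step on positions sampled from recent self-play games."""
    states, search_pis, outcomes = replay_buffer.sample(batch_size)
    loss = alphazero_loss(net, states, search_pis, outcomes)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice the L2 term would usually be folded into the optimizer (e.g. as weight decay) rather than computed explicitly; it is written out here only to mirror Eq. 1.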

The AlphaZero algorithm described in this paper (see (10) for pseudocode) differs from the original AlphaGo Zero algorithm in several respects. AlphaGo Zero estimated and optimized the probability of winning, exploiting the fact that Go games have a binary win or loss outcome. However, both chess and shogi may end in drawn outcomes; it is believed that the optimal solution to chess is a draw (16-18). AlphaZero instead estimates and optimizes the expected outcome.

The rules of Go are invariant to rotation and reflection. This fact was exploited in AlphaGo and AlphaGo Zero in two ways. First, training data were augmented by generating eight symmetries for each position. Second, during MCTS, board positions were transformed by using a randomly selected rotation or reflection before being evaluated by the neural network, so that the Monte Carlo evaluation was averaged over different biases. To accommodate a broader class of games, AlphaZero does not assume symmetry; the rules of chess and shogi are asymmetric (e.g. pawns only move forward, and castling is different on kingside and queenside). AlphaZero does not augment the training data and does not transform the board position during MCTS.

In AlphaGo Zero, self-play games were generated by the best player from all previous iterations. After each iteration of training, the performance of the new player was measured against the best player; if the new player won by a margin of 55% then it replaced the best player. By contrast, AlphaZero simply maintains a single neural network that is updated continually, rather than waiting for an iteration to complete. Self-play games are always generated by using the latest parameters for this neural network.

Like AlphaGo Zero, the board state is encoded by spatial planes based only on the basic rules for each game. The actions are encoded by either spatial planes or a flat vector, again based only on the basic rules for each game (10).

AlphaGo Zero used a convolutional neural network architecture that is particularly well-suited to Go: the rules of the game are translationally invariant (matching the weight sharing structure of convolutional networks) and are defined in terms of liberties corresponding to the adjacencies between points on the board (matching the local structure of convolutional networks). By contrast, the rules of chess and shogi are position-dependent (e.g. pawns may move two steps forward from the second rank and promote on the eighth rank) and include long-range interactions (e.g. the queen may traverse the board in one move). Despite these differences, AlphaZero uses the same convolutional network architecture as AlphaGo Zero for chess, shogi and Go.

The hyperparameters of AlphaGo Zero were tuned by Bayesian optimization. In AlphaZero we reuse the same hyperparameters, algorithm settings and network architecture for all games without game-specific tuning. The only exceptions are the exploration noise and the learning rate schedule (see (10) for further details).

We trained separate instances of AlphaZero for chess, shogi and Go. Training proceeded for 700,000 steps (in mini-batches of 4,096 training positions) starting from randomly initialized parameters. During training only, 5,000 first-generation tensor processing units (TPUs) (19) were used to generate self-play games, and 16 second-generation TPUs were used to train the neural networks. Training lasted for approximately 9 hours in chess, 12 hours in shogi and 13 days in Go (see table S3) (20). Further details of the training procedure are provided in (10).

Figure 1: Training AlphaZero for 700,000 steps. Elo ratings were computed from games between different players where each player was given one second per move. (A) Performance of AlphaZero in chess, compared with the 2016 TCEC world-champion program Stockfish. (B) Performance of AlphaZero in shogi, compared with the 2017 CSA world-champion program Elmo. (C) Performance of AlphaZero in Go, compared with AlphaGo Lee and AlphaGo Zero (20 blocks over 3 days).

Figure 1 shows the performance of AlphaZero during self-play reinforcement learning, as a function of training steps, on an Elo (21) scale (22). In chess, AlphaZero first outperformed Stockfish after just 4 hours (300,000 steps); in shogi, AlphaZero first outperformed Elmo after 2 hours (110,000 steps); and in Go, AlphaZero first outperformed AlphaGo Lee (9) after 30 hours (74,000 steps). The training algorithm achieved similar performance in all independent runs (see fig. S3), suggesting that the high performance of AlphaZero's training algorithm is repeatable.

We evaluated the fully trained instances of AlphaZero against Stockfish, Elmo and the previous version of AlphaGo Zero in chess, shogi and Go respectively. Each program was run on the hardware for which it was designed (23): Stockfish and Elmo used 44 central processing unit (CPU) cores (as in the TCEC world championship), whereas AlphaZero and AlphaGo Zero used a single machine with four first-generation TPUs and 44 CPU cores (24). The chess match

was played against the 2016 TCEC (season 9) world champion Stockfish (see (10) for details). The shogi match was played against the 2017 CSA world champion version of Elmo (10). The Go match was played against the previously published version of AlphaGo Zero (also trained for 700,000 steps (25)). All matches were played using time controls of 3 hours per game, plus an additional 15 seconds for each move.

In Go, AlphaZero defeated AlphaGo Zero (9), winning 61% of games. This demonstrates that a general approach can recover the performance of an algorithm that exploited board symmetries to generate eight times as much data (see also fig. S1).

In chess, AlphaZero defeated Stockfish, winning 155 games and losing 6 games out of 1,000 (Fig. 2). To verify the robustness of AlphaZero, we played additional matches that started from common human openings (Fig. 3). AlphaZero defeated Stockfish in each opening, suggesting that AlphaZero has mastered a wide spectrum of chess play. The frequency plots in Fig. 3 and the timeline in fig. S2 show that common human openings were independently discovered and played frequently by AlphaZero during self-play training. We also played a match that started from the set of opening positions used in the 2016 TCEC world championship; AlphaZero won convincingly in this match too (26) (see fig. S4). We played additional matches against the most recent development version of Stockfish (27), and a variant of Stockfish that uses a strong opening book (28). AlphaZero won all matches by a large margin (Fig. 2). Table S6 shows 20 chess games played by AlphaZero in its matches against Stockfish. In several games AlphaZero sacrificed pieces for long-term strategic advantage, suggesting that it has a more fluid, context-dependent positional evaluation than the rule-based evaluations used by previous chess programs.

In shogi, AlphaZero defeated Elmo, winning 98.2% of games when playing black, and 91.2% overall. We also played a match under the faster time controls used in the 2017 CSA world championship, and against another state-of-the-art shogi program (29); AlphaZero again won both matches by a wide margin (Fig. 2). Table S7 shows 10 shogi games played by AlphaZero in its matches against Elmo. The frequency plots in Fig. 3 and the timeline in fig. S2 show that AlphaZero frequently plays one of the two most common human openings, but rarely plays the second, deviating on the very first move.

AlphaZero searches just 60,000 positions per second in chess and shogi, compared with 60 million for Stockfish and 25 million for Elmo (table S4). AlphaZero may compensate for the lower number of evaluations by using its deep neural network to focus much more selectively on the most promising variations (Fig. 4 provides an example from the match against Stockfish); this is arguably a more human-like approach to search, as originally proposed by Shannon (30). AlphaZero also defeated Stockfish when given 1/10 as much thinking time as its opponent (i.e. searching ~1/10,000 as many positions), and won 46% of games against Elmo when given 1/100 as much time (i.e. searching ~1/40,000 as many positions), see Fig. 2. The high performance of AlphaZero, using MCTS, calls into question the widely held belief (31, 32) that alpha-beta search is inherently superior in these domains.

The game of chess represented the pinnacle of artificial intelligence research over several

decades. State-of-the-art programs are based on powerful engines that search many millions of positions, leveraging handcrafted domain expertise and sophisticated domain adaptations. AlphaZero is a generic reinforcement learning and search algorithm, originally devised for the game of Go, that achieved superior results within a few hours, searching 1/1,000 as many positions, given no domain knowledge except the rules of chess. Furthermore, the same algorithm was applied without modification to the more challenging game of shogi, again outperforming state-of-the-art programs within a few hours. These results bring us a step closer to fulfilling a longstanding ambition of artificial intelligence (3): a general game-playing system that can learn to master any game.

References

1. M. Campbell, A. J. Hoane, F. Hsu, Artificial Intelligence 134, 57 (2002).
2. F.-h. Hsu, Behind Deep Blue: Building the Computer that Defeated the World Chess Champion (Princeton University Press, 2002).
3. B. Pell, Computational Intelligence 12, 177 (1996).
4. M. R. Genesereth, N. Love, B. Pell, AI Magazine 26, 62 (2005).
5. A. L. Samuel, IBM Journal of Research and Development 11, 601 (1967).
6. G. Tesauro, Neural Computation 6, 215 (1994).
7. C. J. Maddison, A. Huang, I. Sutskever, D. Silver, International Conference on Learning Representations (2015).
8. D. Silver, et al., Nature 529, 484 (2016).
9. D. Silver, et al., Nature 550, 354 (2017).
10. See the supplementary materials for additional information.
11. T. Romstad, M. Costalba, J. Kiiski, et al., Stockfish: A strong open source chess engine. Retrieved November 29th, 2017.
12. D. N. L. Levy, M. Newborn, How Computers Play Chess (Ishi Press, 2009).
13. V. Allis, Searching for solutions in games and artificial intelligence, Ph.D. thesis, University of Limburg, Netherlands (1994).
14. H. Iida, M. Sakuta, J. Rollason, Artificial Intelligence 134, 121 (2002).

Figure 2: Comparison with specialized programs. (A) Tournament evaluation of AlphaZero in chess, shogi, and Go in matches against, respectively, Stockfish, Elmo, and the previously published version of AlphaGo Zero (AG0) that was trained for 3 days. In the top bar, AlphaZero plays white; in the bottom bar AlphaZero plays black. Each bar shows the results from AlphaZero's perspective: win (W, green), draw (D, grey), loss (L, red). (B) Scalability of AlphaZero with thinking time, compared to Stockfish and Elmo. Stockfish and Elmo always receive full time (3 hours per game plus 15 seconds per move); time for AlphaZero is scaled down as indicated. (C) Extra evaluations of AlphaZero in chess against the most recent version of Stockfish at the time of writing (27), and against Stockfish with a strong opening book (28). Extra evaluations of AlphaZero in shogi were carried out against another strong shogi program, Aperyqhapaq (29), at full time controls and against Elmo under 2017 CSA world championship time controls (10 minutes per game plus 10 seconds per move). (D) Average result of chess matches starting from different opening positions: either common human positions (see also Fig. 3), or the 2016 TCEC world championship opening positions (see also fig. S4). Average result of shogi matches starting from common human positions (see also Fig. 3). CSA world championship games start from the initial board position. Match conditions are summarized in tables S8 and S9.

Figure 3: Matches starting from the most popular human openings. AlphaZero plays against (A) Stockfish in chess and (B) Elmo in shogi. In the left bar, AlphaZero plays white, starting from the given position; in the right bar AlphaZero plays black. Each bar shows the results from AlphaZero's perspective: win (green), draw (grey), loss (red). The percentage frequency of self-play training games in which this opening was selected by AlphaZero is plotted against the duration of training, in hours.

Figure 4: AlphaZero's search procedure. The search is illustrated for a position (inset) from game 1 (table S6) between AlphaZero (white) and Stockfish (black) after Qf8. The internal state of AlphaZero's MCTS is summarized after 10^2, ..., 10^6 simulations. Each summary shows the 10 most visited states. The estimated value is shown in each state, from white's perspective, scaled to the range [0, 100]. The visit count of each state, relative to the root state of that tree, is proportional to the thickness of the border circle. AlphaZero considers 30. c6 but eventually plays 30. d5.

15. C. S. Association, Results of the 27th world computer shogi championship. http://www2.computer-shogi.org/wcsc27/index_e.html. Retrieved November 29th, 2017.
16. W. Steinitz, The Modern Chess Instructor (Edition Olms AG, 1990).
17. E. Lasker, Common Sense in Chess (Dover Publications, 1965).
18. J. Knudsen, Essential Chess Quotations (iUniverse, 2000).
19. N. P. Jouppi, C. Young, N. Patil, et al., Proceedings of the 44th Annual International Symposium on Computer Architecture, ISCA '17 (ACM, 2017).
20. Note that the original AlphaGo Zero paper used GPUs to train the neural networks.
21. R. Coulom, International Conference on Computers and Games (2008).
22. The prevalence of draws in high-level chess tends to compress the Elo scale, compared to shogi or Go.
23. Stockfish is designed to exploit CPU hardware and cannot make use of GPU/TPU, whereas AlphaZero is designed to exploit GPU/TPU hardware rather than CPU.
24. A first-generation TPU is roughly similar in inference speed to a Titan V GPU, although the architectures are not directly comparable.
25. AlphaGo Zero was ultimately trained for 3.1 million steps over 40 days.
26. Many TCEC opening positions are unbalanced according to both AlphaZero and Stockfish, resulting in more losses for both players.
27. Newest available version of Stockfish as of 13th of January 2018, from commit b508f9561cc2302c129efe8d60f201ff03ee72c.
28. Cerebellum opening book. AlphaZero did not use an opening book. To ensure diversity against a deterministic opening book, AlphaZero used a small amount of randomization in its opening moves (10); this avoided duplicate games but also resulted in more losses.
29. Aperyqhapaq's evaluation files are available at qhapaq-bin/releases/tag/eloqhappa.
30. C. E. Shannon, The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science 41, 256 (1950).

31. O. Arenz, Monte Carlo chess, Master's thesis, Technische Universität Darmstadt (2012).
32. O. E. David, N. S. Netanyahu, L. Wolf, International Conference on Artificial Neural Networks (Springer, 2016).

Supplemental References

33. T. Marsland, Encyclopedia of Artificial Intelligence, S. Shapiro, ed. (John Wiley & Sons, New York, 1987).
34. G. Tesauro, Artificial Intelligence 134, 181 (2002).
35. G. Tesauro, G. R. Galperin, Advances in Neural Information Processing Systems 9 (1996).
36. S. Thrun, Advances in Neural Information Processing Systems (1995).
37. D. F. Beal, M. C. Smith, Information Sciences 122, 3 (2000).
38. D. F. Beal, M. C. Smith, Theoretical Computer Science 252, 105 (2001).
39. J. Baxter, A. Tridgell, L. Weaver, Machine Learning 40, 243 (2000).
40. J. Veness, D. Silver, A. Blair, W. Uther, Advances in Neural Information Processing Systems (2009).
41. T. Kaneko, K. Hoki, Advances in Computer Games - 13th International Conference, ACG 2011, Tilburg, The Netherlands, November 20-22, 2011, Revised Selected Papers (2011).
42. K. Hoki, T. Kaneko, Journal of Artificial Intelligence Research (JAIR) 49, 527 (2014).
43. M. Lai, Giraffe: Using deep reinforcement learning to play chess, Master's thesis, Imperial College London (2015).
44. T. Anthony, Z. Tian, D. Barber, Advances in Neural Information Processing Systems 30 (2017).
45. D. E. Knuth, R. W. Moore, Artificial Intelligence 6, 293 (1975).
46. R. Ramanujan, A. Sabharwal, B. Selman, Proceedings of the 26th Conference on Uncertainty in Artificial Intelligence (UAI) (2010).
47. C. D. Rosin, Annals of Mathematics and Artificial Intelligence 61, 203 (2011).

48. K. He, X. Zhang, S. Ren, J. Sun, 14th European Conference on Computer Vision (2016).
49. The TCEC world championship disallows opening books and instead starts two games (one from each colour) from each opening position.
50. Online chess games database, 365chess (2017). URL: 365chess.com/.

Acknowledgments

We thank Matthew Sadler for analysing chess games; Yoshiharu Habu for analysing shogi games; Lorrayne Bennett for organizational assistance; Bernhard Konrad, Ed Lockhart and Georg Ostrovski for reviewing the paper; and the rest of the DeepMind team for their support.

Funding

All research described in this report was funded by DeepMind and Alphabet.

Author contributions

D.S., J.S., T.H. and I.A. designed the AlphaZero algorithm with advice from T.G., A.G., T.L., K.S., M. Lai, L.S., M. Lanctot; J.S., I.A., T.H. and M. Lai implemented the AlphaZero program; T.H., J.S., D.S., M. Lai, I.A., T.G., K.S., D.K. and D.H. ran experiments and/or analysed data; D.S., T.H., J.S., and D.H. managed the project. D.S., J.S., T.H., M. Lai, I.A. and D.H. wrote the paper.

Competing interests

The authors declare no competing financial interests. DeepMind has filed the following patent applications related to this work: PCT/EP2018/063869; US15/280,711; US15/280,784.

Data and materials availability

A full description of the algorithm in pseudocode as well as additional games between AlphaZero and other programs are available in the Supplementary Materials.

Supplementary Materials

110 chess games between AlphaZero and Stockfish 8 from the initial board position.
100 chess games between AlphaZero and Stockfish 8 from 2016 TCEC start positions.
100 shogi games between AlphaZero and Elmo from the initial board position.
Pseudocode description of the AlphaZero algorithm.
Data for Figures 1 and 3 in JSON format.
Supplementary Figures S1, S2, S3, S4 and Supplementary Tables S1, S2, S3, S4, S5, S6, S7, S8, S9.
References (33-50).

Methods

Anatomy of a Computer Chess Program

In this section we describe the components of a typical computer chess program, focusing specifically on Stockfish (11), an open source program that won the TCEC (Season 9) computer chess world championship in 2016. For an overview of standard methods, see (33).

Each position s is described by a sparse vector of handcrafted features φ(s), including midgame/endgame-specific material point values, material imbalance tables, piece-square tables, mobility and trapped pieces, pawn structure, king safety, outposts, bishop pair, and other miscellaneous evaluation patterns. Each feature φ_i is assigned, by a combination of manual and automatic tuning, a corresponding weight w_i and the position is evaluated by a linear combination v(s, w) = φ(s) · w. However, this raw evaluation is only considered accurate for positions that are quiet, with no unresolved captures or checks. A domain-specialized quiescence search is used to resolve ongoing tactical situations before the evaluation function is applied.

The final evaluation of a position s is computed by a minimax search that evaluates each leaf using a quiescence search. Alpha-beta pruning is used to safely cut any branch that is provably dominated by another variation. Additional cuts are achieved using aspiration windows and principal variation search. Other pruning strategies include null move pruning (which assumes a pass move should be worse than any variation, in positions that are unlikely to be in zugzwang, as determined by simple heuristics), futility pruning (which assumes knowledge of the maximum possible change in evaluation), and other domain-dependent pruning rules (which assume knowledge of the value of captured pieces).

The search is focused on promising variations both by extending the search depth of promising variations, and by reducing the search depth of unpromising variations based on heuristics like history, static-exchange evaluation (SEE), and moving piece type. Extensions are based on domain-independent rules that identify singular moves with no sensible alternative, and domain-dependent rules, such as extending check moves. Reductions, such as late move reductions, are based heavily on domain knowledge.

The efficiency of alpha-beta search depends critically upon the order in which moves are considered. Moves are therefore ordered by iterative deepening (using a shallower search to order moves for a deeper search). In addition, move ordering uses a combination of domain-independent heuristics, such as the killer heuristic, history heuristic and counter-move heuristic, together with domain-dependent knowledge based on captures (SEE) and potential captures (MVV/LVA). A transposition table facilitates the reuse of values and move orders when the same position is reached by multiple paths. In some variants, a carefully tuned opening book may be used to select moves at the start of the game. An endgame tablebase, precalculated by exhaustive retrograde analysis of endgame positions, provides the optimal move in all positions with six and sometimes seven pieces or less.

Other strong chess programs, and also earlier programs such as Deep Blue (1), have used very similar architectures (33) including the majority of the components described above, although important details vary considerably.
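
To illustrate the core of the search used by these traditional engines, here is a minimal negamax alpha-beta sketch in Python. It deliberately omits the quiescence search, move-ordering heuristics, extensions and reductions described above; the hooks `evaluate`, `legal_moves`, `make` and `undo` are hypothetical stand-ins for a concrete game implementation.

```python
def alpha_beta(position, depth, alpha, beta, evaluate, legal_moves, make, undo):
    """Return the negamax value of `position` searched to `depth` plies."""
    if depth == 0:
        return evaluate(position)          # a real engine runs quiescence search here
    best = -float("inf")                   # mate/stalemate scoring omitted in this sketch
    for move in legal_moves(position):     # move ordering matters greatly in practice
        make(position, move)
        score = -alpha_beta(position, depth - 1, -beta, -alpha,
                            evaluate, legal_moves, make, undo)
        undo(position, move)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:                  # beta cut-off: branch is provably dominated
            break
    return best
```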

15 though important details vary considerably. None of the techniques described in this section are used by AlphaZero. It is likely that some of these techniques could further improve the performance of AlphaZero; however, we have focused on a pure self-play reinforcement learning approach and leave these extensions for future research. Prior Work on Computer Chess and Shogi In this section we discuss some notable prior work on reinforcement learning and/or deep learning in computer chess, shogi and, due to its historical relevance, backgammon. TD Gammon (6) was a backgammon program that evaluated positions by a multi-layer perceptron, trained by temporal-difference learning to predict the final game outcome. When its evaluation function was combined with a 3-ply search (34) TD Gammon defeated the human world champion. A subsequent paper introduced the first version of Monte-Carlo search (35), which evaluated root positions by the average outcome of n-step rollouts. Each rollout was generated by greedy move selection and the nth position was evaluated by TD Gammon s neural network. NeuroChess (36) evaluated positions by a neural network that used 175 handcrafted input features. It was trained by temporal-difference learning to predict the final game outcome, and also the expected features after two moves. NeuroChess won 13% of games against GnuChess using a fixed depth 2 search, but lost overall. Beal and Smith applied temporal-difference learning to estimate the piece values in chess (37) and shogi (38), starting from random values and learning solely by self-play. KnightCap (39) evaluated positions by a neural network that used an attack table based on knowledge of which squares are attacked or defended by which pieces. It was trained by a variant of temporal-difference learning, known as TD(leaf), that updates the leaf value of the principal variation of an alpha-beta search. KnightCap achieved human master level after training against a strong computer opponent with hand-initialized piece-value weights. Meep (40) evaluated positions by a linear evaluation function based on handcrafted features. It was trained by another variant of temporal-difference learning, known as TreeStrap, that updates all nodes of an alpha-beta search. Meep defeated human international master players in 13 out of 15 games, after training by self-play with randomly initialized weights. Kaneko and Hoki (41) trained the weights of a shogi evaluation function comprising a million features, by learning to select expert human moves during alpha-beta search. They also performed a large-scale optimization based on minimax search regulated by expert game logs (42); this formed part of the Bonanza engine that won the 2013 World Computer Shogi Championship. Giraffe (43) evaluated positions by a neural network that included mobility maps and attack and defend maps describing the lowest valued attacker and defender of each square. It was trained by self-play using TD(leaf), also reaching a standard of play comparable to international masters. 15

DeepChess (32) trained a neural network to perform pair-wise evaluations of positions. It was trained by supervised learning from a database of human expert games that was pre-filtered to avoid capture moves and drawn games. DeepChess reached a strong grandmaster level of play.

All of these programs combined their learned evaluation functions with an alpha-beta search enhanced by a variety of extensions. By contrast, an approach based on training dual policy and value networks using a policy iteration algorithm similar to AlphaZero was successfully applied to the game Hex (44). This work differed from AlphaZero in several regards: the policy network was initialized by imitating a pre-existing MCTS search algorithm, augmented by rollouts; the network was subsequently retrained from scratch at each iteration; and value targets were based on the outcome of self-play games using the raw policy network, rather than MCTS search.

MCTS and Alpha-Beta Search

For at least four decades the strongest computer chess programs have used alpha-beta search with handcrafted evaluation functions (33, 45). Chess programs using traditional MCTS (31) were much weaker than alpha-beta search programs (46), whereas alpha-beta programs based on neural networks have previously been unable to compete with faster, handcrafted evaluation functions. Surprisingly, AlphaZero surpassed previous approaches by using an effective combination of MCTS and neural networks.

AlphaZero evaluates positions non-linearly using deep neural networks, rather than the linear evaluation function used in typical chess programs. This provides a more powerful evaluation function, but may also introduce larger worst-case generalization errors. When combined with alpha-beta search, which computes an explicit minimax, the biggest errors are typically propagated directly to the root of the subtree. By contrast, AlphaZero's MCTS averages over the position evaluations within a subtree, rather than computing the minimax evaluation of that subtree. We speculate that the approximation errors introduced by neural networks therefore tend to cancel out when evaluating a large subtree.

Domain Knowledge

AlphaZero was provided with the following domain knowledge about each game:

1. The input features describing the position, and the output features describing the move, are structured as a set of planes; i.e. the neural network architecture is matched to the grid-structure of the board.

2. AlphaZero is provided with perfect knowledge of the game rules. These are used during MCTS, to simulate the positions resulting from a sequence of moves, to determine game termination, and to score any simulations that reach a terminal state.

3. Knowledge of the rules is also used to encode the input planes (i.e. castling, repetition, no-progress) and output planes (how pieces move, promotions, and piece drops in shogi).

4. The typical number of legal moves is used to scale the exploration noise (see below).

5. Chess and shogi games exceeding 512 steps were terminated and assigned a drawn outcome; Go games exceeding 722 steps were terminated and scored with Tromp-Taylor rules, similarly to previous work (9).

AlphaZero did not use an opening book, endgame tablebases, or domain-specific heuristics.

Search

We briefly describe here the MCTS algorithm (9) used by AlphaZero; further details can be found in the pseudocode in the Supplementary Data. Each state-action pair (s, a) stores a set of statistics, {N(s, a), W(s, a), Q(s, a), P(s, a)}, where N(s, a) is the visit count, W(s, a) is the total action-value, Q(s, a) is the mean action-value, and P(s, a) is the prior probability of selecting a in s. Each simulation begins at the root node of the search tree, s_0, and finishes when the simulation reaches a leaf node s_L at time-step L. At each of these time-steps, t < L, an action is selected, a_t = argmax_a (Q(s_t, a) + U(s_t, a)), using a variant of the PUCT algorithm (47), U(s, a) = C(s) P(s, a) √N(s) / (1 + N(s, a)), where N(s) is the parent visit count and C(s) is the exploration rate, which grows slowly with search time, C(s) = log((1 + N(s) + c_base)/c_base) + c_init, but is essentially constant during the fast training games. The leaf node s_L is added to a queue for neural network evaluation, (p, v) = f_θ(s_L). The leaf node is expanded and each state-action pair (s_L, a) is initialized to {N(s_L, a) = 0, W(s_L, a) = 0, Q(s_L, a) = 0, P(s_L, a) = p_a}. The visit counts and values are then updated in a backward pass through each step t ≤ L, N(s_t, a_t) = N(s_t, a_t) + 1, W(s_t, a_t) = W(s_t, a_t) + v, Q(s_t, a_t) = W(s_t, a_t) / N(s_t, a_t).
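
The following is a minimal Python sketch of the PUCT selection and backup rules just described, not the authors' pseudocode. `evaluate(state)` is a stand-in for the network f_θ (returning a dict of move priors and a value), `legal_moves`/`apply` are hypothetical game-rule hooks, and the c_base/c_init values are assumed constants since the text only gives the functional form of C(s); value sign-flipping between the two players is omitted.

```python
import math

C_BASE, C_INIT = 19652, 1.25            # assumed values, not stated in the text above

class Node:
    def __init__(self, prior):
        self.P = prior                  # prior probability P(s, a) from the policy head
        self.N = 0                      # visit count
        self.W = 0.0                    # total action-value
        self.children = {}              # move -> Node

    @property
    def Q(self):                        # mean action-value W / N
        return self.W / self.N if self.N else 0.0

def select_child(node):
    """Pick argmax_a Q(s, a) + U(s, a) with U = C(s) * P * sqrt(N(s)) / (1 + N(s, a))."""
    parent_n = sum(child.N for child in node.children.values())   # approximates N(s)
    c = math.log((1 + parent_n + C_BASE) / C_BASE) + C_INIT
    def score(child):
        return child.Q + c * child.P * math.sqrt(parent_n) / (1 + child.N)
    return max(node.children.items(), key=lambda kv: score(kv[1]))

def simulate(root_state, root, evaluate, legal_moves, apply):
    """One simulation: descend with PUCT, expand a leaf, back the value v up the path."""
    path, state, node = [], root_state, root
    while node.children:                               # descend to a leaf node s_L
        move, node = select_child(node)
        state = apply(state, move)
        path.append(node)
    priors, value = evaluate(state)                    # (p, v) = f_theta(s_L)
    for move in legal_moves(state):                    # expand the leaf
        node.children[move] = Node(priors[move])
    for visited in reversed(path + [root]):            # backward pass: N += 1, W += v
        visited.N += 1
        visited.W += value
    return value
```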

Representation

In this section we describe the representation of the board inputs, and the representation of the action outputs, used by the neural network in AlphaZero. Other representations could have been used; in our experiments the training algorithm worked robustly for many reasonable choices.

The input to the neural network is an N × N × (MT + L) image stack that represents state using a concatenation of T sets of M planes of size N × N. Each set of planes represents the board position at a time-step t − T + 1, ..., t, and is set to zero for time-steps less than 1. The board is oriented to the perspective of the current player. The M feature planes are composed of binary feature planes indicating the presence of the player's pieces, with one plane for each piece type, and a second set of planes indicating the presence of the opponent's pieces. For shogi there are additional planes indicating the number of captured prisoners of each type. There are an additional L constant-valued input planes denoting the player's colour, the move number, and the state of special rules: the legality of castling in chess (kingside or queenside); the repetition count for the current position (3 repetitions is an automatic draw in chess; 4 in shogi); and the number of moves without progress in chess (50 moves without progress is an automatic draw). Input features are summarized in Table S1.

A move in chess may be described in two parts: first selecting the piece to move, and then selecting among possible moves for that piece. We represent the policy π(a|s) by a stack of planes encoding a probability distribution over 4,672 possible moves. Each of the 8 × 8 positions identifies the square from which to pick up a piece. The first 56 planes encode possible queen moves for any piece: a number of squares [1..7] in which the piece will be moved, along one of eight relative compass directions {N, NE, E, SE, S, SW, W, NW}. The next 8 planes encode possible knight moves for that piece. The final 9 planes encode possible underpromotions for pawn moves or captures in two possible diagonals, to knight, bishop or rook respectively. Other pawn moves or captures from the seventh rank are promoted to a queen.

The policy in shogi is represented by a stack of planes similarly encoding a probability distribution over 11,259 possible moves. The first 64 planes encode queen moves and the next 2 planes encode knight moves. An additional 64 + 2 planes encode promoting queen moves and promoting knight moves respectively. The last 7 planes encode a captured piece dropped back into the board at that location.

The policy in Go is represented identically to AlphaGo Zero (9), using a flat distribution over 362 moves representing possible stone placements and the pass move. We also tried using a flat distribution over moves for chess and shogi; the final result was almost identical although training was slightly slower.

Illegal moves are masked out by setting their probabilities to zero, and re-normalising the probabilities over the remaining set of legal moves. The action representations are summarized in Table S2.

Architecture

Apart from the representation of positions and actions described above, AlphaZero uses the same network architecture as AlphaGo Zero (9), briefly recapitulated here.

The neural network consists of a body followed by both policy and value heads. The body consists of a rectified batch-normalized convolutional layer followed by 19 residual blocks (48). Each such block consists of two rectified batch-normalized convolutional layers with a skip connection. Each convolution applies 256 filters of kernel size 3 × 3 with stride 1.

The policy head applies an additional rectified, batch-normalized convolutional layer, followed by a final convolution of 73 filters for chess or 139 filters for shogi, or a linear layer of size 362 for Go, representing the logits of the respective policies described above. The value head applies an additional rectified, batch-normalized convolution of 1 filter of kernel size 1 × 1 with stride 1, followed by a rectified linear layer of size 256 and a tanh-linear layer of size 1.
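
A condensed PyTorch sketch of this architecture is given below, using the chess shapes from Tables S1 and S2 (119 input planes, 73 policy planes on an 8 × 8 board) and the channel counts stated above (256 filters, 19 blocks). It is a sketch under those stated assumptions, not the authors' implementation; in particular the kernel size of the final policy convolution is an assumption.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels=256):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        y = self.relu(self.bn1(self.conv1(x)))
        y = self.bn2(self.conv2(y))
        return self.relu(x + y)                       # skip connection

class AlphaZeroNet(nn.Module):
    def __init__(self, in_planes=119, board=8, policy_planes=73, channels=256, blocks=19):
        super().__init__()
        self.stem = nn.Sequential(                    # rectified batch-normalized conv layer
            nn.Conv2d(in_planes, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels), nn.ReLU())
        self.body = nn.Sequential(*[ResidualBlock(channels) for _ in range(blocks)])
        self.policy_head = nn.Sequential(             # conv -> BN -> ReLU -> policy-plane conv
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, policy_planes, 1))    # 73 x 8 x 8 = 4,672 move logits
        self.value_head = nn.Sequential(              # 1x1 conv -> BN -> ReLU -> FC -> tanh
            nn.Conv2d(channels, 1, 1, bias=False),
            nn.BatchNorm2d(1), nn.ReLU(), nn.Flatten(),
            nn.Linear(board * board, 256), nn.ReLU(),
            nn.Linear(256, 1), nn.Tanh())

    def forward(self, x):
        h = self.body(self.stem(x))
        return self.policy_head(h).flatten(1), self.value_head(h).squeeze(-1)
```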

Configuration

During training, each MCTS used 800 simulations. The number of games, positions, and thinking time varied per game due largely to different board sizes and game lengths, and are shown in Table S3. The learning rate was set to 0.2 for each game, and was dropped three times during the course of training, to 0.02, 0.002 and 0.0002 respectively, after 100, 300 and 500 thousand steps for chess and shogi, and after 0, 300 and 500 thousand steps for Go. Moves are selected in proportion to the root visit count. Dirichlet noise Dir(α) was added to the prior probabilities in the root node; this was scaled in inverse proportion to the approximate number of legal moves in a typical position, to a value of α = {0.3, 0.15, 0.03} for chess, shogi and Go respectively. Positions were batched across parallel training games for evaluation by the neural network. Unless otherwise specified, the training and search algorithm and parameters are identical to AlphaGo Zero (9).

During evaluation, AlphaZero selects moves greedily with respect to the root visit count. Each MCTS was executed on a single machine with 4 first-generation TPUs.

Opponents

To evaluate performance in chess, we used Stockfish version 8 (official Linux release) as a baseline program. Stockfish was configured according to its 2016 TCEC world championship superfinal settings: 44 threads on 44 cores (two 2.2GHz Intel Xeon Broadwell CPUs with 22 cores), a hash size of 32GB, syzygy endgame tablebases, at 3 hour time controls with 15 additional seconds per move. We also evaluated against the most recent version, Stockfish 9 (just released at time of writing), using the same configuration. Stockfish does not have an opening book of its own and all primary evaluations were performed without an opening book. We also performed one secondary evaluation in which the opponent's opening moves were selected by the Brainfish program, using an opening book derived from Stockfish. However, we note that these matches were low in diversity, and AlphaZero and Stockfish tended to produce very similar games throughout the match, more than 90% of which were draws. When we forced AlphaZero to play with greater diversity (by softmax sampling with a temperature of 10.0 among moves for which the value was no more than 1% away from the best move for the first 30 plies) the winning rate increased from 5.8% to 14%.

To evaluate performance in shogi, we used Elmo version WCSC27 in combination with YaneuraOu 2017 Early KPPT AVX2 TOURNAMENT as a baseline program, using 44 CPU threads (on two 2.2GHz Intel Xeon Broadwell CPUs with 22 cores) and a hash size of 32GB, with the USI options EnteringKingRule set to CSARule27, MinimumThinkingTime set to 1000, BookFile set to standard_book.db, BookDepthLimit set to 0 and BookMoves set to 200. Additionally, we also evaluated against Aperyqhapaq combined with the same YaneuraOu version and no book file. For Aperyqhapaq, we used the same USI options as for Elmo except for the book setting.
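
Before turning to the match conditions, the sketch below makes the root exploration noise and move-selection rules from the Configuration section concrete. The Dirichlet concentration α follows the text (0.3/0.15/0.03 for chess/shogi/Go); the mixing weight ε = 0.25 is an assumption carried over from AlphaGo Zero, since this paper only states that noise is added to the root priors.

```python
import numpy as np

def add_root_dirichlet_noise(priors, alpha=0.3, epsilon=0.25):
    """Mix Dir(alpha) noise into the root priors (priors: dict move -> probability).
    epsilon = 0.25 is an assumed value, not stated in the text above."""
    moves = list(priors)
    noise = np.random.dirichlet([alpha] * len(moves))
    return {m: (1 - epsilon) * priors[m] + epsilon * n for m, n in zip(moves, noise)}

def select_move(visit_counts, training=True):
    """During training, sample in proportion to root visit counts; during
    evaluation, pick the most-visited move greedily."""
    moves, counts = zip(*visit_counts.items())
    if training:
        probs = np.asarray(counts, dtype=float)
        probs /= probs.sum()
        return moves[np.random.choice(len(moves), p=probs)]
    return moves[int(np.argmax(counts))]
```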

Match conditions

We measured the head-to-head performance of AlphaZero in matches against each of the above opponents (Figure 2). Three types of match were played: starting from the initial board position (the default configuration, unless otherwise specified); starting from human opening positions; or starting from the 2016 TCEC opening positions (49). The majority of matches for chess, shogi and Go used the 2016 TCEC superfinal time controls: 3 hours of main thinking time, plus 15 additional seconds of thinking time for each move. We also investigated asymmetric time controls (Figure 2B), where the opponent received 3 hours of main thinking time but AlphaZero received only a fraction of this time. Finally, for shogi only, we ran a match using the faster time controls used in the 2017 CSA world championship: 10 minutes per game plus 10 seconds per move.

AlphaZero used a simple time control strategy: thinking for 1/20th of the remaining time. Opponent programs used customized, sophisticated heuristics for time control. Pondering was disabled for all players (particularly important for the asymmetric time controls in Figure 2). Resignation was enabled for all players (-650 centipawns for 4 consecutive moves for Stockfish, -4,500 centipawns for 10 consecutive moves for Elmo, or a value of -0.9 for AlphaZero and AlphaGo Lee).

Matches consisted of 1,000 games, except for the human openings (200 games as black and 200 games as white from each opening) and the 2016 TCEC openings (50 games as black and 50 games as white from each of the 50 openings). The human opening positions were chosen as those played more than 100,000 times in an online database (50).

Elo ratings

We evaluated the relative strength of AlphaZero (Figure 1) by measuring the Elo rating of each player. We estimate the probability that player a will defeat player b by a logistic function p(a defeats b) = (1 + 10^(c_elo (e(b) − e(a))))^−1, and estimate the ratings e(·) by Bayesian logistic regression, computed by the BayesElo program (21) using the standard constant c_elo = 1/400. Elo ratings were computed from the results of a 1 second per move tournament between iterations of AlphaZero during training, and also a baseline player: either Stockfish, Elmo or AlphaGo Lee respectively. The Elo rating of the baseline players was anchored to publicly available values (9). In order to compare Elo ratings at 1 second per move time controls to standard Elo ratings at full time controls, we also provide the results of Stockfish vs. Stockfish and Elmo vs. Elmo matches (Table S5).
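
As a small illustration of this Elo model, the sketch below computes the logistic win probability with c_elo = 1/400 and fits ratings by a plain (non-Bayesian) maximum-likelihood gradient ascent with one player anchored, mirroring how the baseline's rating is anchored to published values. The paper uses the BayesElo program; this code is illustrative only and ignores draws.

```python
import numpy as np

C_ELO = 1.0 / 400.0

def p_win(e_a, e_b):
    """p(a defeats b) = 1 / (1 + 10^(c_elo * (e(b) - e(a))))."""
    return 1.0 / (1.0 + 10.0 ** (C_ELO * (e_b - e_a)))

def fit_ratings(results, num_players, anchor=0, steps=5000, lr=10.0):
    """results: list of (winner, loser) index pairs; player `anchor` is pinned to 0."""
    e = np.zeros(num_players)
    for _ in range(steps):
        grad = np.zeros(num_players)
        for w, l in results:
            p = p_win(e[w], e[l])
            grad[w] += (1.0 - p)        # gradient of the log-likelihood w.r.t. the winner
            grad[l] -= (1.0 - p)        # (up to the constant factor c_elo * ln 10)
        e += lr * grad
        e -= e[anchor]                  # keep the anchor player at rating 0
    return e
```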

Example games

The Supplementary Data includes 110 games from the main chess match between AlphaZero and Stockfish, starting from the initial board position; 100 games from the chess match starting from 2016 TCEC world championship opening positions; and 100 games from the main shogi match between AlphaZero and Elmo. For the chess match from the initial board position, one game was selected at random for each unique opening sequence of 30 plies; all AlphaZero losses were also included. For the TCEC match, one game as white and one game as black were selected at random from the match starting from each opening position. For the shogi match, one game was selected at random for each unique opening sequence of 25 plies (when AlphaZero was black) or 10 plies (when AlphaZero was white). 10 chess games were independently selected from each batch by GM Matthew Sadler, according to their interest to the chess community; these games are included in Table S6. Similarly, 10 shogi games were independently selected by Yoshiharu Habu; these games are included in Table S7.

Figure S1: Learning curves showing the Elo performance during training in Go. Comparison between AlphaZero, a version of AlphaZero that exploits knowledge of symmetries in a similar manner to AlphaGo Zero, and the previously published AlphaGo Zero. AlphaZero generates approximately 1/8 as many positions per training step, and therefore uses eight times more wall clock time, than the symmetry-augmented algorithms.

Figure S2: Chess and shogi openings preferred by AlphaZero at different stages of self-play training, labelled with the number of training steps. The figure shows the most frequently selected opening (first 6 plies) played by AlphaZero during its games of self-play. Each move was generated by an MCTS with just 800 simulations per move.

Figure S3: Repeatability of AlphaZero training on the game of chess. The figure shows 6 separate training runs of 400,000 steps (approximately 4 hours each). Elo ratings were computed from a tournament between baseline players and AlphaZero players at different stages of training. AlphaZero players were given 800 simulations per move. Similar repeatability was observed in shogi and Go.

Figure S4: Chess matches beginning from the 2016 TCEC world championship start positions. In the left bar, AlphaZero plays white, starting from the given position; in the right bar AlphaZero plays black. Each bar shows the results from AlphaZero's perspective: win (green), draw (grey), loss (red). Many of these start positions are unbalanced according to both AlphaZero and Stockfish, resulting in more losses for both players.

  Go                        Chess                          Shogi
  Feature          Planes   Feature              Planes    Feature              Planes
  P1 stone         1        P1 piece             6         P1 piece             14
  P2 stone         1        P2 piece             6         P2 piece             14
                            Repetitions          2         Repetitions          3
                                                           P1 prisoner count    7
                                                           P2 prisoner count    7
  Colour           1        Colour               1         Colour               1
                            Total move count     1         Total move count     1
                            P1 castling          2
                            P2 castling          2
                            No-progress count    1
  Total            17       Total                119       Total                362

Table S1: Input features used by AlphaZero in Go, chess and shogi respectively. The first set of features are repeated for each position in a T = 8-step history. Counts are represented by a single real-valued input; other input features are represented by a one-hot encoding using the specified number of binary input planes. The current player is denoted by P1 and the opponent by P2.

  Chess                        Shogi
  Feature            Planes   Feature                   Planes
  Queen moves        56       Queen moves               64
  Knight moves       8        Knight moves              2
  Underpromotions    9        Promoting queen moves     64
                              Promoting knight moves    2
                              Drop                      7
  Total              73       Total                     139

Table S2: Action representation used by AlphaZero in chess and shogi respectively. The policy is represented by a stack of planes encoding a probability distribution over legal moves; planes correspond to the entries in the table.
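
As an illustration of how the Table S1 layout translates into an input tensor, the sketch below stacks the chess planes: T = 8 history steps of M = 14 planes (6 piece types per player plus 2 repetition planes) followed by L = 7 constant planes, giving the 119 × 8 × 8 stack. The board accessors are hypothetical; only the plane book-keeping implied by Table S1 is shown, and the exact plane ordering is an assumption.

```python
import numpy as np

M, T, L, N = 14, 8, 7, 8   # per-step planes, history length, constant planes, board size

def encode_chess_position(history, side_to_move):
    """history: up to T recent positions, each exposing piece_planes() -> (12, 8, 8)
    and repetition_count(); returns a float32 array of shape (M*T + L, N, N)."""
    planes = np.zeros((M * T + L, N, N), dtype=np.float32)
    for t, pos in enumerate(history[-T:]):              # missing history steps stay zero
        base = t * M
        planes[base:base + 12] = pos.piece_planes()     # P1 then P2 piece planes
        reps = min(pos.repetition_count(), 2)
        planes[base + 12:base + 12 + reps] = 1.0        # one-hot repetition planes
    const = M * T
    planes[const] = float(side_to_move)                 # colour plane
    # The remaining constant planes (total move count, kingside/queenside castling
    # rights for each player, and the no-progress counter) are filled analogously.
    return planes
```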


More information

Adversarial Search Aka Games

Adversarial Search Aka Games Adversarial Search Aka Games Chapter 5 Some material adopted from notes by Charles R. Dyer, U of Wisconsin-Madison Overview Game playing State of the art and resources Framework Game trees Minimax Alpha-beta

More information

Chess Algorithms Theory and Practice. Rune Djurhuus Chess Grandmaster / September 23, 2013

Chess Algorithms Theory and Practice. Rune Djurhuus Chess Grandmaster / September 23, 2013 Chess Algorithms Theory and Practice Rune Djurhuus Chess Grandmaster runed@ifi.uio.no / runedj@microsoft.com September 23, 2013 1 Content Complexity of a chess game History of computer chess Search trees

More information

Using Neural Network and Monte-Carlo Tree Search to Play the Game TEN

Using Neural Network and Monte-Carlo Tree Search to Play the Game TEN Using Neural Network and Monte-Carlo Tree Search to Play the Game TEN Weijie Chen Fall 2017 Weijie Chen Page 1 of 7 1. INTRODUCTION Game TEN The traditional game Tic-Tac-Toe enjoys people s favor. Moreover,

More information

Adversarial Search and Game- Playing C H A P T E R 6 C M P T : S P R I N G H A S S A N K H O S R A V I

Adversarial Search and Game- Playing C H A P T E R 6 C M P T : S P R I N G H A S S A N K H O S R A V I Adversarial Search and Game- Playing C H A P T E R 6 C M P T 3 1 0 : S P R I N G 2 0 1 1 H A S S A N K H O S R A V I Adversarial Search Examine the problems that arise when we try to plan ahead in a world

More information

Artificial Intelligence

Artificial Intelligence Artificial Intelligence CS482, CS682, MW 1 2:15, SEM 201, MS 227 Prerequisites: 302, 365 Instructor: Sushil Louis, sushil@cse.unr.edu, http://www.cse.unr.edu/~sushil Games and game trees Multi-agent systems

More information

Reinforcement Learning of Local Shape in the Game of Go

Reinforcement Learning of Local Shape in the Game of Go Reinforcement Learning of Local Shape in the Game of Go David Silver, Richard Sutton, and Martin Müller Department of Computing Science University of Alberta Edmonton, Canada T6G 2E8 {silver, sutton, mmueller}@cs.ualberta.ca

More information

CPS331 Lecture: Search in Games last revised 2/16/10

CPS331 Lecture: Search in Games last revised 2/16/10 CPS331 Lecture: Search in Games last revised 2/16/10 Objectives: 1. To introduce mini-max search 2. To introduce the use of static evaluation functions 3. To introduce alpha-beta pruning Materials: 1.

More information

Game-Playing & Adversarial Search

Game-Playing & Adversarial Search Game-Playing & Adversarial Search This lecture topic: Game-Playing & Adversarial Search (two lectures) Chapter 5.1-5.5 Next lecture topic: Constraint Satisfaction Problems (two lectures) Chapter 6.1-6.4,

More information

Andrei Behel AC-43И 1

Andrei Behel AC-43И 1 Andrei Behel AC-43И 1 History The game of Go originated in China more than 2,500 years ago. The rules of the game are simple: Players take turns to place black or white stones on a board, trying to capture

More information

CS221 Project Final Report Gomoku Game Agent

CS221 Project Final Report Gomoku Game Agent CS221 Project Final Report Gomoku Game Agent Qiao Tan qtan@stanford.edu Xiaoti Hu xiaotihu@stanford.edu 1 Introduction Gomoku, also know as five-in-a-row, is a strategy board game which is traditionally

More information

How AI Won at Go and So What? Garry Kasparov vs. Deep Blue (1997)

How AI Won at Go and So What? Garry Kasparov vs. Deep Blue (1997) How AI Won at Go and So What? Garry Kasparov vs. Deep Blue (1997) Alan Fern School of Electrical Engineering and Computer Science Oregon State University Deep Mind s vs. Lee Sedol (2016) Watson vs. Ken

More information

Game Playing. Philipp Koehn. 29 September 2015

Game Playing. Philipp Koehn. 29 September 2015 Game Playing Philipp Koehn 29 September 2015 Outline 1 Games Perfect play minimax decisions α β pruning Resource limits and approximate evaluation Games of chance Games of imperfect information 2 games

More information

Foundations of AI. 5. Board Games. Search Strategies for Games, Games with Chance, State of the Art. Wolfram Burgard and Luc De Raedt SA-1

Foundations of AI. 5. Board Games. Search Strategies for Games, Games with Chance, State of the Art. Wolfram Burgard and Luc De Raedt SA-1 Foundations of AI 5. Board Games Search Strategies for Games, Games with Chance, State of the Art Wolfram Burgard and Luc De Raedt SA-1 Contents Board Games Minimax Search Alpha-Beta Search Games with

More information

CS 771 Artificial Intelligence. Adversarial Search

CS 771 Artificial Intelligence. Adversarial Search CS 771 Artificial Intelligence Adversarial Search Typical assumptions Two agents whose actions alternate Utility values for each agent are the opposite of the other This creates the adversarial situation

More information

CS440/ECE448 Lecture 11: Stochastic Games, Stochastic Search, and Learned Evaluation Functions

CS440/ECE448 Lecture 11: Stochastic Games, Stochastic Search, and Learned Evaluation Functions CS440/ECE448 Lecture 11: Stochastic Games, Stochastic Search, and Learned Evaluation Functions Slides by Svetlana Lazebnik, 9/2016 Modified by Mark Hasegawa Johnson, 9/2017 Types of game environments Perfect

More information

CS2212 PROGRAMMING CHALLENGE II EVALUATION FUNCTIONS N. H. N. D. DE SILVA

CS2212 PROGRAMMING CHALLENGE II EVALUATION FUNCTIONS N. H. N. D. DE SILVA CS2212 PROGRAMMING CHALLENGE II EVALUATION FUNCTIONS N. H. N. D. DE SILVA Game playing was one of the first tasks undertaken in AI as soon as computers became programmable. (e.g., Turing, Shannon, and

More information

Programming an Othello AI Michael An (man4), Evan Liang (liange)

Programming an Othello AI Michael An (man4), Evan Liang (liange) Programming an Othello AI Michael An (man4), Evan Liang (liange) 1 Introduction Othello is a two player board game played on an 8 8 grid. Players take turns placing stones with their assigned color (black

More information

Games and Adversarial Search

Games and Adversarial Search 1 Games and Adversarial Search BBM 405 Fundamentals of Artificial Intelligence Pinar Duygulu Hacettepe University Slides are mostly adapted from AIMA, MIT Open Courseware and Svetlana Lazebnik (UIUC) Spring

More information

Artificial Intelligence

Artificial Intelligence Artificial Intelligence CS482, CS682, MW 1 2:15, SEM 201, MS 227 Prerequisites: 302, 365 Instructor: Sushil Louis, sushil@cse.unr.edu, http://www.cse.unr.edu/~sushil Non-classical search - Path does not

More information

CS 4700: Foundations of Artificial Intelligence

CS 4700: Foundations of Artificial Intelligence CS 4700: Foundations of Artificial Intelligence selman@cs.cornell.edu Module: Adversarial Search R&N: Chapter 5 Part II 1 Outline Game Playing Optimal decisions Minimax α-β pruning Case study: Deep Blue

More information

CS 4700: Foundations of Artificial Intelligence

CS 4700: Foundations of Artificial Intelligence CS 4700: Foundations of Artificial Intelligence selman@cs.cornell.edu Module: Adversarial Search R&N: Chapter 5 1 Outline Adversarial Search Optimal decisions Minimax α-β pruning Case study: Deep Blue

More information

Unit-III Chap-II Adversarial Search. Created by: Ashish Shah 1

Unit-III Chap-II Adversarial Search. Created by: Ashish Shah 1 Unit-III Chap-II Adversarial Search Created by: Ashish Shah 1 Alpha beta Pruning In case of standard ALPHA BETA PRUNING minimax tree, it returns the same move as minimax would, but prunes away branches

More information

Foundations of AI. 6. Board Games. Search Strategies for Games, Games with Chance, State of the Art

Foundations of AI. 6. Board Games. Search Strategies for Games, Games with Chance, State of the Art Foundations of AI 6. Board Games Search Strategies for Games, Games with Chance, State of the Art Wolfram Burgard, Andreas Karwath, Bernhard Nebel, and Martin Riedmiller SA-1 Contents Board Games Minimax

More information

Last update: March 9, Game playing. CMSC 421, Chapter 6. CMSC 421, Chapter 6 1

Last update: March 9, Game playing. CMSC 421, Chapter 6. CMSC 421, Chapter 6 1 Last update: March 9, 2010 Game playing CMSC 421, Chapter 6 CMSC 421, Chapter 6 1 Finite perfect-information zero-sum games Finite: finitely many agents, actions, states Perfect information: every agent

More information

Contents. Foundations of Artificial Intelligence. Problems. Why Board Games?

Contents. Foundations of Artificial Intelligence. Problems. Why Board Games? Contents Foundations of Artificial Intelligence 6. Board Games Search Strategies for Games, Games with Chance, State of the Art Wolfram Burgard, Bernhard Nebel, and Martin Riedmiller Albert-Ludwigs-Universität

More information

Algorithms for solving sequential (zero-sum) games. Main case in these slides: chess. Slide pack by Tuomas Sandholm

Algorithms for solving sequential (zero-sum) games. Main case in these slides: chess. Slide pack by Tuomas Sandholm Algorithms for solving sequential (zero-sum) games Main case in these slides: chess Slide pack by Tuomas Sandholm Rich history of cumulative ideas Game-theoretic perspective Game of perfect information

More information

More Adversarial Search

More Adversarial Search More Adversarial Search CS151 David Kauchak Fall 2010 http://xkcd.com/761/ Some material borrowed from : Sara Owsley Sood and others Admin Written 2 posted Machine requirements for mancala Most of the

More information

DeepStack: Expert-Level AI in Heads-Up No-Limit Poker. Surya Prakash Chembrolu

DeepStack: Expert-Level AI in Heads-Up No-Limit Poker. Surya Prakash Chembrolu DeepStack: Expert-Level AI in Heads-Up No-Limit Poker Surya Prakash Chembrolu AI and Games AlphaGo Go Watson Jeopardy! DeepBlue -Chess Chinook -Checkers TD-Gammon -Backgammon Perfect Information Games

More information

Opponent Models and Knowledge Symmetry in Game-Tree Search

Opponent Models and Knowledge Symmetry in Game-Tree Search Opponent Models and Knowledge Symmetry in Game-Tree Search Jeroen Donkers Institute for Knowlegde and Agent Technology Universiteit Maastricht, The Netherlands donkers@cs.unimaas.nl Abstract In this paper

More information

Spatial Average Pooling for Computer Go

Spatial Average Pooling for Computer Go Spatial Average Pooling for Computer Go Tristan Cazenave Université Paris-Dauphine PSL Research University CNRS, LAMSADE PARIS, FRANCE Abstract. Computer Go has improved up to a superhuman level thanks

More information

Adversary Search. Ref: Chapter 5

Adversary Search. Ref: Chapter 5 Adversary Search Ref: Chapter 5 1 Games & A.I. Easy to measure success Easy to represent states Small number of operators Comparison against humans is possible. Many games can be modeled very easily, although

More information

AI in Tabletop Games. Team 13 Josh Charnetsky Zachary Koch CSE Professor Anita Wasilewska

AI in Tabletop Games. Team 13 Josh Charnetsky Zachary Koch CSE Professor Anita Wasilewska AI in Tabletop Games Team 13 Josh Charnetsky Zachary Koch CSE 352 - Professor Anita Wasilewska Works Cited Kurenkov, Andrey. a-brief-history-of-game-ai.png. 18 Apr. 2016, www.andreykurenkov.com/writing/a-brief-history-of-game-ai/

More information

6. Games. COMP9414/ 9814/ 3411: Artificial Intelligence. Outline. Mechanical Turk. Origins. origins. motivation. minimax search

6. Games. COMP9414/ 9814/ 3411: Artificial Intelligence. Outline. Mechanical Turk. Origins. origins. motivation. minimax search COMP9414/9814/3411 16s1 Games 1 COMP9414/ 9814/ 3411: Artificial Intelligence 6. Games Outline origins motivation Russell & Norvig, Chapter 5. minimax search resource limits and heuristic evaluation α-β

More information

Today. Types of Game. Games and Search 1/18/2010. COMP210: Artificial Intelligence. Lecture 10. Game playing

Today. Types of Game. Games and Search 1/18/2010. COMP210: Artificial Intelligence. Lecture 10. Game playing COMP10: Artificial Intelligence Lecture 10. Game playing Trevor Bench-Capon Room 15, Ashton Building Today We will look at how search can be applied to playing games Types of Games Perfect play minimax

More information

Game Playing: Adversarial Search. Chapter 5

Game Playing: Adversarial Search. Chapter 5 Game Playing: Adversarial Search Chapter 5 Outline Games Perfect play minimax search α β pruning Resource limits and approximate evaluation Games of chance Games of imperfect information Games vs. Search

More information

CS 380: ARTIFICIAL INTELLIGENCE MONTE CARLO SEARCH. Santiago Ontañón

CS 380: ARTIFICIAL INTELLIGENCE MONTE CARLO SEARCH. Santiago Ontañón CS 380: ARTIFICIAL INTELLIGENCE MONTE CARLO SEARCH Santiago Ontañón so367@drexel.edu Recall: Adversarial Search Idea: When there is only one agent in the world, we can solve problems using DFS, BFS, ID,

More information

Lecture 5: Game Playing (Adversarial Search)

Lecture 5: Game Playing (Adversarial Search) Lecture 5: Game Playing (Adversarial Search) CS 580 (001) - Spring 2018 Amarda Shehu Department of Computer Science George Mason University, Fairfax, VA, USA February 21, 2018 Amarda Shehu (580) 1 1 Outline

More information

TD-Gammon, a Self-Teaching Backgammon Program, Achieves Master-Level Play

TD-Gammon, a Self-Teaching Backgammon Program, Achieves Master-Level Play NOTE Communicated by Richard Sutton TD-Gammon, a Self-Teaching Backgammon Program, Achieves Master-Level Play Gerald Tesauro IBM Thomas 1. Watson Research Center, I? 0. Box 704, Yorktozon Heights, NY 10598

More information

Computer Go: from the Beginnings to AlphaGo. Martin Müller, University of Alberta

Computer Go: from the Beginnings to AlphaGo. Martin Müller, University of Alberta Computer Go: from the Beginnings to AlphaGo Martin Müller, University of Alberta 2017 Outline of the Talk Game of Go Short history - Computer Go from the beginnings to AlphaGo The science behind AlphaGo

More information

Reinforcement Learning in Games Autonomous Learning Systems Seminar

Reinforcement Learning in Games Autonomous Learning Systems Seminar Reinforcement Learning in Games Autonomous Learning Systems Seminar Matthias Zöllner Intelligent Autonomous Systems TU-Darmstadt zoellner@rbg.informatik.tu-darmstadt.de Betreuer: Gerhard Neumann Abstract

More information

Artificial Intelligence Adversarial Search

Artificial Intelligence Adversarial Search Artificial Intelligence Adversarial Search Adversarial Search Adversarial search problems games They occur in multiagent competitive environments There is an opponent we can t control planning again us!

More information

CS 4700: Artificial Intelligence

CS 4700: Artificial Intelligence CS 4700: Foundations of Artificial Intelligence Fall 2017 Instructor: Prof. Haym Hirsh Lecture 10 Today Adversarial search (R&N Ch 5) Tuesday, March 7 Knowledge Representation and Reasoning (R&N Ch 7)

More information

Artificial Intelligence. Topic 5. Game playing

Artificial Intelligence. Topic 5. Game playing Artificial Intelligence Topic 5 Game playing broadening our world view dealing with incompleteness why play games? perfect decisions the Minimax algorithm dealing with resource limits evaluation functions

More information

Feature Learning Using State Differences

Feature Learning Using State Differences Feature Learning Using State Differences Mesut Kirci and Jonathan Schaeffer and Nathan Sturtevant Department of Computing Science University of Alberta Edmonton, Alberta, Canada {kirci,nathanst,jonathan}@cs.ualberta.ca

More information

Game Design Verification using Reinforcement Learning

Game Design Verification using Reinforcement Learning Game Design Verification using Reinforcement Learning Eirini Ntoutsi Dimitris Kalles AHEAD Relationship Mediators S.A., 65 Othonos-Amalias St, 262 21 Patras, Greece and Department of Computer Engineering

More information

CS-E4800 Artificial Intelligence

CS-E4800 Artificial Intelligence CS-E4800 Artificial Intelligence Jussi Rintanen Department of Computer Science Aalto University March 9, 2017 Difficulties in Rational Collective Behavior Individual utility in conflict with collective

More information

Five-In-Row with Local Evaluation and Beam Search

Five-In-Row with Local Evaluation and Beam Search Five-In-Row with Local Evaluation and Beam Search Jiun-Hung Chen and Adrienne X. Wang jhchen@cs axwang@cs Abstract This report provides a brief overview of the game of five-in-row, also known as Go-Moku,

More information

CS440/ECE448 Lecture 9: Minimax Search. Slides by Svetlana Lazebnik 9/2016 Modified by Mark Hasegawa-Johnson 9/2017

CS440/ECE448 Lecture 9: Minimax Search. Slides by Svetlana Lazebnik 9/2016 Modified by Mark Hasegawa-Johnson 9/2017 CS440/ECE448 Lecture 9: Minimax Search Slides by Svetlana Lazebnik 9/2016 Modified by Mark Hasegawa-Johnson 9/2017 Why study games? Games are a traditional hallmark of intelligence Games are easy to formalize

More information

Adversarial Search. CMPSCI 383 September 29, 2011

Adversarial Search. CMPSCI 383 September 29, 2011 Adversarial Search CMPSCI 383 September 29, 2011 1 Why are games interesting to AI? Simple to represent and reason about Must consider the moves of an adversary Time constraints Russell & Norvig say: Games,

More information

Adversarial Search (Game Playing)

Adversarial Search (Game Playing) Artificial Intelligence Adversarial Search (Game Playing) Chapter 5 Adapted from materials by Tim Finin, Marie desjardins, and Charles R. Dyer Outline Game playing State of the art and resources Framework

More information

CITS3001. Algorithms, Agents and Artificial Intelligence. Semester 2, 2016 Tim French

CITS3001. Algorithms, Agents and Artificial Intelligence. Semester 2, 2016 Tim French CITS3001 Algorithms, Agents and Artificial Intelligence Semester 2, 2016 Tim French School of Computer Science & Software Eng. The University of Western Australia 8. Game-playing AIMA, Ch. 5 Objectives

More information

Computer Science and Software Engineering University of Wisconsin - Platteville. 4. Game Play. CS 3030 Lecture Notes Yan Shi UW-Platteville

Computer Science and Software Engineering University of Wisconsin - Platteville. 4. Game Play. CS 3030 Lecture Notes Yan Shi UW-Platteville Computer Science and Software Engineering University of Wisconsin - Platteville 4. Game Play CS 3030 Lecture Notes Yan Shi UW-Platteville Read: Textbook Chapter 6 What kind of games? 2-player games Zero-sum

More information

Game-playing AIs: Games and Adversarial Search FINAL SET (w/ pruning study examples) AIMA

Game-playing AIs: Games and Adversarial Search FINAL SET (w/ pruning study examples) AIMA Game-playing AIs: Games and Adversarial Search FINAL SET (w/ pruning study examples) AIMA 5.1-5.2 Games: Outline of Unit Part I: Games as Search Motivation Game-playing AI successes Game Trees Evaluation

More information

Adversarial search (game playing)

Adversarial search (game playing) Adversarial search (game playing) References Russell and Norvig, Artificial Intelligence: A modern approach, 2nd ed. Prentice Hall, 2003 Nilsson, Artificial intelligence: A New synthesis. McGraw Hill,

More information

CS 188: Artificial Intelligence

CS 188: Artificial Intelligence CS 188: Artificial Intelligence Adversarial Search Instructor: Stuart Russell University of California, Berkeley Game Playing State-of-the-Art Checkers: 1950: First computer player. 1959: Samuel s self-taught

More information

Further Evolution of a Self-Learning Chess Program

Further Evolution of a Self-Learning Chess Program Further Evolution of a Self-Learning Chess Program David B. Fogel Timothy J. Hays Sarah L. Hahn James Quon Natural Selection, Inc. 3333 N. Torrey Pines Ct., Suite 200 La Jolla, CA 92037 USA dfogel@natural-selection.com

More information

ARTIFICIAL INTELLIGENCE (CS 370D)

ARTIFICIAL INTELLIGENCE (CS 370D) Princess Nora University Faculty of Computer & Information Systems ARTIFICIAL INTELLIGENCE (CS 370D) (CHAPTER-5) ADVERSARIAL SEARCH ADVERSARIAL SEARCH Optimal decisions Min algorithm α-β pruning Imperfect,

More information

ADVERSARIAL SEARCH. Chapter 5

ADVERSARIAL SEARCH. Chapter 5 ADVERSARIAL SEARCH Chapter 5... every game of skill is susceptible of being played by an automaton. from Charles Babbage, The Life of a Philosopher, 1832. Outline Games Perfect play minimax decisions α

More information

AI, AlphaGo and computer Hex

AI, AlphaGo and computer Hex a math and computing story computing.science university of alberta 2018 march thanks Computer Research Hex Group Michael Johanson, Yngvi Björnsson, Morgan Kan, Nathan Po, Jack van Rijswijck, Broderick

More information

CS 440 / ECE 448 Introduction to Artificial Intelligence Spring 2010 Lecture #5

CS 440 / ECE 448 Introduction to Artificial Intelligence Spring 2010 Lecture #5 CS 440 / ECE 448 Introduction to Artificial Intelligence Spring 2010 Lecture #5 Instructor: Eyal Amir Grad TAs: Wen Pu, Yonatan Bisk Undergrad TAs: Sam Johnson, Nikhil Johri Topics Game playing Game trees

More information

An Artificially Intelligent Ludo Player

An Artificially Intelligent Ludo Player An Artificially Intelligent Ludo Player Andres Calderon Jaramillo and Deepak Aravindakshan Colorado State University {andrescj, deepakar}@cs.colostate.edu Abstract This project replicates results reported

More information

Computing Science (CMPUT) 496

Computing Science (CMPUT) 496 Computing Science (CMPUT) 496 Search, Knowledge, and Simulations Martin Müller Department of Computing Science University of Alberta mmueller@ualberta.ca Winter 2017 Part IV Knowledge 496 Today - Mar 9

More information

School of EECS Washington State University. Artificial Intelligence

School of EECS Washington State University. Artificial Intelligence School of EECS Washington State University Artificial Intelligence 1 } Classic AI challenge Easy to represent Difficult to solve } Zero-sum games Total final reward to all players is constant } Perfect

More information

More on games (Ch )

More on games (Ch ) More on games (Ch. 5.4-5.6) Alpha-beta pruning Previously on CSci 4511... We talked about how to modify the minimax algorithm to prune only bad searches (i.e. alpha-beta pruning) This rule of checking

More information

Playing Othello Using Monte Carlo

Playing Othello Using Monte Carlo June 22, 2007 Abstract This paper deals with the construction of an AI player to play the game Othello. A lot of techniques are already known to let AI players play the game Othello. Some of these techniques

More information

Adversarial Search and Game Playing

Adversarial Search and Game Playing Games Adversarial Search and Game Playing Russell and Norvig, 3 rd edition, Ch. 5 Games: multi-agent environment q What do other agents do and how do they affect our success? q Cooperative vs. competitive

More information

Aja Huang Cho Chikun David Silver Demis Hassabis. Fan Hui Geoff Hinton Lee Sedol Michael Redmond

Aja Huang Cho Chikun David Silver Demis Hassabis. Fan Hui Geoff Hinton Lee Sedol Michael Redmond CMPUT 396 3 hr closedbook 6 pages, 7 marks/page page 1 1. [3 marks] For each person or program, give the label of its description. Aja Huang Cho Chikun David Silver Demis Hassabis Fan Hui Geoff Hinton

More information

Matthew Sadler and Natasha Regan. Game Changer. AlphaZero s Groundbreaking Chess Strategies and the Promise of AI

Matthew Sadler and Natasha Regan. Game Changer. AlphaZero s Groundbreaking Chess Strategies and the Promise of AI Matthew Sadler and Natasha Regan Game Changer AlphaZero s Groundbreaking Chess Strategies and the Promise of AI New In Chess 2019 Contents Explanation of symbols 6 Foreword by Garry Kasparov 7 Introduction

More information

CS 380: ARTIFICIAL INTELLIGENCE ADVERSARIAL SEARCH. Santiago Ontañón

CS 380: ARTIFICIAL INTELLIGENCE ADVERSARIAL SEARCH. Santiago Ontañón CS 380: ARTIFICIAL INTELLIGENCE ADVERSARIAL SEARCH Santiago Ontañón so367@drexel.edu Recall: Problem Solving Idea: represent the problem we want to solve as: State space Actions Goal check Cost function

More information

Game Playing AI Class 8 Ch , 5.4.1, 5.5

Game Playing AI Class 8 Ch , 5.4.1, 5.5 Game Playing AI Class Ch. 5.-5., 5.4., 5.5 Bookkeeping HW Due 0/, :59pm Remaining CSP questions? Cynthia Matuszek CMSC 6 Based on slides by Marie desjardin, Francisco Iacobelli Today s Class Clear criteria

More information