SEARCH VS KNOWLEDGE: EMPIRICAL STUDY OF MINIMAX ON KRK ENDGAME


Aleksander Sadikov, Ivan Bratko, Igor Kononenko
University of Ljubljana, Faculty of Computer and Information Science, Tržaška 25, 1000 Ljubljana, Slovenia

Abstract: This article presents the results of an empirical experiment designed to gain insight into the effect of minimax on the evaluation function. The simulations were performed on the KRK chess endgame. The main result is that the dependencies between evaluations of sibling nodes in a game tree and the abundance of opportunities to commit blunders present in the KRK endgame are not enough to explain the success of minimax in practical game playing, as was previously believed. The article argues that the correlation between a high branching factor in the game tree and having a winning move available is something minimax implicitly takes into account, and that this is at least partially responsible for its success. The second part of the article presents weighted minimax, an attempt at exploiting the dependencies between evaluations of sibling nodes in a game tree, and shows that its results could be promising.

Key words: minimax principle, KRK chess endgame, weighted minimax

1. INTRODUCTION

Over twenty years ago Beal (1980) set out to analyze whether and why values backed up from minimax search are more trustworthy than the heuristic values themselves. He constructed a simple mathematical model to analyze the minimax algorithm. To his surprise, the analysis of the model showed that backed-up values are actually somewhat less trustworthy than the heuristic values themselves. He writes: This result is disappointing. It was hoped that the analysis would show that the probability of error reduced with

backing-up. A couple of years later two articles (Beal, 1982; Bratko and Gams, 1982) simultaneously conducted further analysis into why minimax does yield good results in practical game playing while backed-up values seem less reliable, and both reached the same conclusion. They argued that the true values of sibling nodes in a game tree are not independent of one another. This clustering of similar values is a major feature of practical games, and it was this phenomenon that Beal's mathematical model did not account for. The problem with the minimax paradigm under the assumption of sibling-value independence was also confirmed by Nau (1982, 1983), who called it a search-depth pathology in game trees. His simulation (Nau, 1982), in which strong dependencies between sibling nodes were introduced, concurs that such dependencies can make the search-depth pathology disappear. However, Pearl (1984) partly disagrees with the conclusion reached by Beal, Bratko, Gams and Nau, and claims that while strong dependencies between sibling nodes in a game tree can eliminate the pathology, practical games like chess do not possess dependencies of sufficient strength. He points out that there are few chess positions so strong that they cannot be spoiled abruptly if one really tries hard to do so. He concludes that the success of minimax in game-playing programs rests on the fact that common games do not possess a uniform structure but are riddled with early terminal positions, colloquially named blunders, pitfalls or traps. Close ancestors of such traps carry more reliable evaluations than the rest of the nodes, and when more of these ancestors are exposed by the search, the decisions become more valid. Moreover, Schrüfer (1986) and a follow-up by Althöfer (1989) further analyzed pathology in game trees.
Especially interesting is their observation that to avoid pathology, an evaluation function must, among other things, have a negligible probability of underestimating a position from the perspective of the player to move. All of the above studies have two things in common: (a) they accept the empirical evidence that the minimax principle works in practical game-playing programs, and (b) they try to mathematically model the minimax algorithm and theoretically deduce what happens when the heuristic values assigned to the leaves are backed up towards the root of the game tree. To make such mathematical analysis feasible, the researchers are forced to make certain assumptions about the game they model and to introduce simplifications into their model. Thus, the results of these models are always to be viewed with these assumptions and simplifications in the back of one's mind. In contrast, our approach in this article is to take (part of) a real game with a real evaluation function and observe empirically what happens when we change the search depth and the quality of the evaluation function. We have at our disposal an absolutely correct evaluation function which we can corrupt in a controlled way. We also have a minimax search

engine that is capable of searching right down to the terminal nodes of the game tree. The next chapter describes our choice of the game, the evaluation function and its artificial corruption, as well as the search engine. Chapter 3 presents the results for various settings of our simulation parameters. Chapter 4 unveils the outcome of our attempt to improve minimax search by letting it acknowledge the dependencies between sibling nodes in a game tree. At the end we give our conclusions and some ideas for further work.

2. EXPERIMENTAL DESIGN

We have decided to center our simulations on a simple subset of chess: the KRK endgame. In this endgame White has a king and a rook, while Black has only his king. The goal for White is to mate his opponent, striving to do so in as few moves as possible. There are two possible outcomes of this endgame: a win for White or a draw. While the KRK endgame is very simple, it still possesses all the interesting attributes: positions are of various difficulties (measured in the number of moves to mate), there surely exist dependencies between the values of sibling nodes in a game tree, and there is a possibility of blunders and early termination for both sides (stalemate or losing the rook for White; premature mate for Black). We are interested in the quality of play for White under different conditions. Therefore, unless stated otherwise, we always look at things from the White player's perspective. For the KRK endgame we have at our disposal an absolutely correct evaluation function. It is in the form of a database that consists of all possible legal positions and their values. The database can be obtained from the UCI Machine Learning Repository (Blake and Merz, 1998). The value for each position tells us how many moves are needed to reach mate in the case that both players play optimally. The positions in the database always assume it is Black's turn to move.
There are two special cases: value 0 means Black is mated and value 255 means that Black has a draw (either the position is a stalemate or Black can capture the white rook). The database consists of 28,056 positions. There are actually over 200,000 legal KRK positions; however, board symmetries allow for such a reduction. A detailed description of the database and the board symmetries is given in (Bain, 1992). Our version of the database is implemented as an array of 28,056 cells and can be viewed as a sort of transposition table. Apart from the positions having the special evaluations of 0 or 255, there are 25,233 positions divided into 16 levels of difficulty. Positions from level 1 require one move (2 plies) to mate (assuming optimal play); positions from level 2

require two moves to mate, and so on. Positions from the most difficult level require 16 moves (32 plies) to mate. Different levels have different numbers of positions. There are 4,553 positions of level 14 and only 390 positions of level 16. Figure 1 shows how many cases (positions) are left unsolved if we apply searches of various depths without any knowledge apart from the rules of the game. The term unsolved in this context means that White has to make a move without knowing at that time the complete move-tree that guarantees a mate. The curve starts to fall significantly between depths of 14 and 20 plies, and after ply 20 it drops steeply towards zero.

Figure 1. Number of unsolved cases as a function of search depth

For the purpose of our experiments we corrupted the ideal evaluation function in a controlled manner. Our method of doing this is as follows. We take a position value and add to it a certain amount of Gaussian noise. The formula

P(x) = (1 / (σ√(2π))) · e^(−(x − µ)² / (2σ²))

gives the probability P(x)dx that, given the correct evaluation µ and standard deviation σ, the new (corrupted) evaluation x, a real number, will take on a value in the range [x, x + dx]. The error of the new evaluation is x − µ. We do this for all positions in the database. The corruption is symmetrical, meaning that there is practically an equal chance that the new evaluation will be optimistic or pessimistic. Exceptions are positions closer to mate, where there is a higher probability of pessimistic evaluations, especially with higher degrees of corruption. This is due to the fact that we do not allow x to become negative (we redo the corruption of the position until it is non-negative). The level of corruption is controlled by the parameter σ, which is in fact the standard deviation and controls how dispersed the corrupted values x are around the correct values (the width of the Gaussian bell). The standard deviation is measured in moves. For example, if σ equals 0.5, this means that approximately two thirds of the corrupted evaluations are within 0.5 moves of the true evaluation and over 95% of the corrupted evaluations are within 1.0 move (two standard deviations) of the true evaluation. To be able to compare the quality of the initial knowledge (evaluation function) to the quality of the knowledge after backing up the values with the minimax algorithm, we have to be able to calculate the standard deviation after minimaxing. This is easy, because our search algorithm returns the backed-up values from a fixed search depth for every unique KRK position in an array exactly like our initial database. This array is in fact our backed-up evaluation function. We thus have one such array for every search depth from 0 (the initial database) to 32 plies.
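The corruption and error-measurement procedures described above can be sketched as follows (a minimal sketch; the function and variable names are ours, not from the paper):

```python
import random

def corrupt(true_value, sigma, rng=random):
    """Add Gaussian noise (std. dev. sigma, in moves) to a true evaluation,
    resampling until the corrupted value is non-negative, as in the text."""
    while True:
        x = rng.gauss(true_value, sigma)
        if x >= 0:
            return x

def std_error(values, true_values):
    """Standard deviation of the (corrupted or backed-up) values around
    the true values: sqrt((1/N) * sum_i (x_i - mu_i)^2)."""
    n = len(values)
    return (sum((x - mu) ** 2 for x, mu in zip(values, true_values)) / n) ** 0.5
```

Applied to a whole database of true values, `std_error` is exactly the σ used to compare evaluation-function quality across search depths.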
After obtaining such an array, we calculate σ with the formula:

σ = √( (1/N) · Σ_{i=1}^{N} (x_i − µ_i)² )

where x_i is the backed-up corrupted value and µ_i is the true value for position i, and N is the number of positions in the array. This gives us a tool to directly monitor how minimax affects the quality of the evaluation function. Our search engine is the standard fixed-depth minimax search. We were able to search up to the maximal needed search depth of 32 plies (after 32 plies, or 16 moves, all the positions are solved by search alone) by exploiting the fact that the KRK endgame has a comparatively small number of unique (under symmetries) positions (28,056), which we can all remember in a sort of transposition table. We start at depth 0 by loading the values from a (corrupted) database, then move on to depth 2, perform a 2-ply minimax search using the results of the previous depth as evaluations of the leaves, store the results of the depth-2 search, move on to depth 4, and so on. For terminal

nodes (mate or draw) we use their true game values and not the evaluation from the database / transposition table.

3. THE RESULTS OF THE EXPERIMENTS

Figure 2 shows what happens to the quality of the evaluation function when we change the search depth. The x-axis represents the search depth measured in plies and the y-axis represents the standard deviation σ measured in moves. Each curve in the graph represents a different evaluation function; they differ in the level of their corruption (the initial σ). The legend marks these different evaluation functions with the size of their initial σ. The best way to tell the curves apart is to look at their initial corruption. The last three functions, marked rand, are added for comparison; their initial values are random real numbers between 0 and 50. They represent zero-knowledge evaluation functions.

Figure 2. Influence of search depth on quality of evaluation function

It seems that we have to divide the evaluation functions into two groups: in the first group we have evaluation functions with (relatively) low initial error, and in the second group those with a high initial error. The first group is a realistic model of real-life evaluation functions, while the second group contains evaluation functions with (almost) zero knowledge. We can observe that evaluation functions with an initial σ less than 3.0 do not exhibit the

tendency to drop speedily towards 0 (perfect knowledge). They drop slightly or remain on the same level until very deep search is invoked. One has to bear in mind that when the search depth reaches 20 plies a large portion of the positions is solved by the search alone, and that heavily influences the assessment of σ for very deep searches. That is probably the only reason why they eventually do drop. On the other hand, we can see that evaluation functions with an initial σ of 10 (or even 5) or more do exhibit the expected behaviour of speedily dropping with increased search depth. As the graph shows, they behave basically the same as the random evaluation functions. From this we can conclude that the corruption stripped just about all their knowledge and they are in fact random evaluation functions themselves. To circumvent the effect of deep searches, where many of the positions are solved by the search alone, we have made another analysis in which we disregarded the effect of the solved positions. Figure 3 shows the results of this analysis. Everything is done exactly as before; the only exception is in calculating the new standard deviation. Positions which are solved by the search are omitted from the calculation, thus reducing the number of positions N. If we take a closer look at Figure 3, we can see that there is not much change in comparison with Figure 2. The evaluation functions from group two behave more or less the same, while the evaluation functions from group one resist dropping a little longer. When looking at Figure 3, one must bear in mind that from search depth 28 onward the curves are unreliable due to the small number of positions that are left unsolved by the search and used to calculate σ. The end result is that for reasonable evaluation functions (group one) searching deeper does not (significantly) improve the quality of the evaluation function for playing the KRK endgame.
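As a recap of the search procedure from Chapter 2, the bottom-up computation of the backed-up arrays — each new depth built from the previous one, with terminal positions keeping their true game values — can be sketched as follows (a toy sketch over a dictionary of positions; all names are ours):

```python
def back_up_once(values, successors, is_terminal, minimizing):
    """One ply of backing up over a transposition-table-like dict.

    values:      previous-depth evaluation of every position (moves to mate)
    successors:  pos -> list of positions reachable in one move
    is_terminal: terminal positions keep their true game values
    minimizing:  True where the side to move wants fewer moves to mate (White)
    """
    new = {}
    for pos, val in values.items():
        if is_terminal(pos):
            new[pos] = val  # true game value, never overwritten
        else:
            succ = [values[s] for s in successors[pos]]
            new[pos] = min(succ) if minimizing(pos) else max(succ)
    return new

def back_up(values, successors, is_terminal, minimizing, plies):
    """Iterate: each depth's array is built from the previous depth's array."""
    for _ in range(plies):
        values = back_up_once(values, successors, is_terminal, minimizing)
    return values
```

Because every depth's array is stored whole, computing depth d costs only one sweep over the 28,056 entries rather than a full depth-d tree search.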
The endgame undoubtedly contains dependencies between the values of sibling nodes in a game tree. It is also full of possibilities for blunders on the part of the White player: White can, after all, lose his rook in at most two moves if Black plays normally. This means that both proposed reasons why minimax is effective in practice are present, and yet the pathology persists. On the other hand, the evaluation functions from group two, which are (practically) random, benefit a lot from searching deeper. How can all this be explained?

Figure 3. Influence of search depth on quality of evaluation function (disregarding search-solved cases)

The answer to why the random evaluation functions from group two benefit a lot from searching deeper is given in (Beal and Smith, 1994). The authors conducted an experiment in which they pitted a chess-playing program using a random evaluation function against a program using a random evaluation function and a fixed-depth minimax search. The latter program completely dominated. We can confirm this result, because our simulation of the first program returned a constant standard deviation of around 20 for all search depths up to 26 plies. The explanation for this phenomenon is that there exists a correlation between a high branching factor in the game tree and having a winning move available. Minimax implicitly takes the mobility parameter into account at all depths of the game tree (which is not the same as using an evaluation function that explicitly counts the available moves). A further theoretical investigation of this phenomenon is given in (Levene and Fenner, 2001). This phenomenon might also be another part of the answer to the question why minimax is successful in practice. It is possible that it has some effect on all evaluation functions (not just random ones), and it is possible that it prevents the error of the evaluation function from increasing unreasonably. Could it be that, with the initial knowledge of the evaluation function at least preserved, minimax's ability to avoid falling into traps might be enough for the minimax principle to be successful?
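The implicit mobility effect can be illustrated with a small Monte-Carlo sketch (ours, not from the paper): when leaf evaluations are random, the best value over a node's successors grows with the number of successors, so high-mobility nodes systematically receive better backed-up values even though the evaluations carry no knowledge.

```python
import random

def expected_best(n_moves, trials=20000):
    """Monte-Carlo estimate of the expected backed-up value of a node with
    n_moves successors whose evaluations are independent uniform on [0, 1).
    For uniform values the exact answer is n / (n + 1): more mobility
    means a better backed-up value under a purely random evaluation."""
    rng = random.Random(0)
    return sum(max(rng.random() for _ in range(n_moves))
               for _ in range(trials)) / trials
```

For example, a node with 10 legal moves gets an expected backed-up value of about 10/11, against 2/3 for a node with only 2 moves.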

Up to this point we did not say anything about how well a computer program using one of our corrupted evaluation functions would actually play. The answer is given in Figure 4. We have played out all unique KRK positions except the ones with the special values of 0 or 255, in total 25,233 positions. The White player was guided by a corrupted evaluation function; the Black player always played optimally. We measured the quality of play as the average number of moves above what an optimal White player (using a non-corrupted database) would need. This statistic is computed as the difference between the number of moves spent by White over all positions and the number of moves needed over all positions under optimal play, divided by the number of positions (25,233). It is hard to separate the different curves in Figure 4: curve 0.25 is practically on the x-axis and therefore hardly visible; the other curves then follow from left to right by increasing level of corruption, with the exception of one curve, which is actually the last one on the right.

Figure 4. Quality of play using corrupted evaluation functions

In Figure 4 we can see that the evaluation function with an initial corruption level of 0.25 moves provides practically optimal play already at search depth 0. The other evaluation functions from group one play well (spending no more than 1 move above optimal play) from search depth 6-8 onward. The evaluation functions from group two play well only after reaching search depth 14.
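The quality-of-play statistic can be written down directly (a trivial sketch with our own names):

```python
def avg_moves_above_optimum(moves_spent, optimal_moves):
    """Average number of extra moves the corrupted player needed:
    (total moves spent - total moves under optimal play) / number of positions."""
    if len(moves_spent) != len(optimal_moves):
        raise ValueError("one entry per test position expected")
    return (sum(moves_spent) - sum(optimal_moves)) / len(moves_spent)
```

In the experiment, `moves_spent` would hold the move counts of the corrupted White player over all 25,233 test positions and `optimal_moves` the counts from the non-corrupted database.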

4. WEIGHTED MINIMAX

Since there are dependencies between the evaluations of the sibling nodes in a game tree, we were curious whether it is possible to exploit them. We introduce the idea of weighted minimax. It works just like normal minimax search, except that the evaluation of a node takes into account the values of its sibling nodes. The closer a sibling's evaluation is to the evaluation of the node under investigation, the bigger the weight assigned to it, which in turn means that it has more effect on the evaluation. If a terminal node is reached by the search, its evaluation remains unchanged. Only the best of all the siblings needs to have its value modified as described. Figure 5 explains how weighted minimax computes the evaluation of a node. It starts by ordering the siblings by their evaluations from best to worst. Then it calculates the weighted minimax evaluation weval for the best sibling using the following formula:

weval(Pos) = ( Σ_{i=1}^{n} (1/3^i) · eval(Pos_i) ) / ( Σ_{i=1}^{n} (1/3^i) )

where eval(Pos_i) is the evaluation of sibling i (Pos_1 being the best) and n is the number of siblings.

Figure 5. Calculation of weighted minimax

The results of the simulation using weighted minimax in the form described above are presented in Figure 6. Group two evaluation functions drop as with regular minimax; they even seem to drop a little faster. On the other hand, the group one evaluation functions exhibit an interesting behaviour. At first (up to ply 4-8) they drop significantly, which means that weighting is useful at these search depths. The exception is the initially least corrupted evaluation function (0.25f). Then they begin to rise back up, well past the point of their initial corruption. For larger search depths, weighting is clearly harmful. In general, it seems that the worse the initial knowledge, the more

beneficial weighting is at low search depths, and vice versa. If this is true, weighting could serve either to improve the search evaluations of a game-playing program or to confirm that the existing evaluation is good enough.

Figure 6. Weighted minimax, version 1

We have done one further experiment with weighted minimax. We were afraid that bad moves, which are present in practically all positions, get too much influence despite being given low weights. We wondered what would happen if we fully disregarded their influence on the evaluation. The second version of our weighted minimax algorithm works exactly like the first version, except that it disregards those siblings whose evaluations are worse than the evaluation of the node under investigation by more than a certain threshold. In our experiment we set the threshold at 3.0 moves. The results of the experiment with the second version of the weighted minimax algorithm are presented in Figure 7. We can see that they do not differ much from the results of the first experiment. The main difference is that at low search depths there is slightly more benefit from using weighted minimax. We believe that the gains could be further increased by fine-tuning the threshold value as well as the function that assigns weights to the sibling nodes.
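Both versions can be sketched as follows (a minimal sketch based on our reconstruction of the weval formula above; names are ours). Since for White "best" means fewest moves to mate, the siblings are assumed to be sorted in ascending order:

```python
def weval(siblings, threshold=None):
    """Weighted evaluation of the best sibling.

    siblings:  evaluations (moves to mate) ordered from best (smallest)
               to worst; the i-th sibling gets weight (1/3)**i.
    threshold: version 2 -- drop siblings worse than the best by more
               than this many moves (the text uses 3.0) before weighting.
    """
    best = siblings[0]
    if threshold is not None:
        siblings = [e for e in siblings if e - best <= threshold]
    num = sum(e / 3 ** i for i, e in enumerate(siblings, start=1))
    den = sum(1 / 3 ** i for i in range(1, len(siblings) + 1))
    return num / den
```

With a single sibling the weighted value equals the plain minimax value, and the geometric weights mean each further sibling has one third the influence of the previous one.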

Figure 7. Weighted minimax, version 2

At the moment we are unable to give a credible explanation of why weighted minimax behaves as it does. Our hypothesis is the following. Weighting helps an erroneous evaluation function up to the point where the function becomes quite good (this point seems to be around σ = 0.4). When the evaluation function is good, further weighting becomes harmful, much as it is harmful for terminal positions, where we already know the correct evaluation: there is no doubt that a correct evaluation can only be harmed, not improved. For the evaluation function with an initial σ = 0.25, weighting is harmful from the start, because its σ is already below 0.4. The evaluation function with an initial σ = 2 is no exception: it falls until it reaches 0.4, then it rises again. Situations that do not conform to this hypothesis occur only at depths of 26 plies and beyond (the evaluation function with initial σ = 5 in Figure 6; σ = 2 and σ = 20 in Figure 7). However, the graphs in Figures 6 and 7 disregard the effect of the solved positions (just like the graph in Figure 3) and we believe them to be quite unreliable for very high depths (26 plies and beyond). From the results of these experiments we suggest that weighted minimax might be useful at low search depths (in the top part of the game tree). The idea should be tried in a serious game-playing program.

5. CONCLUSIONS AND FURTHER WORK

Theoretical studies of the minimax principle in the past have shown that it has a negative effect on the quality of the evaluation function. As the answer to why it is nevertheless successful in practice, they suggested two reasons: (a) dependencies between the evaluations of sibling nodes in a game tree, and (b) the existence of traps that cause early terminations of the game. We have taken the opposite approach to the problem: we tried to empirically check these conclusions using the KRK chess endgame. We can confirm that the minimax algorithm is in fact a poor preserver of the knowledge built into the evaluation function. Yet, regardless of that, it proved to still be successful in actual play (Figure 4). It turns out that even with dependencies between the evaluations of sibling nodes in a game tree and an abundance of possibilities to commit blunders present in our endgame, the anomaly still existed. We believe that the result reached by Beal and Smith (1994) is of great importance; we are convinced that it is part of the answer why minimax is successful. In the second part of the article we presented weighted minimax, an attempt to exploit the dependencies between the evaluations of sibling nodes in a game tree. It seems to work; however, we need to know when it reaches the point where it becomes harmful. Fine-tuning its parameters might further increase its benefits. In view of the presented results, further work should focus on testing whether our hypothesis regarding the behaviour of weighted minimax stands. It would also be very interesting to recheck our results using a more complex game, perhaps the KRKN or KRKP chess endgame.

REFERENCES

Althöfer, I. (1989) Generalized minimax algorithms are no better error correctors than minimax itself, Advances in Computer Chess 5 (ed. D.F. Beal), Elsevier Science Publishers.
Bain, M. (1992) Learning optimal chess strategies, Proc. Intl.
Workshop on Inductive Logic Programming (ed. S. Muggleton), Institute for New Generation Computer Technology, Tokyo, Japan.
Beal, D.F. (1980) An Analysis of Minimax, Advances in Computer Chess 2 (ed. M.R.B. Clarke), Edinburgh University Press.
Beal, D.F. (1982) Benefits of minimax search, Advances in Computer Chess 3 (ed. M.R.B. Clarke), pp. 1-15, Pergamon Press.
Beal, D.F. and Smith, M.C. (1994) Random Evaluations in Chess, ICCA Journal, Vol. 17(1).
Blake, C.L. and Merz, C.J. (1998) UCI Repository of machine learning databases, Irvine, CA: University of California, Department of Information and Computer Science.

Bratko, I. and Gams, M. (1982) Error analysis of the minimax principle, Advances in Computer Chess 3 (ed. M.R.B. Clarke), pp. 1-15, Pergamon Press.
Levene, M. and Fenner, T.I. (2001) The Effect of Mobility on Minimaxing of Game Trees with Random Leaf Values, Artificial Intelligence, 130.
Nau, D.S. (1982) An Investigation of the Causes of Pathology in Games, Artificial Intelligence, 19.
Nau, D.S. (1983) Pathology on Game Trees Revisited, and an Alternative to Minimaxing, Artificial Intelligence, 21(1-2).
Pearl, J. (1984) Heuristics: Intelligent Search Strategies for Computer Problem Solving, Addison-Wesley Publishing Company.
Schrüfer, G. (1986) Presence and absence of pathology on game trees, Advances in Computer Chess 4 (ed. D.F. Beal), Pergamon Press.


More information

Adversarial Search. CMPSCI 383 September 29, 2011

Adversarial Search. CMPSCI 383 September 29, 2011 Adversarial Search CMPSCI 383 September 29, 2011 1 Why are games interesting to AI? Simple to represent and reason about Must consider the moves of an adversary Time constraints Russell & Norvig say: Games,

More information

game tree complete all possible moves

game tree complete all possible moves Game Trees Game Tree A game tree is a tree the nodes of which are positions in a game and edges are moves. The complete game tree for a game is the game tree starting at the initial position and containing

More information

Game Tree Search. CSC384: Introduction to Artificial Intelligence. Generalizing Search Problem. General Games. What makes something a game?

Game Tree Search. CSC384: Introduction to Artificial Intelligence. Generalizing Search Problem. General Games. What makes something a game? CSC384: Introduction to Artificial Intelligence Generalizing Search Problem Game Tree Search Chapter 5.1, 5.2, 5.3, 5.6 cover some of the material we cover here. Section 5.6 has an interesting overview

More information

Adversarial Search. Soleymani. Artificial Intelligence: A Modern Approach, 3 rd Edition, Chapter 5

Adversarial Search. Soleymani. Artificial Intelligence: A Modern Approach, 3 rd Edition, Chapter 5 Adversarial Search CE417: Introduction to Artificial Intelligence Sharif University of Technology Spring 2017 Soleymani Artificial Intelligence: A Modern Approach, 3 rd Edition, Chapter 5 Outline Game

More information

CS 771 Artificial Intelligence. Adversarial Search

CS 771 Artificial Intelligence. Adversarial Search CS 771 Artificial Intelligence Adversarial Search Typical assumptions Two agents whose actions alternate Utility values for each agent are the opposite of the other This creates the adversarial situation

More information

Adversarial Reasoning: Sampling-Based Search with the UCT algorithm. Joint work with Raghuram Ramanujan and Ashish Sabharwal

Adversarial Reasoning: Sampling-Based Search with the UCT algorithm. Joint work with Raghuram Ramanujan and Ashish Sabharwal Adversarial Reasoning: Sampling-Based Search with the UCT algorithm Joint work with Raghuram Ramanujan and Ashish Sabharwal Upper Confidence bounds for Trees (UCT) n The UCT algorithm (Kocsis and Szepesvari,

More information

Laboratory 1: Uncertainty Analysis

Laboratory 1: Uncertainty Analysis University of Alabama Department of Physics and Astronomy PH101 / LeClair May 26, 2014 Laboratory 1: Uncertainty Analysis Hypothesis: A statistical analysis including both mean and standard deviation can

More information

AI Approaches to Ultimate Tic-Tac-Toe

AI Approaches to Ultimate Tic-Tac-Toe AI Approaches to Ultimate Tic-Tac-Toe Eytan Lifshitz CS Department Hebrew University of Jerusalem, Israel David Tsurel CS Department Hebrew University of Jerusalem, Israel I. INTRODUCTION This report is

More information

Adversarial Search (Game Playing)

Adversarial Search (Game Playing) Artificial Intelligence Adversarial Search (Game Playing) Chapter 5 Adapted from materials by Tim Finin, Marie desjardins, and Charles R. Dyer Outline Game playing State of the art and resources Framework

More information

COMP219: COMP219: Artificial Intelligence Artificial Intelligence Dr. Annabel Latham Lecture 12: Game Playing Overview Games and Search

COMP219: COMP219: Artificial Intelligence Artificial Intelligence Dr. Annabel Latham Lecture 12: Game Playing Overview Games and Search COMP19: Artificial Intelligence COMP19: Artificial Intelligence Dr. Annabel Latham Room.05 Ashton Building Department of Computer Science University of Liverpool Lecture 1: Game Playing 1 Overview Last

More information

Adversarial Search. CS 486/686: Introduction to Artificial Intelligence

Adversarial Search. CS 486/686: Introduction to Artificial Intelligence Adversarial Search CS 486/686: Introduction to Artificial Intelligence 1 Introduction So far we have only been concerned with a single agent Today, we introduce an adversary! 2 Outline Games Minimax search

More information

2 person perfect information

2 person perfect information Why Study Games? Games offer: Intellectual Engagement Abstraction Representability Performance Measure Not all games are suitable for AI research. We will restrict ourselves to 2 person perfect information

More information

Search Versus Knowledge in Game-Playing Programs Revisited

Search Versus Knowledge in Game-Playing Programs Revisited Search Versus Knowledge in Game-Playing Programs Revisited Abstract Andreas Junghanns, Jonathan Schaeffer University of Alberta Dept. of Computing Science Edmonton, Alberta CANADA T6G 2H1 Email: fandreas,jonathang@cs.ualberta.ca

More information

AI Module 23 Other Refinements

AI Module 23 Other Refinements odule 23 ther Refinements ntroduction We have seen how game playing domain is different than other domains and how one needs to change the method of search. We have also seen how i search algorithm is

More information

Artificial Intelligence Search III

Artificial Intelligence Search III Artificial Intelligence Search III Lecture 5 Content: Search III Quick Review on Lecture 4 Why Study Games? Game Playing as Search Special Characteristics of Game Playing Search Ingredients of 2-Person

More information

CS 188: Artificial Intelligence

CS 188: Artificial Intelligence CS 188: Artificial Intelligence Adversarial Search Instructor: Stuart Russell University of California, Berkeley Game Playing State-of-the-Art Checkers: 1950: First computer player. 1959: Samuel s self-taught

More information

CS885 Reinforcement Learning Lecture 13c: June 13, Adversarial Search [RusNor] Sec

CS885 Reinforcement Learning Lecture 13c: June 13, Adversarial Search [RusNor] Sec CS885 Reinforcement Learning Lecture 13c: June 13, 2018 Adversarial Search [RusNor] Sec. 5.1-5.4 CS885 Spring 2018 Pascal Poupart 1 Outline Minimax search Evaluation functions Alpha-beta pruning CS885

More information

AN EVALUATION OF TWO ALTERNATIVES TO MINIMAX. Dana Nau 1 Computer Science Department University of Maryland College Park, MD 20742

AN EVALUATION OF TWO ALTERNATIVES TO MINIMAX. Dana Nau 1 Computer Science Department University of Maryland College Park, MD 20742 Uncertainty in Artificial Intelligence L.N. Kanal and J.F. Lemmer (Editors) Elsevier Science Publishers B.V. (North-Holland), 1986 505 AN EVALUATION OF TWO ALTERNATIVES TO MINIMAX Dana Nau 1 University

More information

Monte Carlo tree search techniques in the game of Kriegspiel

Monte Carlo tree search techniques in the game of Kriegspiel Monte Carlo tree search techniques in the game of Kriegspiel Paolo Ciancarini and Gian Piero Favini University of Bologna, Italy 22 IJCAI, Pasadena, July 2009 Agenda Kriegspiel as a partial information

More information

Opponent Models and Knowledge Symmetry in Game-Tree Search

Opponent Models and Knowledge Symmetry in Game-Tree Search Opponent Models and Knowledge Symmetry in Game-Tree Search Jeroen Donkers Institute for Knowlegde and Agent Technology Universiteit Maastricht, The Netherlands donkers@cs.unimaas.nl Abstract In this paper

More information

Unit-III Chap-II Adversarial Search. Created by: Ashish Shah 1

Unit-III Chap-II Adversarial Search. Created by: Ashish Shah 1 Unit-III Chap-II Adversarial Search Created by: Ashish Shah 1 Alpha beta Pruning In case of standard ALPHA BETA PRUNING minimax tree, it returns the same move as minimax would, but prunes away branches

More information

Adversarial Search. CS 486/686: Introduction to Artificial Intelligence

Adversarial Search. CS 486/686: Introduction to Artificial Intelligence Adversarial Search CS 486/686: Introduction to Artificial Intelligence 1 AccessAbility Services Volunteer Notetaker Required Interested? Complete an online application using your WATIAM: https://york.accessiblelearning.com/uwaterloo/

More information

Foundations of AI. 5. Board Games. Search Strategies for Games, Games with Chance, State of the Art. Wolfram Burgard and Luc De Raedt SA-1

Foundations of AI. 5. Board Games. Search Strategies for Games, Games with Chance, State of the Art. Wolfram Burgard and Luc De Raedt SA-1 Foundations of AI 5. Board Games Search Strategies for Games, Games with Chance, State of the Art Wolfram Burgard and Luc De Raedt SA-1 Contents Board Games Minimax Search Alpha-Beta Search Games with

More information

Computing Science (CMPUT) 496

Computing Science (CMPUT) 496 Computing Science (CMPUT) 496 Search, Knowledge, and Simulations Martin Müller Department of Computing Science University of Alberta mmueller@ualberta.ca Winter 2017 Part IV Knowledge 496 Today - Mar 9

More information

Computer Science and Software Engineering University of Wisconsin - Platteville. 4. Game Play. CS 3030 Lecture Notes Yan Shi UW-Platteville

Computer Science and Software Engineering University of Wisconsin - Platteville. 4. Game Play. CS 3030 Lecture Notes Yan Shi UW-Platteville Computer Science and Software Engineering University of Wisconsin - Platteville 4. Game Play CS 3030 Lecture Notes Yan Shi UW-Platteville Read: Textbook Chapter 6 What kind of games? 2-player games Zero-sum

More information

CMPUT 396 Tic-Tac-Toe Game

CMPUT 396 Tic-Tac-Toe Game CMPUT 396 Tic-Tac-Toe Game Recall minimax: - For a game tree, we find the root minimax from leaf values - With minimax we can always determine the score and can use a bottom-up approach Why use minimax?

More information

COMP219: Artificial Intelligence. Lecture 13: Game Playing

COMP219: Artificial Intelligence. Lecture 13: Game Playing CMP219: Artificial Intelligence Lecture 13: Game Playing 1 verview Last time Search with partial/no observations Belief states Incremental belief state search Determinism vs non-determinism Today We will

More information

Adversary Search. Ref: Chapter 5

Adversary Search. Ref: Chapter 5 Adversary Search Ref: Chapter 5 1 Games & A.I. Easy to measure success Easy to represent states Small number of operators Comparison against humans is possible. Many games can be modeled very easily, although

More information

Artificial Intelligence Adversarial Search

Artificial Intelligence Adversarial Search Artificial Intelligence Adversarial Search Adversarial Search Adversarial search problems games They occur in multiagent competitive environments There is an opponent we can t control planning again us!

More information

Artificial Intelligence

Artificial Intelligence Artificial Intelligence CS482, CS682, MW 1 2:15, SEM 201, MS 227 Prerequisites: 302, 365 Instructor: Sushil Louis, sushil@cse.unr.edu, http://www.cse.unr.edu/~sushil Non-classical search - Path does not

More information

Minimaxing Theory and Practice

Minimaxing Theory and Practice AI Magazine Volume 9 Number 3 (988) ( AAAI) Minimaxing Theory and Practice Hermann Kaindl Empirical evidence suggests that searching deeper in game trees using the minimax propagation rule usually improves

More information

37 Game Theory. Bebe b1 b2 b3. a Abe a a A Two-Person Zero-Sum Game

37 Game Theory. Bebe b1 b2 b3. a Abe a a A Two-Person Zero-Sum Game 37 Game Theory Game theory is one of the most interesting topics of discrete mathematics. The principal theorem of game theory is sublime and wonderful. We will merely assume this theorem and use it to

More information

COMP3211 Project. Artificial Intelligence for Tron game. Group 7. Chiu Ka Wa ( ) Chun Wai Wong ( ) Ku Chun Kit ( )

COMP3211 Project. Artificial Intelligence for Tron game. Group 7. Chiu Ka Wa ( ) Chun Wai Wong ( ) Ku Chun Kit ( ) COMP3211 Project Artificial Intelligence for Tron game Group 7 Chiu Ka Wa (20369737) Chun Wai Wong (20265022) Ku Chun Kit (20123470) Abstract Tron is an old and popular game based on a movie of the same

More information

Solving Kriegspiel endings with brute force: the case of KR vs. K

Solving Kriegspiel endings with brute force: the case of KR vs. K Solving Kriegspiel endings with brute force: the case of KR vs. K Paolo Ciancarini Gian Piero Favini University of Bologna 12th Int. Conf. On Advances in Computer Games, Pamplona, Spain, May 2009 The problem

More information

More Adversarial Search

More Adversarial Search More Adversarial Search CS151 David Kauchak Fall 2010 http://xkcd.com/761/ Some material borrowed from : Sara Owsley Sood and others Admin Written 2 posted Machine requirements for mancala Most of the

More information

Algorithms for Data Structures: Search for Games. Phillip Smith 27/11/13

Algorithms for Data Structures: Search for Games. Phillip Smith 27/11/13 Algorithms for Data Structures: Search for Games Phillip Smith 27/11/13 Search for Games Following this lecture you should be able to: Understand the search process in games How an AI decides on the best

More information

Artificial Intelligence 1: game playing

Artificial Intelligence 1: game playing Artificial Intelligence 1: game playing Lecturer: Tom Lenaerts Institut de Recherches Interdisciplinaires et de Développements en Intelligence Artificielle (IRIDIA) Université Libre de Bruxelles Outline

More information

Game Playing Beyond Minimax. Game Playing Summary So Far. Game Playing Improving Efficiency. Game Playing Minimax using DFS.

Game Playing Beyond Minimax. Game Playing Summary So Far. Game Playing Improving Efficiency. Game Playing Minimax using DFS. Game Playing Summary So Far Game tree describes the possible sequences of play is a graph if we merge together identical states Minimax: utility values assigned to the leaves Values backed up the tree

More information

Today. Types of Game. Games and Search 1/18/2010. COMP210: Artificial Intelligence. Lecture 10. Game playing

Today. Types of Game. Games and Search 1/18/2010. COMP210: Artificial Intelligence. Lecture 10. Game playing COMP10: Artificial Intelligence Lecture 10. Game playing Trevor Bench-Capon Room 15, Ashton Building Today We will look at how search can be applied to playing games Types of Games Perfect play minimax

More information

Artificial Intelligence

Artificial Intelligence Artificial Intelligence CS482, CS682, MW 1 2:15, SEM 201, MS 227 Prerequisites: 302, 365 Instructor: Sushil Louis, sushil@cse.unr.edu, http://www.cse.unr.edu/~sushil Games and game trees Multi-agent systems

More information

COMP9414: Artificial Intelligence Adversarial Search

COMP9414: Artificial Intelligence Adversarial Search CMP9414, Wednesday 4 March, 004 CMP9414: Artificial Intelligence In many problems especially game playing you re are pitted against an opponent This means that certain operators are beyond your control

More information

Chess Algorithms Theory and Practice. Rune Djurhuus Chess Grandmaster / September 23, 2013

Chess Algorithms Theory and Practice. Rune Djurhuus Chess Grandmaster / September 23, 2013 Chess Algorithms Theory and Practice Rune Djurhuus Chess Grandmaster runed@ifi.uio.no / runedj@microsoft.com September 23, 2013 1 Content Complexity of a chess game History of computer chess Search trees

More information

A Quoridor-playing Agent

A Quoridor-playing Agent A Quoridor-playing Agent P.J.C. Mertens June 21, 2006 Abstract This paper deals with the construction of a Quoridor-playing software agent. Because Quoridor is a rather new game, research about the game

More information

Foundations of AI. 6. Adversarial Search. Search Strategies for Games, Games with Chance, State of the Art. Wolfram Burgard & Bernhard Nebel

Foundations of AI. 6. Adversarial Search. Search Strategies for Games, Games with Chance, State of the Art. Wolfram Burgard & Bernhard Nebel Foundations of AI 6. Adversarial Search Search Strategies for Games, Games with Chance, State of the Art Wolfram Burgard & Bernhard Nebel Contents Game Theory Board Games Minimax Search Alpha-Beta Search

More information

Games CSE 473. Kasparov Vs. Deep Junior August 2, 2003 Match ends in a 3 / 3 tie!

Games CSE 473. Kasparov Vs. Deep Junior August 2, 2003 Match ends in a 3 / 3 tie! Games CSE 473 Kasparov Vs. Deep Junior August 2, 2003 Match ends in a 3 / 3 tie! Games in AI In AI, games usually refers to deteristic, turntaking, two-player, zero-sum games of perfect information Deteristic:

More information

Adversarial Search Aka Games

Adversarial Search Aka Games Adversarial Search Aka Games Chapter 5 Some material adopted from notes by Charles R. Dyer, U of Wisconsin-Madison Overview Game playing State of the art and resources Framework Game trees Minimax Alpha-beta

More information

A Move Generating Algorithm for Hex Solvers

A Move Generating Algorithm for Hex Solvers A Move Generating Algorithm for Hex Solvers Rune Rasmussen, Frederic Maire, and Ross Hayward Faculty of Information Technology, Queensland University of Technology, Gardens Point Campus, GPO Box 2434,

More information

CS510 \ Lecture Ariel Stolerman

CS510 \ Lecture Ariel Stolerman CS510 \ Lecture04 2012-10-15 1 Ariel Stolerman Administration Assignment 2: just a programming assignment. Midterm: posted by next week (5), will cover: o Lectures o Readings A midterm review sheet will

More information

CS 380: ARTIFICIAL INTELLIGENCE MONTE CARLO SEARCH. Santiago Ontañón

CS 380: ARTIFICIAL INTELLIGENCE MONTE CARLO SEARCH. Santiago Ontañón CS 380: ARTIFICIAL INTELLIGENCE MONTE CARLO SEARCH Santiago Ontañón so367@drexel.edu Recall: Adversarial Search Idea: When there is only one agent in the world, we can solve problems using DFS, BFS, ID,

More information

CS188 Spring 2014 Section 3: Games

CS188 Spring 2014 Section 3: Games CS188 Spring 2014 Section 3: Games 1 Nearly Zero Sum Games The standard Minimax algorithm calculates worst-case values in a zero-sum two player game, i.e. a game in which for all terminal states s, the

More information

CS-E4800 Artificial Intelligence

CS-E4800 Artificial Intelligence CS-E4800 Artificial Intelligence Jussi Rintanen Department of Computer Science Aalto University March 9, 2017 Difficulties in Rational Collective Behavior Individual utility in conflict with collective

More information

Game Playing AI Class 8 Ch , 5.4.1, 5.5

Game Playing AI Class 8 Ch , 5.4.1, 5.5 Game Playing AI Class Ch. 5.-5., 5.4., 5.5 Bookkeeping HW Due 0/, :59pm Remaining CSP questions? Cynthia Matuszek CMSC 6 Based on slides by Marie desjardin, Francisco Iacobelli Today s Class Clear criteria

More information

Artificial Intelligence. Minimax and alpha-beta pruning

Artificial Intelligence. Minimax and alpha-beta pruning Artificial Intelligence Minimax and alpha-beta pruning In which we examine the problems that arise when we try to plan ahead to get the best result in a world that includes a hostile agent (other agent

More information

BayesChess: A computer chess program based on Bayesian networks

BayesChess: A computer chess program based on Bayesian networks BayesChess: A computer chess program based on Bayesian networks Antonio Fernández and Antonio Salmerón Department of Statistics and Applied Mathematics University of Almería Abstract In this paper we introduce

More information

DIT411/TIN175, Artificial Intelligence. Peter Ljunglöf. 2 February, 2018

DIT411/TIN175, Artificial Intelligence. Peter Ljunglöf. 2 February, 2018 DIT411/TIN175, Artificial Intelligence Chapters 4 5: Non-classical and adversarial search CHAPTERS 4 5: NON-CLASSICAL AND ADVERSARIAL SEARCH DIT411/TIN175, Artificial Intelligence Peter Ljunglöf 2 February,

More information

CS 331: Artificial Intelligence Adversarial Search II. Outline

CS 331: Artificial Intelligence Adversarial Search II. Outline CS 331: Artificial Intelligence Adversarial Search II 1 Outline 1. Evaluation Functions 2. State-of-the-art game playing programs 3. 2 player zero-sum finite stochastic games of perfect information 2 1

More information

Pengju

Pengju Introduction to AI Chapter05 Adversarial Search: Game Playing Pengju Ren@IAIR Outline Types of Games Formulation of games Perfect-Information Games Minimax and Negamax search α-β Pruning Pruning more Imperfect

More information

Game-playing AIs: Games and Adversarial Search FINAL SET (w/ pruning study examples) AIMA

Game-playing AIs: Games and Adversarial Search FINAL SET (w/ pruning study examples) AIMA Game-playing AIs: Games and Adversarial Search FINAL SET (w/ pruning study examples) AIMA 5.1-5.2 Games: Outline of Unit Part I: Games as Search Motivation Game-playing AI successes Game Trees Evaluation

More information

Games (adversarial search problems)

Games (adversarial search problems) Mustafa Jarrar: Lecture Notes on Games, Birzeit University, Palestine Fall Semester, 204 Artificial Intelligence Chapter 6 Games (adversarial search problems) Dr. Mustafa Jarrar Sina Institute, University

More information

Intuition Mini-Max 2

Intuition Mini-Max 2 Games Today Saying Deep Blue doesn t really think about chess is like saying an airplane doesn t really fly because it doesn t flap its wings. Drew McDermott I could feel I could smell a new kind of intelligence

More information

CS 229 Final Project: Using Reinforcement Learning to Play Othello

CS 229 Final Project: Using Reinforcement Learning to Play Othello CS 229 Final Project: Using Reinforcement Learning to Play Othello Kevin Fry Frank Zheng Xianming Li ID: kfry ID: fzheng ID: xmli 16 December 2016 Abstract We built an AI that learned to play Othello.

More information

Last update: March 9, Game playing. CMSC 421, Chapter 6. CMSC 421, Chapter 6 1

Last update: March 9, Game playing. CMSC 421, Chapter 6. CMSC 421, Chapter 6 1 Last update: March 9, 2010 Game playing CMSC 421, Chapter 6 CMSC 421, Chapter 6 1 Finite perfect-information zero-sum games Finite: finitely many agents, actions, states Perfect information: every agent

More information

Monte Carlo Go Has a Way to Go

Monte Carlo Go Has a Way to Go Haruhiro Yoshimoto Department of Information and Communication Engineering University of Tokyo, Japan hy@logos.ic.i.u-tokyo.ac.jp Monte Carlo Go Has a Way to Go Kazuki Yoshizoe Graduate School of Information

More information

Adversarial Search. Rob Platt Northeastern University. Some images and slides are used from: AIMA CS188 UC Berkeley

Adversarial Search. Rob Platt Northeastern University. Some images and slides are used from: AIMA CS188 UC Berkeley Adversarial Search Rob Platt Northeastern University Some images and slides are used from: AIMA CS188 UC Berkeley What is adversarial search? Adversarial search: planning used to play a game such as chess

More information

Game Mechanics Minesweeper is a game in which the player must correctly deduce the positions of

Game Mechanics Minesweeper is a game in which the player must correctly deduce the positions of Table of Contents Game Mechanics...2 Game Play...3 Game Strategy...4 Truth...4 Contrapositive... 5 Exhaustion...6 Burnout...8 Game Difficulty... 10 Experiment One... 12 Experiment Two...14 Experiment Three...16

More information

5.4 Imperfect, Real-Time Decisions

5.4 Imperfect, Real-Time Decisions 5.4 Imperfect, Real-Time Decisions Searching through the whole (pruned) game tree is too inefficient for any realistic game Moves must be made in a reasonable amount of time One has to cut off the generation

More information

Monte Carlo based battleship agent

Monte Carlo based battleship agent Monte Carlo based battleship agent Written by: Omer Haber, 313302010; Dror Sharf, 315357319 Introduction The game of battleship is a guessing game for two players which has been around for almost a century.

More information

Bootstrapping from Game Tree Search

Bootstrapping from Game Tree Search Joel Veness David Silver Will Uther Alan Blair University of New South Wales NICTA University of Alberta December 9, 2009 Presentation Overview Introduction Overview Game Tree Search Evaluation Functions

More information

2/5/17 ADVERSARIAL SEARCH. Today. Introduce adversarial games Minimax as an optimal strategy Alpha-beta pruning Real-time decision making

2/5/17 ADVERSARIAL SEARCH. Today. Introduce adversarial games Minimax as an optimal strategy Alpha-beta pruning Real-time decision making ADVERSARIAL SEARCH Today Introduce adversarial games Minimax as an optimal strategy Alpha-beta pruning Real-time decision making 1 Adversarial Games People like games! Games are fun, engaging, and hard-to-solve

More information

2048: An Autonomous Solver

2048: An Autonomous Solver 2048: An Autonomous Solver Final Project in Introduction to Artificial Intelligence ABSTRACT. Our goal in this project was to create an automatic solver for the wellknown game 2048 and to analyze how different

More information

Artificial Intelligence. Topic 5. Game playing

Artificial Intelligence. Topic 5. Game playing Artificial Intelligence Topic 5 Game playing broadening our world view dealing with incompleteness why play games? perfect decisions the Minimax algorithm dealing with resource limits evaluation functions

More information

Adversarial Search and Game Playing

Adversarial Search and Game Playing Games Adversarial Search and Game Playing Russell and Norvig, 3 rd edition, Ch. 5 Games: multi-agent environment q What do other agents do and how do they affect our success? q Cooperative vs. competitive

More information

Adversarial Search and Game Playing. Russell and Norvig: Chapter 5

Adversarial Search and Game Playing. Russell and Norvig: Chapter 5 Adversarial Search and Game Playing Russell and Norvig: Chapter 5 Typical case 2-person game Players alternate moves Zero-sum: one player s loss is the other s gain Perfect information: both players have

More information

Presentation Overview. Bootstrapping from Game Tree Search. Game Tree Search. Heuristic Evaluation Function

Presentation Overview. Bootstrapping from Game Tree Search. Game Tree Search. Heuristic Evaluation Function Presentation Bootstrapping from Joel Veness David Silver Will Uther Alan Blair University of New South Wales NICTA University of Alberta A new algorithm will be presented for learning heuristic evaluation

More information

Using Artificial intelligent to solve the game of 2048

Using Artificial intelligent to solve the game of 2048 Using Artificial intelligent to solve the game of 2048 Ho Shing Hin (20343288) WONG, Ngo Yin (20355097) Lam Ka Wing (20280151) Abstract The report presents the solver of the game 2048 base on artificial

More information

Techniques for Generating Sudoku Instances

Techniques for Generating Sudoku Instances Chapter Techniques for Generating Sudoku Instances Overview Sudoku puzzles become worldwide popular among many players in different intellectual levels. In this chapter, we are going to discuss different

More information

Microeconomics II Lecture 2: Backward induction and subgame perfection Karl Wärneryd Stockholm School of Economics November 2016

Microeconomics II Lecture 2: Backward induction and subgame perfection Karl Wärneryd Stockholm School of Economics November 2016 Microeconomics II Lecture 2: Backward induction and subgame perfection Karl Wärneryd Stockholm School of Economics November 2016 1 Games in extensive form So far, we have only considered games where players

More information

CS 297 Report Improving Chess Program Encoding Schemes. Supriya Basani

CS 297 Report Improving Chess Program Encoding Schemes. Supriya Basani CS 297 Report Improving Chess Program Encoding Schemes Supriya Basani (sbasani@yahoo.com) Advisor: Dr. Chris Pollett Department of Computer Science San Jose State University December 2006 Table of Contents:

More information

Game Playing. Philipp Koehn. 29 September 2015

Game Playing. Philipp Koehn. 29 September 2015 Game Playing Philipp Koehn 29 September 2015 Outline 1 Games Perfect play minimax decisions α β pruning Resource limits and approximate evaluation Games of chance Games of imperfect information 2 games

More information

Generalized Game Trees

Generalized Game Trees Generalized Game Trees Richard E. Korf Computer Science Department University of California, Los Angeles Los Angeles, Ca. 90024 Abstract We consider two generalizations of the standard two-player game

More information