Column Checkers: Brute Force against Cognition

Martijn Bosma

February 21, 2005

Abstract

The game Column Checkers is virtually unknown. It is not clear whether cognition and knowledge are needed to play the game well, and no good strategy has been established, since there does not seem to be anything like a Column Checkers community. In this project a brute-force computer model is built that plays the game with as little knowledge as possible. The model uses an iterative alpha-beta algorithm and a very simple evaluation function to select the best moves. It turns out that this is enough to beat a group of four human Column Checkers players, three of whom are very experienced and one a novice. Furthermore, the same strategies used by the human players emerge in the playing style of the model.

1 Introduction

When I was around ten years old I learned a curious game from my father. It is a kind of draughts, but the way one captures the pieces of one's opponent is very different. Captured checker pieces are not removed from the board but are kept as prisoners in columns. The owner of a column is the player who owns its top checker piece. The game contains so many forced moves that there are whole periods during which neither player has any choice of what move to play. On the other hand, there are periods where selecting the right move decides a loss or a win. I played this game many times with a couple of friends, but outside this circle of people nobody seemed to know the game. Therefore we could not really establish what a good strategy was, and we did not know the nature of the game. Is it just a simple mechanical game, like noughts and crosses, or does one need a lot of knowledge and cognition, as in chess? The main question of this project is: are cognition and knowledge needed to win Column Checkers? To answer this question I have built a computer model that plays the game the brute-force way [3]. In my model this means that the computer knows as little as possible about the game, but is allowed to search the many board states that result from the moves that can be made. If the computer is able to beat human players by searching a reasonable number of board states, then we have a clear indication that the game is mechanical. Another question I would like to answer is: are human players using the right strategies?

We can only answer this question by looking at the playing style of the computer in those games where it outperforms human players. This playing style is not based on knowledge coded into the model; it is the result of searching many board states. If this style were based on knowledge and cognition we would call it a strategy. So if our strategies and this playing style are similar, we have an indication that we are using strategies that work.

First I will explain the rules of Column Checkers in section 2. Then I will describe the model in section 4 and the main experiment in section 5. I will briefly discuss a test indicating that a larger search tree indeed leads to better play in section 6. Finally I will discuss the results and try to answer the research questions in section 7. A description of the program together with some screenshots can be found in appendix A.

2 The Game

Column Checkers appears to be a mix of Lasca [4] and international draughts (1). It is similar to draughts except for the way one captures the pieces of one's opponent, which is the same as in Lasca. I will explain the game first, using a text with a rule description of international draughts (2) that I have adapted to obtain the rules of Column Checkers. Then I will give an example of jumps (capture moves) to illustrate the character of the game. The way checkers are captured makes the events in Column Checkers very different from other types of draughts games. Since it is similar to Lasca in this respect, I will use examples from that game to illustrate them (3).

2.1 The Rules

Let us look at the rules of Column Checkers. I will describe the game using the seven rules listed below.

1. Column Checkers is played on the dark squares only of a checkerboard of 100 alternating dark and light squares (ten rows, ten files), by two opponents having 20 checkers each of contrasting colors, nominally referred to as black and white.

2. The board is positioned squarely between the players and turned so that a dark square is at each player's near left side. Each player places his checkers on the dark squares of the four rows nearest him. The player with the lighter checkers makes the first move of the game, and the players take turns thereafter, making one move at a time.

3. The object of the game is to prevent the opponent from being able to move when it is his turn to do so. This is accomplished either by capturing all of the opponent's checkers, or by blocking those that remain so that none of them can be moved. It is impossible for the game to end in a draw.

4. Checkers move forward only, one square at a time in a diagonal direction, to an unoccupied square. Checkers capture by jumping over an opposing checker on a diagonally adjacent square to the square immediately beyond, but may do so only if this square is unoccupied. Checkers may jump forward or backward, and may continue jumping as long as they encounter opposing checkers with unoccupied squares immediately beyond them. Checkers may never jump over checkers of the same color.

5. A captured checker is not taken from the board as in draughts. It is placed under the capturing checker instead, and can be released later in the game. During a jump the capturing checker is placed on top of the captured checker; then both checkers are moved to the destination square of the jump. The result is a column of two checkers with the attacking checker, the commander, on top. The column belongs to the player whose checker is on top, and it is moved in the same way as any other checker. The commander of a column can be captured as well. In this case the jumping checker is placed on top of the column; then only the commander of this column is captured, placed under the jumping checker, and moved to the destination square of the jump. Finally, a jumping unit can also be a column. Then the whole column is placed on top of the captured checker (or column), and again the checker or commander is captured and moved to the destination square of the jump. See the Jumps section below for an example.

6. A checker which reaches the far side of the board is not able to move anymore, since it is only permitted to move forward. If a player is not able to move any of his checkers, the board is turned 180 degrees. (What was backward is now forward, and vice versa.) The game then proceeds in the opposite direction. If the player is still not able to make a move, he has lost the game (see rule 3).

7. Whenever a player is able to make a capture he must do so. When there is more than one way to jump, a player must choose a sequence of jumps which results in the capture of the greatest possible number of opposing units.

(1) International draughts is called Polish draughts as well.
(2) The rules are taken from [5]. This website has a very clear rule description.
(3) There are clear examples on the internet of jumps in Lasca, with good figures. The example in the Jumps section is taken from [4].

2.2 Jumps

Now we look at an example of jumps, using figures from Lasca. It illustrates how jumps are done and how they can be used to liberate captured checkers. Remember that in this respect there is no difference between Lasca and Column Checkers.

Figure 1: Red has captured one of yellow's checkers, and it is yellow's turn. Yellow will be able to get it back. Jumping is compulsory, and therefore yellow moves a checker in front of the column owned by red.

We see in the example of Figures 1 to 4 that jumps can be used to force the opponent to make moves you want him to make. This is a very important tactic in Column Checkers.
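The column bookkeeping of rule 5 amounts to a simple stack operation. The following is an illustrative Python sketch, not the original program: the list representation (commander at index 0, prisoners below) and the function name are assumptions made for this example.

```python
# A column is a list of colors, commander first, prisoners below.
# Illustrative sketch only; the paper's program is written in Pascal.

def jump(attacker, target):
    """The attacking column jumps the target unit: only the target's
    commander is captured, and it slides under the whole attacking column."""
    commander = target.pop(0)      # top checker of the jumped unit
    attacker.append(commander)     # the prisoner goes to the bottom
    return attacker, target        # target may now be empty, or liberated

# Yellow jumps a red column that holds a captured yellow checker:
attacker = ["yellow"]
target = ["red", "yellow"]         # red commander, yellow prisoner
attacker, target = jump(attacker, target)
# attacker == ["yellow", "red"]; the leftover ["yellow"] column is
# liberated and belongs to yellow again.
```

This is exactly the liberation mechanism of Figures 1 to 4: capturing the commander of a mixed column frees the prisoners underneath it.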

Figure 2: It seems like yellow gives a checker away! Red is forced to jump.

Figure 3: Now the column is exactly at the right position. This is what yellow had in mind.

Figure 4: Yellow liberates two checkers and captures the red one.

3 Human Players

How do humans play Column Checkers? What knowledge do they use? There are no books on this game, nor are there professional players. The game is so unfamiliar that I have to rely on my own knowledge and that of a couple of friends who know the game very well. For the overall tactics Lasca is of no use here; the game is too different. I have discussed the game and its strategies with two of them, and furthermore I used my own knowledge to get an idea of what is going on when a human plays Column Checkers. Note that this is by no means an exhaustive introspection. It is just a way to get a general idea of Column Checkers strategies in humans.

There were three general findings we came up with in our discussion:

- Humans do not think many plies ahead.
- It is good to build big columns of your own color during the early stages of the game.
- Play in the direction of the center during the early stages of the game.

I will discuss them below.

3.1 Intuition

As in draughts, jumping is compulsory. Unlike draughts, the checkers remain on the board until the end. This means that the board remains pretty crowded, especially in the beginning, and therefore the game contains a lot of forced moves: many times a player has no choice but to make a certain jump move. The result is that it should be easy to predict the state of the board many plies ahead; at least easier than in draughts, since in that game the players have more moves to choose from every turn. It is known that professional draughts players think many plies ahead [6]. In the discussion about the game it turned out that nobody did this. Most of the time we did not even think two plies ahead! Instead we used a kind of intuition: a feeling for what is a good move, based on our experience. We probably use memories of images of board states we have encountered many times before [3]. This intuition, gained by experience, is a largely subconscious affair. Therefore we do not have much access to our own strategies. If the next states of the game are so easy to predict, why are we not calculating more plies ahead? An answer might lie in the radically changing patterns on the board. Although a path from board state A to board state B might be straightforward, visualizing the state change is a different story. When you see the game in state A it is often hard to imagine what state B will look like. The visual appearance of the game can change quickly and drastically. This makes it different from games such as chess and international draughts, where the visual appearance of the board state changes more gradually.
3.2 Give it away

Although we mainly trust our intuition, we could mention a couple of wise things to do when playing Column Checkers. One of them is to give checkers away when you know you have a good chance of liberating them later in the game. Try to force your opponent to capture many of your checkers, especially with the same column. In this way he will end up with a commander with many of your checkers underneath. When you liberate this treasure, you have a big column of your own kind. This liberated column makes a strong fighting tool. If you capture checkers of your opponent with this monster, he has to capture all of your checkers in the pile first to liberate his own. Since every jump only takes one checker off, this will take a long time, if he succeeds at all. Therefore it is wise to create big columns of checkers of your own color in the early stages of the game.

3.3 The Center

When your opponent has moved a checker or column to the side of the board, it is easy to block him there. A unit on the side of the board can only move in one direction, and if you block this direction, it is stuck there. When you are in this lucky situation, you can prepare a long jumping route for the blocked unit. This means that you arrange your checkers in such a way that the blocked column is forced to jump over all of them (4). When you have finished preparing this trap, you force the unit to capture all the checkers that make up this route. Of course you are ready to capture the commander of the resulting column, and in this way you obtain a big column consisting mainly of checkers of your own color. We have seen that creating big piles of one's own checkers is good. The lesson we can learn here is: do not move your checkers to the side of the board during the early stages of the game. Always play in the direction of the center.

4 The Model

The computer model knows the rules of the game as described in section 2.1. To determine what is a good move, the model uses its evaluation function. This is a function that assigns a value to every board state. In this way the model is able to compare moves and choose the one with the highest resulting value. Unlike humans, the model calculates many plies ahead and does not use much knowledge [3]. This type of behavior in computer models is called the brute-force strategy. The model looks at many board states and compares them in a simple and fast way. This brute-force search is performed by the alpha-beta algorithm. I will discuss the evaluation function first, and then the alpha-beta algorithm.

4.1 Evaluation Function

The objective of this project is to see whether brute force alone is enough to win Column Checkers, and therefore we try to give the model as little knowledge as possible. Obviously, no knowledge at all would lead to random decisions, due to the random selection of moves.
I have implemented this, and it leads to a model that loses the game in no time against a human player. A little knowledge is necessary to make brute-force search work (5).

One of the simplest evaluation functions is: count your free checkers and subtract the number of free checkers of your opponent. Free checkers are all the checkers that are not captured. Remember that each player plays with twenty checkers. If a player wins, all of his checkers are free and all the checkers of the opponent are captured. The evaluation would be 20 - 0 = 20 for the winner. For the loser this same board state would be 0 - 20 = -20. Therefore this evaluation function initially yields an evaluation range of [-20, 20]. If we want to improve the evaluation function, we will calculate more evaluation values based on other features of the game, and therefore the evaluation range will be enlarged. However, we want the model to recognize a win or a loss in all circumstances. Therefore we give board states that indicate a win or a loss a very high or very low marker value, to recognize these situations. We want to be sure that this marker value lies outside the evaluation range; otherwise the model could mistake another board state for a win or a loss. We choose 1000 and -1000 as marker values: winning is rewarded with value 1000 and losing is punished with value -1000.

Now imagine a situation where many moves lead to equally valued board states. This happens, for example, when white has to make the first move of the game. After any move both players have 20 free checkers on the board. The evaluation function will give a value of 20 - 20 = 0 for the board state that results from every move. (In this case we assume that the model only looks one ply ahead.) For the model it does not matter what move to make, but the computer has to make a choice. This choice problem has been solved by adding a little noise to the evaluation function. Noise here is the addition of a random number to the evaluation value. This gives the model the possibility to make a random choice when board states have the same evaluation value (6). On the other hand, we do not want the noise to interfere with differently valued board states. For example, we never want the model to choose a board state with value 9 if it can choose a board state with value 10. Therefore we multiply the result of the evaluation function by 10 and add noise in the range [0, 9]. In this way the differences in value due to different board states are always larger than differences due to the noise. The evaluation range is now [-209, 209]. See the pseudo code in Figure 5 for the details of the evaluation function.

Figure 5: Evaluation Function. r = random(10) : 0 <= r < 10

(4) Remember that according to rule 7 of section 2.1, jumping the longest possible sequence is compulsory.
(5) Comparing board states is of no use if the values attached to them are random!
(6) A computer is not able to generate genuine random numbers, and therefore "random" here means pseudo-random.
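The evaluation just described (free-checker difference, scaling by 10, noise in [0, 9], and the ±1000 markers) can be sketched in a few lines. This is a Python sketch rather than the original Pascal; the function name and the win/loss flags are assumptions, standing in for whatever the move generator reports.

```python
import random

# Marker values lie well outside the ordinary range, so a win or a
# loss can never be confused with a normal board state.
WIN_MARK, LOSS_MARK = 1000, -1000

def evaluate(my_free, opp_free, won=False, lost=False):
    """Free-checker difference, scaled by 10, plus noise in [0, 9]."""
    if won:
        return WIN_MARK
    if lost:
        return LOSS_MARK
    # Scaling by 10 keeps genuine value differences (multiples of 10)
    # larger than the tie-breaking noise.
    return (my_free - opp_free) * 10 + random.randrange(10)

# The opening position evaluates to bare noise: both sides have 20
# free checkers, so only the random tie-breaker differs per move.
```

The scaling trick is the whole design: equal positions are separated only by noise, so the model picks among them at random, while any real material difference always dominates the noise.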

4.2 Alpha-Beta Algorithm

The alpha-beta algorithm is a more efficient version of the well-known min-max algorithm, because it prunes branches of the search tree. For a detailed explanation of the min-max and alpha-beta algorithms I refer to chapter 5 of [2]. Here I will give a brief explanation by means of an example.

Figure 6: Example of alpha-beta pruning

The tree in Figure 6 is two plies deep. Max has to choose between two moves and is looking for the move with the highest value. For every move max can make, min in turn has to choose between two moves, and min will choose the move that results in the lowest value. A min-max algorithm will search the whole tree to find the best move. However, this is not always necessary, as we can see in the figure. Max investigates the left move first. In turn, min investigates its left move first as well, and finds a board state of value 5. Then min investigates the right move, with value 3. Min chooses the latter board state because it has the lowest value. So when max chooses the left move, the best value it can get is 3. Now max investigates the right move, and min the corresponding left move. Here the value of the board state is 2. We know that min always chooses the lowest value, and therefore, if max makes the right move, it can expect a value of 2 or lower. This is already worse than the value max can obtain with the left move. Therefore it is no longer necessary to investigate the last move of min in the branch on the right. This node of the tree is pruned by the alpha-beta algorithm.

If we take a closer look at Figure 6 we see that the ordering of the nodes of the tree has an effect on pruning. If, for example, the two last nodes (the pruned node and its sibling) are swapped, no pruning will occur. We conclude that the ordering of the moves matters: in the best case we want the moves of max to be ordered from high to low, because then most branches of the search tree are pruned.
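The pruning walk-through above can be reproduced in a few lines. This is an illustrative Python sketch, not the paper's Pascal program; the tree holds the four leaves of the example, with an arbitrary placeholder for the pruned leaf (its value never matters, which is the point of the pruning).

```python
# Alpha-beta on the two-ply example tree: left branch holds 5 and 3,
# right branch holds 2 and a placeholder for the pruned leaf.
visited = []   # leaves actually evaluated, to make the pruning visible

def alphabeta(node, alpha, beta, maximizing):
    if isinstance(node, int):          # leaf: evaluate it
        visited.append(node)
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                  # prune remaining siblings
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:
            break                      # prune remaining siblings
    return value

tree = [[5, 3], [2, 99]]               # 99 = the pruned leaf
best = alphabeta(tree, float("-inf"), float("inf"), True)
# best == 3, and the leaf 99 never appears in 'visited'.
```

Running this evaluates only the leaves 5, 3 and 2: once min finds the 2 in the right branch, max already knows that branch cannot beat the 3 it is guaranteed on the left.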
Here we seem to have a chicken-and-egg situation: we need the alpha-beta algorithm to obtain values for the different board states, but we need those values to make the alpha-beta algorithm work effectively. There is a version of the alpha-beta algorithm that solves this problem. It is called the iterative alpha-beta algorithm [1].

I have implemented the iterative alpha-beta algorithm, and in my implementation it works as follows. Before the player starts a game against my model, he can set the number of board states N that the model is allowed to search. When the model has to make a move, it starts by generating an alpha-beta tree two plies deep, and it counts the number of board states it has investigated, n. Now all the possible moves the model can make are labeled with evaluation values. If n < N, the model orders all the possible moves from high to low according to their evaluation value. Then the model generates an alpha-beta tree one ply deeper; in this case a tree three plies deep. According to van Diepen and van den Herik [1], this resulting three-ply search tree together with the two-ply tree is generally smaller than the three-ply search tree we would get without ordering the moves. In other words, this ordering of nodes prunes so many branches of the search tree that it is worth searching twice (7). The number of board states n of this tree is counted and every move is labeled with its evaluation value. Again, if n < N, all the possible moves are ordered and the model searches one ply deeper. This process continues until n >= N, at which point the move with the highest evaluation value is chosen.

5 Human Players: Experiment and Results

Now it is time to put my model to the test. Four people will each play four games against my model; I will be one of the subjects. The human player plays with white and starts the game. In every game the search space of the iterative alpha-beta algorithm is set to a fixed number of board states: 2000 board states in the first game every subject plays, 4000 in the second, 6000 in the third, and finally 8000 in the fourth. Between the games the players get a break of half an hour. There are no time limits.
This means that the computer and the human players can use as much time as they need. (Generally the computer is much faster than the humans.) One person, Bob, is less experienced: he learned the game a couple of months before this test and has not played it that much (a rough estimate is 10 to 20 times). The others, Manso, Daan and Martijn, have known the game for years and have played it certainly more than 100 times. I have to mention that due to this project my skills have increased a bit and I have become the best player of the group.

The results of these four people playing against the model can be found in Table 2. The players played every level only once, so we cannot say that these results would be the same if we repeated the test. For example, I have played the 6000-node condition more times, and there were many times I lost as well. Another point is that the model always played with black (white starts the game). To look at the effects of starting with black or white I should have tested every level more times to see if it would make any difference; however, my experience is that it does not matter much. Finally, the test group is small. It is possible that these subjects do not represent the way people generally play Column Checkers. Having said this, the results displayed in this table indicate that the model plays better when the number of board states is increased. Nobody in this test group beats the model in the 8000 board-states condition.

(7) I refer to their book Schaken voor Computers for details of the effects of the algorithm [1].

search space  2000  4000  6000  8000
Bob           win   lose  lose  lose
Manso         win   win   lose  lose
Daan          win   win   lose  lose
Martijn       win   win   win   lose

Table 2: Results. The numbers in the header indicate the number of board states the model is allowed to search for every move; "win" and "lose" give the outcome from the human subject's perspective.

6 Search Trees

In the experiment where human players play against the model, which is allowed to search more and more board states, we get the impression that a larger search space is beneficial: the model seems to play better when its search space is increased. In this section we let the model play against itself to get an indication of whether this is true. Since the model plays against itself, we get results independent of human playing skills. We generate a competition between different versions of the model, the difference being obtained by varying the number of board states systematically. We investigate models with search trees with maximum sizes of 4000, 8000, 12000, 16000, and 20000 board states. There is a white and a black player for every board-states category, and every player of one color plays ten times against each player of the other color. The results are displayed in Figure 7. The X and Y axes represent the different players; the numbers indicate the number of permitted board states. Every crossing on the XY plane in this 3D graph represents 10 matches. The Z axis represents the points obtained by the white player. A win counts for two points, a loss for zero, and a draw for one. In matches between human players a draw is very uncommon, but we do not know whether this is so in a machine-against-machine game. Therefore we abort any game that lasts longer than 1000 moves. A situation where both players are of equal strength is one where white has ten points, which implicitly means that black gained the other ten points. When we look at Figure 7 we see that white gains more than ten points when it is allowed to search more board states than black.
It gains fewer than ten points in those conditions where it is allowed to search fewer board states than black. On average white gains slightly more than 50% of the points. This can be due to an advantage white has because it starts the game, but it can be noise as well; the number of games played in this test is too small to explain such a small difference. The main tendency of the graph, however, shows that searching more board states leads to better performance.

When we ran this test we also looked at the average number of moves played per game for every condition. Now that we know that searching more board states leads to better performance, we expect that a game between artificial players where one player is allowed to search more board states than the other is finished in fewer moves. This is the case when humans play the game: if one player is substantially better, the game is finished quickly, and the number of moves needed to win is low. On the other hand, when two players are playing at the same level, the game can last many moves. The result is depicted in Figure 8. Again the white and the black players are set out on the X and Y axes; the Z axis represents the average number of moves played in the ten games. We expect a diagonal ridge where both players are allowed to search the same number of board states, and a valley where the difference between both players is large. This hypothesis cannot be confirmed based on the results of this test. We see a hill in the [black = 4000, white = 8000] and [black = 4000, white = 12000] conditions; for the rest it is hard to find a pattern in this graph. We can conclude that in these tests, players of equal strength do not play longer games.

Figure 7: The number of points obtained by white in ten games, for every condition.

Figure 8: The average number of moves for every search-depth condition.

7 Discussion and Conclusion

The results in Table 2 indicate that the alpha-beta algorithm combined with a simple evaluation function can beat all the human players when it is allowed to search 8000 board states. For a computer program, searching 8000 board states is not really much. The model performs so well because the branching factor in the search tree is very low, due to the forced moves (jumps) that occur all the time. This enables the model to search many plies ahead. Unlike humans, the computer does not have a problem with visualizing the patterns of future board states. When the model played against itself we found that searching more board states indeed leads to better performance.

What is more interesting is the fact that the strategies discussed earlier appear in the behavior of the model as well. Like human players, the model tries to force its opponent to capture many checkers and then liberates the resulting column, to obtain a strong fighting unit. As humans playing the game, we sometimes give away too many checkers and are not able to get them back; in that case we have taken too many risks. This is a mistake the model never makes. If the model gives away a lot of checkers, it knows that it is able to get them back: it has simply seen it in one branch of its search tree. This behavior becomes stronger when we allow the computer to search more board states. Playing your checkers toward the center during the first stage of the game is the second behavior that emerges out of the brute-force algorithm. Once again, something we feel is right based on intuition gained by experience is found by brute force. To get an indication of what superb play looks like, I have increased the search space of the model well beyond human performance levels. When the search space is increased that far, the model makes moves that I, as an experienced Column Checkers player, cannot understand anymore. Often it turns out that many plies after such a strange move, one of its checkers was set at exactly the right position to win the game.
The conclusion is that our skill at playing Column Checkers is so low that a simple brute-force algorithm, using only a little knowledge, is able to beat us easily. Even if humans improve a lot, the model will still be superior. We can say that cognition has lost this battle. On the other hand, our cognition saves us a lot of calculation of board states. Where the model needs to search through thousands of board states, we only need to look at a fraction of them to come up with the same tactics. Note the 2000 and 4000 board-state conditions of Table 2, where we humans beat the model. Here the model investigates definitely more than 1000 times as many board states as we do, and it is not able to win. We human beings can be proud of our intuition gained by experience.

A The Program

The Column Checkers program is written in Pascal and is called alphabeta.exe. It can be run by double-clicking on the executable. The language used in the program is Dutch. The program starts by displaying a simple representation of the board. The human player plays with white, and the computer player plays with blue; these colors were chosen to obtain a good contrast on the screen. White starts the game. The checker pieces are represented as numbers. Every column has two numbers separated by a horizontal stripe. The left number represents the number of pieces of the owner of the column; the right number represents the number of conquered pieces. The start of the game is the same as in draughts and is illustrated in Figure 9. We see that both players have twenty checker pieces, all represented by a 1 on the left and a 0 on the right. This means that every player has twenty columns of height 1 with no prisoners.

Figure 9: Start Screen

Below the board we see two lines of text with numbers behind them. The first line, "Geef de minimale zoekdiepte op:", means: fill out the minimal search depth. The second line, "Geef de knopengrens op:", means: fill out how many board states are permitted. Here the user can control the search depth of the algorithm. This can be done in two ways: by filling out how many plies the program is allowed to search, or by filling out how many board states, nmaxboardstates, the program is allowed to search. These two ways of setting the search space work together. In the first line the user sets the minimum number of plies. This means that the program will search this number of plies first; in the meantime it counts the number of searched board states, sboardstates. After the minimal number of plies is searched, the program checks whether sboardstates >= nmaxboardstates. If this is not the case it searches on. From then on the program checks sboardstates >= nmaxboardstates after every newly searched board state. When sboardstates >= nmaxboardstates, the algorithm returns the results of the last fully searched ply.
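This two-knob search control (minimal depth plus node bound) can be sketched as follows. This is a simplified Python sketch, not the Pascal original: it checks the budget only between plies rather than after every board state, and search_ply is an assumed stand-in for one full alpha-beta pass at a given depth, returning (value, move) pairs and the number of board states it visited.

```python
# Sketch of the iterative search control: deepen one ply at a time
# starting from the minimal depth, stop once the node budget is spent,
# and report the best move of the last fully searched ply.
# Simplified from the description above; names are illustrative.

def choose_move(search_ply, min_depth, nmaxboardstates):
    sboardstates = 0
    depth = min_depth
    while True:
        scored_moves, nodes = search_ply(depth)   # one full ply of search
        sboardstates += nodes
        best = max(scored_moves)                  # (value, move) pairs
        if sboardstates >= nmaxboardstates:
            return best                           # budget spent: stop here
        depth += 1                                # otherwise deepen by one ply
```

Returning the result of a fully searched ply has the nice property that all candidate moves are compared at the same depth, rather than mixing fully and partially searched moves.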

14 Below these two lines of text we see a list of all legal moves the human player is allowed to do. The moves are displayed in draughts notation [6]. In draughts notation all 50 black squares of the board are numbered, from 1 to 50. Below this list we see the sentence Je kan 0 schijven slaan. This means: You can capture 0 checkers. It tells the user the length of the longest possible jump. Finally we see the sentence Vul een zet in: on the bottom of the screen. Here the user is asked to pick a move from the list of moves and to fill out the number behind the sentence. This is the way to play the game. Figure 10: The game after a first move of the human player. In the screen shot of Figure 10 we see that the user has done the first move. It has selected move number 5. Below the board we see again two lines of text, which display the results of the evaluation function. Line 1 means: The value for white is: 0 and line 2 means: The value for black is: 0. In the screen representation blue stands for the black player. Both players have twenty free checkers so they are valued equally. We do not display the noise that is added to the evaluation function since this distracts the user from the main evaluation results. The noise 14

is only important for the program to choose among equally valued moves. The last sentence on this screen shot tells the user to press 0 (zero) and Enter to leave the game, or another digit key and Enter to continue.

Figure 11: After searching at depth n of the search tree, the alpha-beta algorithm returns its temporary results, which are displayed on the screen. This process continues until sboardstates ≥ nmaxboardstates.

Figure 11 shows a part of the output of the iterative alpha-beta algorithm. Diepte means depth, and the number n behind it is the current search depth. Below it are all the possible moves with their evaluation values at depth n. For every depth, nmaxboardstates and sboardstates are displayed, together with the difference between the two. As long as sboardstates < nmaxboardstates, the algorithm searches one ply deeper. We see the advantage of searching deeper when we look at move 2 ( Zet 2 ). At depth 2 the evaluation function predicts a balance between the two players. At depth 3 this balance turns into an advantage for the computer player, but at depth 4 the table is turned. Now the

evaluation function predicts an advantage for the human player. In such cases the evaluations coming from the deepest branches of the search tree are the most reliable, since the algorithm has seen more of the game. We can compare this with a human player who is allowed to move the checkers around to check the effects of a move.

Figure 12: Example of a moment in a match. nmaxboardstates is set to

Finally we take a look at the last screen shot, in Figure 12. This is an example of what the game can look like at one third of a match. In this match nmaxboardstates is set to . The human player has to move, and we see that the list of moves has only one element: he has to jump, since jumping is compulsory. The human player has captured one checker of the computer player, while the latter has caught three. The computer player has managed to create a column of height 4 and one of height 3, while the highest column of the human player is only 2 checkers high. Although the human player is making up for his disadvantage by capturing two checkers of the computer player, this does not last long. The

computer player has built a little trap here: it is able to capture all the gained pieces back. 8

8 I leave this as an exercise for the interested reader.
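The columns that dominate this position can be modeled as simple stacks. Below is a minimal sketch, assuming single-character pieces and a Lasca-style stacking rule in which the captured piece goes to the bottom of the capturing column; the exact stacking order is an assumption for illustration, not taken from the program.

```python
def owner(column):
    """A column is a list of pieces from bottom to top; its owner is
    the player of the top piece."""
    return column[-1]

def capture(capturing, captured):
    """The jumped column's top piece is taken prisoner and kept in the
    capturing column instead of being removed from the board.  Placing
    the prisoner at the bottom is an assumed, Lasca-style rule."""
    prisoner = captured.pop()
    capturing.insert(0, prisoner)
```

For example, a column ['B', 'W'] is owned by white; after a white column captures a lone black piece, that piece stays on the board as a prisoner but black no longer controls it.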

References

[1] van Diepen, P. and van den Herik, J., Schaken voor Computers. Schoonhoven: Academic Service, 1987.
[2] Russell, S. and Norvig, P., Artificial Intelligence: A Modern Approach. Upper Saddle River, New Jersey: Prentice-Hall, 1995.
[3] Schaeffer, J., The Role of Games in Understanding Computational Intelligence. IEEE Intelligent Systems, pp. , November/December 1999.
[4] Website: lascaabout.htm
[5] Website: rules eng.html
[6] Wiersma, H., Dammen in Opbouw. Baarn: Tirion,


More information

CS 188: Artificial Intelligence

CS 188: Artificial Intelligence CS 188: Artificial Intelligence Adversarial Search Instructor: Stuart Russell University of California, Berkeley Game Playing State-of-the-Art Checkers: 1950: First computer player. 1959: Samuel s self-taught

More information

Comparing Methods for Solving Kuromasu Puzzles

Comparing Methods for Solving Kuromasu Puzzles Comparing Methods for Solving Kuromasu Puzzles Leiden Institute of Advanced Computer Science Bachelor Project Report Tim van Meurs Abstract The goal of this bachelor thesis is to examine different methods

More information

Mind Ninja The Game of Boundless Forms

Mind Ninja The Game of Boundless Forms Mind Ninja The Game of Boundless Forms Nick Bentley 2007-2008. email: nickobento@gmail.com Overview Mind Ninja is a deep board game for two players. It is 2007 winner of the prestigious international board

More information

Deep Green. System for real-time tracking and playing the board game Reversi. Final Project Submitted by: Nadav Erell

Deep Green. System for real-time tracking and playing the board game Reversi. Final Project Submitted by: Nadav Erell Deep Green System for real-time tracking and playing the board game Reversi Final Project Submitted by: Nadav Erell Introduction to Computational and Biological Vision Department of Computer Science, Ben-Gurion

More information

CS 380: ARTIFICIAL INTELLIGENCE MONTE CARLO SEARCH. Santiago Ontañón

CS 380: ARTIFICIAL INTELLIGENCE MONTE CARLO SEARCH. Santiago Ontañón CS 380: ARTIFICIAL INTELLIGENCE MONTE CARLO SEARCH Santiago Ontañón so367@drexel.edu Recall: Adversarial Search Idea: When there is only one agent in the world, we can solve problems using DFS, BFS, ID,

More information

Adversarial search (game playing)

Adversarial search (game playing) Adversarial search (game playing) References Russell and Norvig, Artificial Intelligence: A modern approach, 2nd ed. Prentice Hall, 2003 Nilsson, Artificial intelligence: A New synthesis. McGraw Hill,

More information

Adversarial Search. Soleymani. Artificial Intelligence: A Modern Approach, 3 rd Edition, Chapter 5

Adversarial Search. Soleymani. Artificial Intelligence: A Modern Approach, 3 rd Edition, Chapter 5 Adversarial Search CE417: Introduction to Artificial Intelligence Sharif University of Technology Spring 2017 Soleymani Artificial Intelligence: A Modern Approach, 3 rd Edition, Chapter 5 Outline Game

More information

PROBABILITY M.K. HOME TUITION. Mathematics Revision Guides. Level: GCSE Foundation Tier

PROBABILITY M.K. HOME TUITION. Mathematics Revision Guides. Level: GCSE Foundation Tier Mathematics Revision Guides Probability Page 1 of 18 M.K. HOME TUITION Mathematics Revision Guides Level: GCSE Foundation Tier PROBABILITY Version: 2.1 Date: 08-10-2015 Mathematics Revision Guides Probability

More information

37 Game Theory. Bebe b1 b2 b3. a Abe a a A Two-Person Zero-Sum Game

37 Game Theory. Bebe b1 b2 b3. a Abe a a A Two-Person Zero-Sum Game 37 Game Theory Game theory is one of the most interesting topics of discrete mathematics. The principal theorem of game theory is sublime and wonderful. We will merely assume this theorem and use it to

More information

2 Textual Input Language. 1.1 Notation. Project #2 2

2 Textual Input Language. 1.1 Notation. Project #2 2 CS61B, Fall 2015 Project #2: Lines of Action P. N. Hilfinger Due: Tuesday, 17 November 2015 at 2400 1 Background and Rules Lines of Action is a board game invented by Claude Soucie. It is played on a checkerboard

More information

Adversarial Reasoning: Sampling-Based Search with the UCT algorithm. Joint work with Raghuram Ramanujan and Ashish Sabharwal

Adversarial Reasoning: Sampling-Based Search with the UCT algorithm. Joint work with Raghuram Ramanujan and Ashish Sabharwal Adversarial Reasoning: Sampling-Based Search with the UCT algorithm Joint work with Raghuram Ramanujan and Ashish Sabharwal Upper Confidence bounds for Trees (UCT) n The UCT algorithm (Kocsis and Szepesvari,

More information