Games vs. search problems


1 Games vs. search problems
Unpredictable opponent ⇒ solution is a strategy specifying a move for every possible opponent reply
Time limits ⇒ unlikely to find goal, must approximate
Plan of attack:
Computer considers possible lines of play (Babbage, 1846)
Algorithm for perfect play (Zermelo, 1912; Von Neumann, 1944)
Finite horizon, approximate evaluation (Zuse, 1945; Wiener, 1948; Shannon, 1950)
First chess program (Turing, 1951)
Machine learning to improve evaluation accuracy (Samuel, 1952-57)
Pruning to allow deeper search (McCarthy, 1956)
Chapter 6

2 Types of games

                        deterministic               chance
perfect information     chess, checkers,            backgammon,
                        go, othello                 monopoly
imperfect information   battleships,                bridge, poker, scrabble,
                        blind tictactoe             nuclear war

3 Game tree (2-player, deterministic, turns)
[Figure: partial tic-tac-toe game tree. Levels alternate between MAX (X) and MIN (O), from the empty board at the root down to terminal boards, each labelled with its utility.]

4 Minimax
Perfect play for deterministic, perfect-information games.
Idea: choose the move to the position with the highest minimax value = best achievable payoff against best play.
E.g., 2-ply game:
[Figure: two-ply game tree. MAX moves a1, a2, a3 lead to MIN nodes, whose moves a11 ... a33 lead to the leaf utilities; the root's minimax value is 3.]

5 Minimax algorithm

function Minimax-Decision(state) returns an action
   inputs: state, current state in game
   return the a in Actions(state) maximizing Min-Value(Result(a, state))

function Max-Value(state) returns a utility value
   if Terminal-Test(state) then return Utility(state)
   v ← -∞
   for a, s in Successors(state) do v ← Max(v, Min-Value(s))
   return v

function Min-Value(state) returns a utility value
   if Terminal-Test(state) then return Utility(state)
   v ← +∞
   for a, s in Successors(state) do v ← Min(v, Max-Value(s))
   return v
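The pseudocode above translates almost line for line into Python. A minimal sketch, not a production implementation: the `game` object with `actions`, `result`, `terminal_test`, and `utility` methods is a hypothetical interface mirroring the pseudocode's Actions, Result, Terminal-Test, and Utility.

```python
import math

def minimax_decision(state, game):
    """Choose the action whose resulting state has the highest Min-Value."""
    return max(game.actions(state),
               key=lambda a: min_value(game.result(state, a), game))

def max_value(state, game):
    """Value of a state when it is MAX's turn to move."""
    if game.terminal_test(state):
        return game.utility(state)
    v = -math.inf
    for a in game.actions(state):
        v = max(v, min_value(game.result(state, a), game))
    return v

def min_value(state, game):
    """Value of a state when it is MIN's turn to move."""
    if game.terminal_test(state):
        return game.utility(state)
    v = math.inf
    for a in game.actions(state):
        v = min(v, max_value(game.result(state, a), game))
    return v
```

On a toy two-ply tree, `minimax_decision` picks the move whose worst-case reply is best, exactly as the slide describes.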

6 Properties of minimax
Complete??

7 Properties of minimax
Complete?? Only if tree is finite (chess has specific rules for this). NB a finite strategy can exist even in an infinite tree!
Optimal??

8 Properties of minimax
Complete?? Yes, if tree is finite (chess has specific rules for this)
Optimal?? Yes, against an optimal opponent. Otherwise??
Time complexity??

9 Properties of minimax
Complete?? Yes, if tree is finite (chess has specific rules for this)
Optimal?? Yes, against an optimal opponent. Otherwise??
Time complexity?? O(b^m)
Space complexity??

10 Properties of minimax
Complete?? Yes, if tree is finite (chess has specific rules for this)
Optimal?? Yes, against an optimal opponent. Otherwise??
Time complexity?? O(b^m)
Space complexity?? O(bm) (depth-first exploration)
For chess, b ≈ 35, m ≈ 100 for reasonable games ⇒ exact solution completely infeasible
But do we need to explore every path?
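The infeasibility claim is easy to check with a back-of-envelope computation using the b and m figures above:

```python
b, m = 35, 100

time_nodes = b ** m    # O(b^m): leaves of the full game tree
space_nodes = b * m    # O(bm): nodes held by depth-first search

# 35^100 has 155 digits, i.e. roughly 10^154 nodes to examine,
# versus only 3500 nodes of memory for the depth-first frontier.
print(len(str(time_nodes)), space_nodes)
```

The gap between roughly 10^154 of time and 3500 of space is why the question "do we need to explore every path?" matters.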

11 Imperfect Decisions
In complex games there is not enough time to generate the entire search tree.
Can modify the minimax strategy by changing the utility function to an evaluation function (a heuristic) and cutting the search off before the terminal states are reached, using a Cutoff-Test.
One kind of evaluation function is a weighted linear function
   w1*f1 + w2*f2 + w3*f3 + ... + wn*fn
where the w's are weights and the f's are features of the particular position.
Non-linear functions can also be used, but they are harder to develop (may be learned?).
Cut-offs may be simple (such as a depth limit) or use iterative deepening to go as far down the search tree as time allows.
In general, simple strategies for cut-off have problems.
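The weighted linear form can be sketched in a few lines of Python. The chess-like feature names and weight values below are invented purely for illustration, not taken from any real engine:

```python
def linear_eval(features, weights):
    """Weighted linear evaluation: w1*f1 + w2*f2 + ... + wn*fn."""
    return sum(w * f for w, f in zip(weights, features))

# Hypothetical position features: material balance, mobility
# difference, king exposure (illustrative numbers only).
weights  = [9.0, 0.1, 0.5]
features = [1, 12, -2]
score = linear_eval(features, weights)   # 9.0 + 1.2 - 1.0
```

The appeal of the linear form is that the weights are easy to tune (or learn) independently; its weakness, as the slide notes, is that it cannot capture interactions between features without adding non-linear terms.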

12 Using an evaluation function in Tic-Tac-Toe:
[Figure lost in transcription]

13 Depth-First Version of MINIMAX
Search proceeds recursively from left to right in a depth-first fashion. To determine the minimax value V(J):
1. If J is terminal, return V(J) = e(J); otherwise
2. Generate J's successors J1, J2, J3, ..., Jb.
3. Evaluate V(J1), V(J2), V(J3), ..., V(Jb) from left to right.
4. If J is a MAX node, return V(J) = max[V(J1), V(J2), V(J3), ..., V(Jb)].
5. If J is a MIN node, return V(J) = min[V(J1), V(J2), V(J3), ..., V(Jb)].
There is no need to generate all successors at once and keep them in storage until all are evaluated. Can do this in a backtracking style too and avoid all the storage costs.

Backtracking Version of MINIMAX
To determine the minimax value V(J):
1. If J is terminal, return V(J) = e(J); otherwise
2. For k = 1, 2, ..., b do:
   a. Generate Jk, the kth successor of J.
   b. Evaluate V(Jk).
   c. If k = 1, set CV(J) to V(J1); otherwise, for k >= 2, set CV(J) to max[CV(J), V(Jk)] if J is MAX, or set CV(J) to min[CV(J), V(Jk)] if J is MIN.
3. Return V(J) = CV(J).
CV(J) represents the current value of the node J and is updated each time a child node is evaluated. In both versions, the evaluation of a node is not complete until all of its successors have been evaluated.

14 Alpha-Beta Pruning
Alpha-beta pruning modifies a minimax search so that not all branches need be examined (intuition: as soon as a branch is found to lead to disaster, it is no longer explored).
Let α be the value of the best (largest) choice found so far along the path for MAX, and β the best (smallest) choice found so far along the path for MIN. A sub-tree is pruned as soon as it is determined to be worse than the current α or β value.
Note that the current value of a MAX node can never decrease (because we always seek the maximum of its successors) and that of a MIN node can never increase (because we always seek the minimum of its successors).
Branches are cut off according to dynamically adjusted bounds:
1. The α-bound: the cutoff for a MIN node J is a lower bound called α, equal to the highest current value of all MAX ancestors of J. The exploration of J can be terminated as soon as its current value CV equals or falls below α.
2. The β-bound: the cutoff for a MAX node J is an upper bound called β, equal to the lowest current value of all MIN ancestors of J. The exploration of J can be terminated as soon as its current value CV equals or rises above β.

15 The recursive algorithm for pruning and bound-updating is a procedure V(J; α, β).
α and β are two parameters, α < β; they are set to the highest current value of all MAX ancestors of J and the lowest current value of all MIN ancestors of J, respectively.
The procedure returns V(J), the minimax value of J, if it lies between α and β; otherwise it returns α (if V(J) <= α) or β (if V(J) >= β). If J is the root of a game tree, its minimax value is obtained by V(J; -∞, +∞).

V(J; α, β):
1. If J is terminal, return V(J) = e(J). Otherwise let J1, J2, J3, ..., Jb be the successors of J and set k to 1; if J is a MAX node, go to step 2, else go to step 2'.
2. Set α to max[α, V(Jk; α, β)].          2'. Set β to min[β, V(Jk; α, β)].
3. If α >= β, return β; else continue.    3'. If β <= α, return α; else continue.
4. If k = b, return α;                    4'. If k = b, return β;
   else set k to k+1 and go to step 2.        else set k to k+1 and go to step 2'.

Performance depends on the ordering of the successor nodes (if the largest-value node is checked first, for example, then you are done; of course, you never really know unless the expansion actually includes a method for achieving the optimal ordering). Pruning occurs at steps 3 and 3' by abandoning some set of successors of J.
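The stepwise procedure above can be sketched in Python. This is a minimal illustration using the same hypothetical `game` interface (`actions`, `result`, `terminal_test`, `utility`) as the earlier minimax pseudocode; the cutoffs correspond to steps 3 and 3':

```python
import math

def alphabeta(state, game, alpha=-math.inf, beta=math.inf, maximizing=True):
    """Return the minimax value of `state`, pruning sub-trees that
    cannot affect the result (cut off as soon as alpha >= beta)."""
    if game.terminal_test(state):
        return game.utility(state)
    if maximizing:
        v = -math.inf
        for a in game.actions(state):
            v = max(v, alphabeta(game.result(state, a), game, alpha, beta, False))
            alpha = max(alpha, v)
            if alpha >= beta:   # step 3: a MIN ancestor will never allow this line
                break
        return v
    else:
        v = math.inf
        for a in game.actions(state):
            v = min(v, alphabeta(game.result(state, a), game, alpha, beta, True))
            beta = min(beta, v)
            if beta <= alpha:   # step 3': a MAX ancestor already has something better
                break
        return v
```

Calling `alphabeta(root, game)` gives the same value as plain minimax, but in a tree where the first MIN branch evaluates to 3 and the second branch's first leaf is 2, the second branch's remaining leaves are never generated.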

16 Properties of α-β
Pruning does not affect the final result.
Good move ordering improves the effectiveness of pruning.
With perfect ordering, time complexity = O(b^(m/2)) ⇒ doubles the solvable depth.
A simple example of the value of reasoning about which computations are relevant (a form of metareasoning).
Unfortunately, 35^50 is still impossible!

17 Expected Values in Games of Chance
Suppose that dice are involved in a game; how can we deal with this?
A roll of the dice determines the set of legal moves possible for a given turn.
Can extend the game tree already developed by including chance nodes in addition to MAX and MIN nodes.

18 A chance node represents a possible roll of the dice. For a normal pair of dice there are 21 unique rolls (in backgammon, 5-6 is the same as 6-5). The probability of any given double is 1/36, and the probability of each of the other 15 rolls is 1/18.
Each chance node has branches leading from it that represent the possible moves for that particular roll. For example, if White rolls 3 and 5, there are two reasonable moves (and a few unreasonable ones too).
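The count of 21 rolls and the 1/36 vs 1/18 probabilities can be checked directly; a small sketch using exact fractions:

```python
from itertools import combinations_with_replacement
from fractions import Fraction

# The 21 distinct rolls of two dice (5-6 counts the same as 6-5).
rolls = list(combinations_with_replacement(range(1, 7), 2))

def prob(roll):
    """Doubles occur one way in 36; the other rolls occur two ways."""
    a, b = roll
    return Fraction(1, 36) if a == b else Fraction(1, 18)

total = sum(prob(r) for r in rolls)
print(len(rolls), total)   # prints 21 1: the probabilities sum to one
```

Six doubles at 1/36 plus fifteen mixed rolls at 1/18 gives 6/36 + 30/36 = 1, confirming the distribution over chance-node branches is complete.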

19 Utility values are set by the following formula (representing the score for a full backgammon game):
   stake × [straight loss (1), gammon (2), or backgammon (3)] × [doubling-cube value (1, 2, 4, 8, 16, 32, or 64)] = total score
Stake and doubling are ignored for the rest of this discussion, so the utility value is one of -3, -2, -1, 1, 2, 3.
Recall that for a minimax strategy the game was deterministic: the minimax value of a particular node is fully determined by the utility values of the leaf nodes of the game tree. Here, one can only compute an expected value over all possible rolls. To compute this expected value, terminal nodes still need a utility function to assign a value to the board position.
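The scoring formula is a straight product, which a few lines make concrete (the function name is invented for illustration):

```python
def game_score(stake, result, cube):
    """stake x result multiplier (1 = straight loss, 2 = gammon,
    3 = backgammon) x doubling-cube value."""
    assert result in (1, 2, 3)
    assert cube in (1, 2, 4, 8, 16, 32, 64)
    return stake * result * cube

# A gammon with the cube at 4 scores stake * 2 * 4.
print(game_score(1, 2, 4))   # prints 8
```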

20 Say a particular chance node C is to be evaluated, a chance node whose successors are MAX nodes. Let d_i be the possible rolls of the dice, P(d_i) the probability of obtaining roll d_i, and S(C, d_i) the set of positions generated by applying the legal moves for dice roll d_i at position C. Then

   expectimax(C) = Σ_i P(d_i) · max_{s ∈ S(C, d_i)} utility(s)

What is this really doing? The expected value of a position is the weighted sum of the utilities, where each weight is the probability of that utility. If the chance node to be evaluated has MIN successors, the corresponding formula is

   expectimin(C) = Σ_i P(d_i) · min_{s ∈ S(C, d_i)} utility(s)

Using the usual minimax algorithm as a foundation, the expected-value version of minimax includes these modifications: the expectimax formula is not applied at every level. Starting from the terminal nodes and moving upwards, max and min are applied at the MAX and MIN levels as before, while expectimax and expectimin are applied at chance nodes whose successors are MAX and MIN nodes, respectively.
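The two formulas translate almost literally into code. A minimal sketch; the `dice` list of (roll, probability) pairs and the `game.moves`/`game.utility` interface are hypothetical names chosen to mirror d_i, P(d_i), and S(C, d_i):

```python
def expectimax(chance_node, dice, game):
    """Sum over rolls d of P(d) times the MAX player's best
    utility among the positions reachable under roll d."""
    return sum(p * max(game.utility(s) for s in game.moves(chance_node, d))
               for d, p in dice)

def expectimin(chance_node, dice, game):
    """Same weighted sum, but the successors are MIN nodes."""
    return sum(p * min(game.utility(s) for s in game.moves(chance_node, d))
               for d, p in dice)
```

In a full expectiminimax search, `utility` would itself recurse into the next MAX/MIN/chance layer; here it reads off terminal values to keep the weighted-sum structure visible.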

21 This assumes that one can generate the entire game tree, and for backgammon this is expensive: the number of possible states is estimated at over 10^20. Backgammon also has a high branching factor due to the dice rolls. At each turn there are 21 dice combinations possible, with an average of about 20 legal moves per roll, so there are over 400 branches! This is much larger than in checkers and chess (typical branching factors quoted for these games are 8-10 for checkers and about 35 for chess), and too large to reach significant depth. So consider ways of approximating the utility values.

22 Hans Berliner, Scientific American, 243(1), pp. 64-73, 1980
Berliner's program BKG 9.8 was the first computer program to defeat a world champion at any board or card game. It used evaluation functions; his experience was the following:
1. Started with a 'standard' function: each term represented a particular feature of a position and used constant coefficients to indicate the importance of each feature. The problem is that a constant coefficient represents only the average importance of a feature.
2. Divided backgammon positions into classes, each with a different function (motivated by alpha-beta pruning: whole classes could be ignored). Problems arose at the borders between classes: evaluation functions should yield close values there but didn't always.
3. Made transitions between classes smooth rather than abrupt. This led to the SNAC approach (smooth non-linear application coefficients), which included application coefficients: special, slowly changing variables that controlled the transition.

23 Checkers
One Jump Ahead: Challenging Human Supremacy in Checkers, J. Schaeffer, 1997, Springer-Verlag
2 players, 12 pieces each
Goal: Avoid being the player who can no longer move (usually when a player has no pieces left)
Rules:
Move forward on dark diagonals, 1 square at a time
An opponent's piece is captured by jumping it to the empty square diagonally behind it
A "king," a piece that can move both backward and forward, is created when a piece reaches the opponent's last row

24 Is checkers complex? Here are the total numbers of checkers positions, sorted according to the number of pieces on the board.

# PIECES   # POSITIONS
 1                             120
 2                           6,972
 3                         261,224
 4                       7,092,774
 5                     148,688,232
 6                   2,503,611,964
 7                  34,779,531,480
 8                 406,309,208,481
 9               4,048,627,642,976
10              34,778,882,769,216
11             259,669,578,902,016
12           1,695,618,078,654,976
13           9,726,900,031,328,256
14          49,134,911,067,979,776
15         218,511,510,918,189,056
16         852,888,183,557,922,816
17       2,905,162,728,973,680,640
18       8,568,043,414,939,516,928
19      21,661,954,506,100,113,408
20      46,352,957,062,510,379,008
21      82,459,728,874,435,248,128
22     118,435,747,136,817,856,512
23     129,406,908,049,181,900,800
24      90,072,726,844,888,186,880
Total: 500,995,484,682,338,672,639

25 Of particular interest are those positions where the material is even when there is an even number of pieces on the board, or differs by no more than one when there is an odd number of pieces present (for example, 4 vs 3 and 3 vs 4 for 7 pieces).

 1 vs  0: 60 (×2)                          1 vs  1: 3,488
 2 vs  1: 98,016 (×2)                      2 vs  2: 2,662,932
 3 vs  2: 46,520,744 (×2)                  3 vs  3: 783,806,128
 4 vs  3: 9,527,629,380 (×2)               4 vs  4: 111,378,534,401
 5 vs  4: 998,874,699,888 (×2)             5 vs  5: 8,586,481,972,128
 6 vs  5: 58,769,595,279,296 (×2)          6 vs  6: 384,033,878,250,176
 7 vs  6: 2,046,244,120,757,760 (×2)       7 vs  7: 10,359,927,057,187,840
 8 vs  7: 43,428,742,062,013,440 (×2)      8 vs  8: 171,975,762,422,069,760
 9 vs  8: 569,058,493,921,640,448 (×2)     9 vs  9: 1,765,698,358,650,175,488
10 vs  9: 4,596,454,069,579,874,304 (×2)  10 vs 10: 11,113,460,838,901,284,864
11 vs 10: 22,520,313,165,772,750,848 (×2) 11 vs 11: 41,842,926,176,229,654,528
12 vs 11: 64,703,454,024,590,950,400 (×2) 12 vs 12: 90,072,726,844,888,186,880

Total number of positions: 329,847,169,676,858,217,781
Brute force is hopeless.

26 Some History
The first checkers program was started in the late 1940s by Arthur Samuel at IBM (running several 'test' computers at a time overnight!). It used 'checker books' and:
alpha-beta search
convergence forward pruning (prune when the alpha-beta values become sufficiently close that it is unlikely much of an advantage would be found by pursuing the sub-tree)
tapered marginal forward pruning (alpha-beta pruning where a constant is added/subtracted to the backed-up values; tapered, because the value of the constant changes as the level increases)
shallow search for tapered n-best forward pruning (only the n best successors are pursued; n decreases as the depth of the search increases) and for plausibility move ordering
termination criteria: game over, minimum depth, maximum depth, forward pruning, dead position
27 checkers features with a linear evaluation function. Samuel defined new features, which he called signatures, in terms of the 27 original features. The signatures were no longer linear combinations of features; non-linearities in the form of feature interactions were possible.
Book moves were stored on magnetic tape (remember what that is?) and the programmer would control a 'sense switch' on the computer to tell the program to use a book move.

27 Chinook is the World Man-Machine Champion, the first computer program to win a human world championship. This feat is recognized by the Guinness Book of World Records. (On-line publications as well as complete championship games are available at their web site.) Chinook was developed by a team of researchers led by Dr. Jonathan Schaeffer of the Department of Computing Science at the University of Alberta.
Chinook's strength comes from deep search, a good evaluation function, and a database of all endgames with 8 pieces or less.
A typical checkers position has 8 legal moves (without captures; chess has 35-40), and a position with captures averages 1.25.
Uses alpha-beta search (minimum depth of 19 ply) with iterative deepening, 2 ply at a time.
Chinook divides the game into 5 phases: opening, middlegame, early endgame, late endgame, and database. Each of the first 4 phases uses a linear evaluation function of 22 variables, with the weights set manually. The last phase needs no evaluation function because it has perfect information from the endgame database.
There are a few positions where things are subtle; Chinook is unable to search deep enough to uncover these subtleties. Chinook uses an anti-book, a database of positions to avoid, to help with these (about 2000 of them).

28 Othello (Reversi) 2 players Black-and-white disks Goal: Have most disks on the board at the end of the game Rules: Players alternate placing disks on unoccupied board spaces If opponent's disks are trapped between other player's disks, opponent's disks are flipped to the other player's color

29 LOGISTELLO, written by Michael Buro, beat the world champion Othello player, Takeshi Murakami of Japan, in a match held in August 1997. It used a neural network to learn from previous games and improve its knowledge of the game over time, and beat Murakami 6 games out of 6.
Evaluation: game-stage-dependent tables for each of the following patterns:
horizontals/verticals of length 8
diagonals of length 4-8
3x3 corner
2x5 corner
edge+2X
Feature combination: linear
Search: NegaScout with corner quiescence search and multi-probcut; iterative deepening
Move sorting: hash table containing moves and value bounds (2^21 entries), response killer lists, shallow searches
Search speed (on a Pentium-Pro 200): middle-game ~160,000 nodes/sec; endgame ~480,000 nodes/sec
Search depth (in a 2x30 minutes game): middle-game selective, including brute-force ply; endgame win/loss/draw determination at empty squares, exact score 1-2 ply later
Opening book

30 The opening book:
consists for the moment of about games and evaluations of "best" move alternatives
is automatically updated
currently several machines are working all day long on book improvement

31 Othello Programs that Learn
Genetic algorithms seem very productive; Darwersi is one such program.
Genetic algorithms have six basic steps in common. One needs to determine a representation for members of the population and a way to measure 'fitness'.
First, a set of potential solutions must be initialized to form the starting population.
Second, each solution is evaluated according to its fitness.
Third, new solutions are created using mutation and crossover on the current population, typically with more crossover than mutation, say a 3:1 ratio. Conserving and combining features is generally more helpful than varying them. A mutated descendent differs from its parent in only a single bit. Crossover requires two parents and produces two offspring: select a random point along the binary vector and split each parent at this point. Selecting the individuals to breed requires some element of chance; in nature some animals are lucky and some unlucky, but those with better genes reproduce more. Genetic algorithms allow more offspring from high-scoring individuals than from low-scoring ones.
Fourth, if there is no space for the new offspring, room must be made within the current population for the new individuals.
Fifth, the new solutions are evaluated using the scoring system and inserted into the population.
Sixth, if there is no more time then stop; otherwise go back to step three and make some more individuals.
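The six steps can be sketched as a toy Python GA. This is not Darwersi's algorithm: the fitness function here is a stand-in (counting 1-bits in a binary genome) where an Othello program would score a genome by playing games, and the population size, crossover ratio, and selection scheme are illustrative choices.

```python
import random

def genetic_algorithm(fitness, n_bits=20, pop_size=30, generations=60, seed=0):
    """Toy GA following the six steps: initialize, evaluate,
    breed with ~3:1 crossover-to-mutation, replace, re-evaluate,
    and stop when the time budget (generations) runs out."""
    rng = random.Random(seed)
    # Step 1: random starting population of binary vectors.
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        # Step 2: evaluate; fitter individuals get to breed.
        parents = sorted(pop, key=fitness, reverse=True)[:pop_size // 2]
        children = []
        while len(children) < pop_size:
            p1, p2 = rng.sample(parents, 2)
            if rng.random() < 0.75:
                # Step 3a: single-point crossover (the common case).
                cut = rng.randrange(1, n_bits)
                child = p1[:cut] + p2[cut:]
            else:
                # Step 3b: mutation, differing from the parent in one bit.
                child = p1[:]
                child[rng.randrange(n_bits)] ^= 1
            children.append(child)
        # Steps 4-5: the offspring replace the old population.
        pop = children
    # Step 6: out of time; return the best individual found.
    return max(pop, key=fitness)

best = genetic_algorithm(sum)   # fitness = number of 1-bits
```

With this stand-in fitness, the population converges toward the all-ones genome, which is the GA analogue of evolving ever-stronger evaluation weights.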


33 CHESS 2 players ; 16 pieces each (1 king, 1 queen, 2 rooks, 2 bishops, 2 knights, 8 pawns) Goal: Capture opponent's king (checkmate) Rules: Pieces are captured when landed on by opponent's piece Type of piece dictates movement options

34 Early Chess Programs
Alan Turing designed the first chess program for a computer in 1951; it was hand-simulated and never actually programmed. He used a depth-first minimax procedure.
The Los Alamos program, written by Kister et al. in 1957, used depth-first minimax on a 6x6 chessboard.
Allen Newell was the first to apply alpha-beta to chess, in 1958. The evaluation functions were based on experiments on several world champions. John McCarthy later also used alpha-beta search with a linear evaluation function; his Stanford program competed with a program developed at the Moscow Institute of Theoretical and Experimental Physics by G. M. Adelson-Velskiy, and in 1968 the Moscow program beat the Stanford program 2-1.
Richard Greenblatt, Donald Eastlake, and Stephen Crocker at MIT wrote an early chess program using alpha-beta search in 1967, called Mac Hack. Their program evaluates the moves from a position and not from the successor positions (for efficiency), so the search is shallow (1 level). It uses those results for plausibility ordering of moves and for tapered n-best forward pruning of moves. The program also has book openings and detects duplicate positions in the game tree, both to avoid duplicate searches and to detect draws by repetition of positions. It makes its move in about a minute, and most good players beat it easily.

35 Deep Blue
Deep Blue's evaluation function looks at four basic chess values: material, position, King safety, and tempo.
Material is based on the "worth" of particular chess pieces. For example, if a pawn is valued at 1, then a rook is worth 5 and the Queen is valued at 9. The King, of course, is beyond value, because his loss means the loss of the game.
The simplest way to understand position is by looking at your pieces and counting the number of safe squares they can attack.
King safety is a defensive aspect of position. It is determined by assigning a value to the safety of the King's position in order to know how to make a purely defensive move.
Tempo is related to position but focuses on the race to develop control of the board. A player is said to "lose a tempo" if he dillydallies while the opponent is making more productive advances.
Deep Blue is not only the finest chess-playing computer in the world, it is also the fastest. This makes perfect sense, because history has proven that the fastest computers conduct the most extensive searches into possible positions. More searches give the computer a wider array of moves to choose from and therefore a greater chance of choosing the optimum move. Deep Blue employs a system called selective extensions to examine chessboard positions.

36 Selective extensions allow the computer to search more efficiently and deeply into critical board arrangements. Instead of attempting an exhaustive "brute force" search into every possible position, Deep Blue selectively chooses distinct paths to follow, eliminating irrelevant searches in the process.
Deep Blue uses "live" software that can generate up to 200,000,000 positions per second when searching for the optimum move. The software begins this process by taking a strategic look at the board. It then computes everything it knows about the current position, integrates the chess information preprogrammed by the development team, and generates a multitude of new possible arrangements. From these, it chooses its best possible next move.
Deep Blue's extensive searches make full use of the computer's massively parallel design. "At the search level you're saying 'OK, here's the position. I need to search all the moves,'" says Joe Hoane, the Deep Blue development team member in charge of software. "And you go search all the moves, all at the same time, preferably on a bunch of different computers."
The software inside of Deep Blue is one all-inclusive program written in C, running under the AIX operating system. Deep Blue utilizes the IBM SP Parallel System called MPI. "It's a message-passing system," says Hoane. "So the search is just all control logic. You're passing control messages back and forth that say, well, what am I doing? Did you finish this? OK, here's your next job. That kind of thing at the SP level."
The latest iteration of the Deep Blue computer is a 32-node IBM RS/6000 SP high-performance computer, which utilizes the new Power Two Super Chip processors (P2SC). Each node of the SP employs a single microchannel card containing 8 dedicated VLSI chess processors, for a total of 256 processors working in tandem.
The net result is a scalable, highly parallel system capable of calculating 60 billion moves within three minutes, which is the time allotted to each player's move in classical chess.

37 Deep Blue vs Garry Kasparov
1. Deep Blue can examine and evaluate up to 200,000,000 chess positions per second. Garry Kasparov can examine and evaluate up to three chess positions per second.
2. Deep Blue has a small amount of chess knowledge and an enormous amount of calculation ability. Garry Kasparov has a large amount of chess knowledge and a somewhat smaller amount of calculation ability.
3. Garry Kasparov uses his tremendous sense of feeling and intuition to play world-champion-calibre chess. Deep Blue is a machine that is incapable of feeling or intuition.
4. Deep Blue has benefited from the guidance of five IBM research scientists and one international grandmaster. Garry Kasparov is guided by his coach Yuri Dokhoian and by his own driving passion to play the finest chess in the world.
5. Garry Kasparov is able to learn and adapt very quickly from his own successes and mistakes. Deep Blue, as it stands today, is not a "learning system." It is therefore not capable of utilizing artificial intelligence to either learn from its opponent or "think" about the current position of the chessboard.
6. Deep Blue can never forget, be distracted, or feel intimidated by external forces (such as Kasparov's infamous "stare"). Garry Kasparov is an intense competitor, but he is still susceptible to human frailties such as fatigue, boredom, and loss of concentration.
7. Deep Blue is stunningly effective at solving chess problems, but it is less "intelligent" than even the stupidest human. Garry Kasparov is highly intelligent. He has authored three books, speaks a variety of languages, is active politically, and is a regular guest speaker at international conferences.

38 8. Any changes in the way Deep Blue plays chess must be performed by the members of the development team between games. Garry Kasparov can alter the way he plays at any time before, during, and/or after each game.
9. Garry Kasparov is skilled at evaluating his opponents, sensing their weaknesses, then taking advantage of those weaknesses. While Deep Blue is quite adept at evaluating chess positions, it cannot evaluate its opponent's weaknesses.
10. Garry Kasparov is able to determine his next move by selectively searching through the possible positions. Deep Blue must conduct a very thorough search into the possible positions to determine the optimal move (which isn't so bad when you can search up to 200 million positions per second).


More information

Adversarial Search and Game- Playing C H A P T E R 6 C M P T : S P R I N G H A S S A N K H O S R A V I

Adversarial Search and Game- Playing C H A P T E R 6 C M P T : S P R I N G H A S S A N K H O S R A V I Adversarial Search and Game- Playing C H A P T E R 6 C M P T 3 1 0 : S P R I N G 2 0 1 1 H A S S A N K H O S R A V I Adversarial Search Examine the problems that arise when we try to plan ahead in a world

More information

Adversarial Search (Game Playing)

Adversarial Search (Game Playing) Artificial Intelligence Adversarial Search (Game Playing) Chapter 5 Adapted from materials by Tim Finin, Marie desjardins, and Charles R. Dyer Outline Game playing State of the art and resources Framework

More information

Today. Types of Game. Games and Search 1/18/2010. COMP210: Artificial Intelligence. Lecture 10. Game playing

Today. Types of Game. Games and Search 1/18/2010. COMP210: Artificial Intelligence. Lecture 10. Game playing COMP10: Artificial Intelligence Lecture 10. Game playing Trevor Bench-Capon Room 15, Ashton Building Today We will look at how search can be applied to playing games Types of Games Perfect play minimax

More information

Adversarial Search. Soleymani. Artificial Intelligence: A Modern Approach, 3 rd Edition, Chapter 5

Adversarial Search. Soleymani. Artificial Intelligence: A Modern Approach, 3 rd Edition, Chapter 5 Adversarial Search CE417: Introduction to Artificial Intelligence Sharif University of Technology Spring 2017 Soleymani Artificial Intelligence: A Modern Approach, 3 rd Edition, Chapter 5 Outline Game

More information

Artificial Intelligence. Topic 5. Game playing

Artificial Intelligence. Topic 5. Game playing Artificial Intelligence Topic 5 Game playing broadening our world view dealing with incompleteness why play games? perfect decisions the Minimax algorithm dealing with resource limits evaluation functions

More information

Contents. Foundations of Artificial Intelligence. Problems. Why Board Games?

Contents. Foundations of Artificial Intelligence. Problems. Why Board Games? Contents Foundations of Artificial Intelligence 6. Board Games Search Strategies for Games, Games with Chance, State of the Art Wolfram Burgard, Bernhard Nebel, and Martin Riedmiller Albert-Ludwigs-Universität

More information

CS 331: Artificial Intelligence Adversarial Search II. Outline

CS 331: Artificial Intelligence Adversarial Search II. Outline CS 331: Artificial Intelligence Adversarial Search II 1 Outline 1. Evaluation Functions 2. State-of-the-art game playing programs 3. 2 player zero-sum finite stochastic games of perfect information 2 1

More information

Game playing. Chapter 5, Sections 1{5. AIMA Slides cstuart Russell and Peter Norvig, 1998 Chapter 5, Sections 1{5 1

Game playing. Chapter 5, Sections 1{5. AIMA Slides cstuart Russell and Peter Norvig, 1998 Chapter 5, Sections 1{5 1 Game playing Chapter 5, Sections 1{5 AIMA Slides cstuart Russell and Peter Norvig, 1998 Chapter 5, Sections 1{5 1 } Perfect play } Resource limits } { pruning } Games of chance Outline AIMA Slides cstuart

More information

CS 771 Artificial Intelligence. Adversarial Search

CS 771 Artificial Intelligence. Adversarial Search CS 771 Artificial Intelligence Adversarial Search Typical assumptions Two agents whose actions alternate Utility values for each agent are the opposite of the other This creates the adversarial situation

More information

Game-playing AIs: Games and Adversarial Search FINAL SET (w/ pruning study examples) AIMA

Game-playing AIs: Games and Adversarial Search FINAL SET (w/ pruning study examples) AIMA Game-playing AIs: Games and Adversarial Search FINAL SET (w/ pruning study examples) AIMA 5.1-5.2 Games: Outline of Unit Part I: Games as Search Motivation Game-playing AI successes Game Trees Evaluation

More information

Foundations of AI. 5. Board Games. Search Strategies for Games, Games with Chance, State of the Art. Wolfram Burgard and Luc De Raedt SA-1

Foundations of AI. 5. Board Games. Search Strategies for Games, Games with Chance, State of the Art. Wolfram Burgard and Luc De Raedt SA-1 Foundations of AI 5. Board Games Search Strategies for Games, Games with Chance, State of the Art Wolfram Burgard and Luc De Raedt SA-1 Contents Board Games Minimax Search Alpha-Beta Search Games with

More information

COMP219: Artificial Intelligence. Lecture 13: Game Playing

COMP219: Artificial Intelligence. Lecture 13: Game Playing CMP219: Artificial Intelligence Lecture 13: Game Playing 1 verview Last time Search with partial/no observations Belief states Incremental belief state search Determinism vs non-determinism Today We will

More information

Adversarial Search and Game Playing

Adversarial Search and Game Playing Games Adversarial Search and Game Playing Russell and Norvig, 3 rd edition, Ch. 5 Games: multi-agent environment q What do other agents do and how do they affect our success? q Cooperative vs. competitive

More information

CS 188: Artificial Intelligence

CS 188: Artificial Intelligence CS 188: Artificial Intelligence Adversarial Search Instructor: Stuart Russell University of California, Berkeley Game Playing State-of-the-Art Checkers: 1950: First computer player. 1959: Samuel s self-taught

More information

Foundations of AI. 6. Adversarial Search. Search Strategies for Games, Games with Chance, State of the Art. Wolfram Burgard & Bernhard Nebel

Foundations of AI. 6. Adversarial Search. Search Strategies for Games, Games with Chance, State of the Art. Wolfram Burgard & Bernhard Nebel Foundations of AI 6. Adversarial Search Search Strategies for Games, Games with Chance, State of the Art Wolfram Burgard & Bernhard Nebel Contents Game Theory Board Games Minimax Search Alpha-Beta Search

More information

Adversarial Search. Human-aware Robotics. 2018/01/25 Chapter 5 in R&N 3rd Ø Announcement: Slides for this lecture are here:

Adversarial Search. Human-aware Robotics. 2018/01/25 Chapter 5 in R&N 3rd Ø Announcement: Slides for this lecture are here: Adversarial Search 2018/01/25 Chapter 5 in R&N 3rd Ø Announcement: q Slides for this lecture are here: http://www.public.asu.edu/~yzhan442/teaching/cse471/lectures/adversarial.pdf Slides are largely based

More information

Adversarial Search. Chapter 5. Mausam (Based on slides of Stuart Russell, Andrew Parks, Henry Kautz, Linda Shapiro) 1

Adversarial Search. Chapter 5. Mausam (Based on slides of Stuart Russell, Andrew Parks, Henry Kautz, Linda Shapiro) 1 Adversarial Search Chapter 5 Mausam (Based on slides of Stuart Russell, Andrew Parks, Henry Kautz, Linda Shapiro) 1 Game Playing Why do AI researchers study game playing? 1. It s a good reasoning problem,

More information

Artificial Intelligence, CS, Nanjing University Spring, 2018, Yang Yu. Lecture 4: Search 3.

Artificial Intelligence, CS, Nanjing University Spring, 2018, Yang Yu. Lecture 4: Search 3. Artificial Intelligence, CS, Nanjing University Spring, 2018, Yang Yu Lecture 4: Search 3 http://cs.nju.edu.cn/yuy/course_ai18.ashx Previously... Path-based search Uninformed search Depth-first, breadth

More information

CS440/ECE448 Lecture 9: Minimax Search. Slides by Svetlana Lazebnik 9/2016 Modified by Mark Hasegawa-Johnson 9/2017

CS440/ECE448 Lecture 9: Minimax Search. Slides by Svetlana Lazebnik 9/2016 Modified by Mark Hasegawa-Johnson 9/2017 CS440/ECE448 Lecture 9: Minimax Search Slides by Svetlana Lazebnik 9/2016 Modified by Mark Hasegawa-Johnson 9/2017 Why study games? Games are a traditional hallmark of intelligence Games are easy to formalize

More information

6. Games. COMP9414/ 9814/ 3411: Artificial Intelligence. Outline. Mechanical Turk. Origins. origins. motivation. minimax search

6. Games. COMP9414/ 9814/ 3411: Artificial Intelligence. Outline. Mechanical Turk. Origins. origins. motivation. minimax search COMP9414/9814/3411 16s1 Games 1 COMP9414/ 9814/ 3411: Artificial Intelligence 6. Games Outline origins motivation Russell & Norvig, Chapter 5. minimax search resource limits and heuristic evaluation α-β

More information

Games and Adversarial Search

Games and Adversarial Search 1 Games and Adversarial Search BBM 405 Fundamentals of Artificial Intelligence Pinar Duygulu Hacettepe University Slides are mostly adapted from AIMA, MIT Open Courseware and Svetlana Lazebnik (UIUC) Spring

More information

Adversarial Search Lecture 7

Adversarial Search Lecture 7 Lecture 7 How can we use search to plan ahead when other agents are planning against us? 1 Agenda Games: context, history Searching via Minimax Scaling α β pruning Depth-limiting Evaluation functions Handling

More information

Intuition Mini-Max 2

Intuition Mini-Max 2 Games Today Saying Deep Blue doesn t really think about chess is like saying an airplane doesn t really fly because it doesn t flap its wings. Drew McDermott I could feel I could smell a new kind of intelligence

More information

CS 5522: Artificial Intelligence II

CS 5522: Artificial Intelligence II CS 5522: Artificial Intelligence II Adversarial Search Instructor: Alan Ritter Ohio State University [These slides were adapted from CS188 Intro to AI at UC Berkeley. All materials available at http://ai.berkeley.edu.]

More information

Games CSE 473. Kasparov Vs. Deep Junior August 2, 2003 Match ends in a 3 / 3 tie!

Games CSE 473. Kasparov Vs. Deep Junior August 2, 2003 Match ends in a 3 / 3 tie! Games CSE 473 Kasparov Vs. Deep Junior August 2, 2003 Match ends in a 3 / 3 tie! Games in AI In AI, games usually refers to deteristic, turntaking, two-player, zero-sum games of perfect information Deteristic:

More information

Foundations of AI. 6. Board Games. Search Strategies for Games, Games with Chance, State of the Art

Foundations of AI. 6. Board Games. Search Strategies for Games, Games with Chance, State of the Art Foundations of AI 6. Board Games Search Strategies for Games, Games with Chance, State of the Art Wolfram Burgard, Andreas Karwath, Bernhard Nebel, and Martin Riedmiller SA-1 Contents Board Games Minimax

More information

Outline. Game Playing. Game Problems. Game Problems. Types of games Playing a perfect game. Playing an imperfect game

Outline. Game Playing. Game Problems. Game Problems. Types of games Playing a perfect game. Playing an imperfect game Outline Game Playing ECE457 Applied Artificial Intelligence Fall 2007 Lecture #5 Types of games Playing a perfect game Minimax search Alpha-beta pruning Playing an imperfect game Real-time Imperfect information

More information

CPS 570: Artificial Intelligence Two-player, zero-sum, perfect-information Games

CPS 570: Artificial Intelligence Two-player, zero-sum, perfect-information Games CPS 57: Artificial Intelligence Two-player, zero-sum, perfect-information Games Instructor: Vincent Conitzer Game playing Rich tradition of creating game-playing programs in AI Many similarities to search

More information

Game Playing State-of-the-Art

Game Playing State-of-the-Art Adversarial Search [These slides were created by Dan Klein and Pieter Abbeel for CS188 Intro to AI at UC Berkeley. All CS188 materials are available at http://ai.berkeley.edu.] Game Playing State-of-the-Art

More information

Ch.4 AI and Games. Hantao Zhang. The University of Iowa Department of Computer Science. hzhang/c145

Ch.4 AI and Games. Hantao Zhang. The University of Iowa Department of Computer Science.   hzhang/c145 Ch.4 AI and Games Hantao Zhang http://www.cs.uiowa.edu/ hzhang/c145 The University of Iowa Department of Computer Science Artificial Intelligence p.1/29 Chess: Computer vs. Human Deep Blue is a chess-playing

More information

Set 4: Game-Playing. ICS 271 Fall 2017 Kalev Kask

Set 4: Game-Playing. ICS 271 Fall 2017 Kalev Kask Set 4: Game-Playing ICS 271 Fall 2017 Kalev Kask Overview Computer programs that play 2-player games game-playing as search with the complication of an opponent General principles of game-playing and search

More information

Artificial Intelligence. Minimax and alpha-beta pruning

Artificial Intelligence. Minimax and alpha-beta pruning Artificial Intelligence Minimax and alpha-beta pruning In which we examine the problems that arise when we try to plan ahead to get the best result in a world that includes a hostile agent (other agent

More information

Adversary Search. Ref: Chapter 5

Adversary Search. Ref: Chapter 5 Adversary Search Ref: Chapter 5 1 Games & A.I. Easy to measure success Easy to represent states Small number of operators Comparison against humans is possible. Many games can be modeled very easily, although

More information

CITS3001. Algorithms, Agents and Artificial Intelligence. Semester 2, 2016 Tim French

CITS3001. Algorithms, Agents and Artificial Intelligence. Semester 2, 2016 Tim French CITS3001 Algorithms, Agents and Artificial Intelligence Semester 2, 2016 Tim French School of Computer Science & Software Eng. The University of Western Australia 8. Game-playing AIMA, Ch. 5 Objectives

More information

CPS331 Lecture: Search in Games last revised 2/16/10

CPS331 Lecture: Search in Games last revised 2/16/10 CPS331 Lecture: Search in Games last revised 2/16/10 Objectives: 1. To introduce mini-max search 2. To introduce the use of static evaluation functions 3. To introduce alpha-beta pruning Materials: 1.

More information

Artificial Intelligence

Artificial Intelligence Artificial Intelligence Adversarial Search Instructors: David Suter and Qince Li Course Delivered @ Harbin Institute of Technology [Many slides adapted from those created by Dan Klein and Pieter Abbeel

More information

School of EECS Washington State University. Artificial Intelligence

School of EECS Washington State University. Artificial Intelligence School of EECS Washington State University Artificial Intelligence 1 } Classic AI challenge Easy to represent Difficult to solve } Zero-sum games Total final reward to all players is constant } Perfect

More information

Artificial Intelligence Adversarial Search

Artificial Intelligence Adversarial Search Artificial Intelligence Adversarial Search Adversarial Search Adversarial search problems games They occur in multiagent competitive environments There is an opponent we can t control planning again us!

More information

Adversarial Search 1

Adversarial Search 1 Adversarial Search 1 Adversarial Search The ghosts trying to make pacman loose Can not come up with a giant program that plans to the end, because of the ghosts and their actions Goal: Eat lots of dots

More information

Adversarial Search (a.k.a. Game Playing)

Adversarial Search (a.k.a. Game Playing) Adversarial Search (a.k.a. Game Playing) Chapter 5 (Adapted from Stuart Russell, Dan Klein, and others. Thanks guys!) Outline Games Perfect play: principles of adversarial search minimax decisions α β

More information

Game Playing State-of-the-Art. CS 188: Artificial Intelligence. Behavior from Computation. Video of Demo Mystery Pacman. Adversarial Search

Game Playing State-of-the-Art. CS 188: Artificial Intelligence. Behavior from Computation. Video of Demo Mystery Pacman. Adversarial Search CS 188: Artificial Intelligence Adversarial Search Instructor: Marco Alvarez University of Rhode Island (These slides were created/modified by Dan Klein, Pieter Abbeel, Anca Dragan for CS188 at UC Berkeley)

More information

CS 188: Artificial Intelligence

CS 188: Artificial Intelligence CS 188: Artificial Intelligence Adversarial Search Prof. Scott Niekum The University of Texas at Austin [These slides are based on those of Dan Klein and Pieter Abbeel for CS188 Intro to AI at UC Berkeley.

More information

Chapter 6. Overview. Why study games? State of the art. Game playing State of the art and resources Framework

Chapter 6. Overview. Why study games? State of the art. Game playing State of the art and resources Framework Overview Chapter 6 Game playing State of the art and resources Framework Game trees Minimax Alpha-beta pruning Adding randomness Some material adopted from notes by Charles R. Dyer, University of Wisconsin-Madison

More information

Artificial Intelligence

Artificial Intelligence Artificial Intelligence Adversarial Search Vibhav Gogate The University of Texas at Dallas Some material courtesy of Rina Dechter, Alex Ihler and Stuart Russell, Luke Zettlemoyer, Dan Weld Adversarial

More information

CS 4700: Foundations of Artificial Intelligence

CS 4700: Foundations of Artificial Intelligence CS 4700: Foundations of Artificial Intelligence selman@cs.cornell.edu Module: Adversarial Search R&N: Chapter 5 1 Outline Adversarial Search Optimal decisions Minimax α-β pruning Case study: Deep Blue

More information

Adversarial Search. CMPSCI 383 September 29, 2011

Adversarial Search. CMPSCI 383 September 29, 2011 Adversarial Search CMPSCI 383 September 29, 2011 1 Why are games interesting to AI? Simple to represent and reason about Must consider the moves of an adversary Time constraints Russell & Norvig say: Games,

More information

ARTIFICIAL INTELLIGENCE (CS 370D)

ARTIFICIAL INTELLIGENCE (CS 370D) Princess Nora University Faculty of Computer & Information Systems ARTIFICIAL INTELLIGENCE (CS 370D) (CHAPTER-5) ADVERSARIAL SEARCH ADVERSARIAL SEARCH Optimal decisions Min algorithm α-β pruning Imperfect,

More information

Adversarial Search. Read AIMA Chapter CIS 421/521 - Intro to AI 1

Adversarial Search. Read AIMA Chapter CIS 421/521 - Intro to AI 1 Adversarial Search Read AIMA Chapter 5.2-5.5 CIS 421/521 - Intro to AI 1 Adversarial Search Instructors: Dan Klein and Pieter Abbeel University of California, Berkeley [These slides were created by Dan

More information

Adversarial Search. Chapter 5. Mausam (Based on slides of Stuart Russell, Andrew Parks, Henry Kautz, Linda Shapiro, Diane Cook) 1

Adversarial Search. Chapter 5. Mausam (Based on slides of Stuart Russell, Andrew Parks, Henry Kautz, Linda Shapiro, Diane Cook) 1 Adversarial Search Chapter 5 Mausam (Based on slides of Stuart Russell, Andrew Parks, Henry Kautz, Linda Shapiro, Diane Cook) 1 Game Playing Why do AI researchers study game playing? 1. It s a good reasoning

More information

Adversarial Search and Game Playing. Russell and Norvig: Chapter 5

Adversarial Search and Game Playing. Russell and Norvig: Chapter 5 Adversarial Search and Game Playing Russell and Norvig: Chapter 5 Typical case 2-person game Players alternate moves Zero-sum: one player s loss is the other s gain Perfect information: both players have

More information

Artificial Intelligence Search III

Artificial Intelligence Search III Artificial Intelligence Search III Lecture 5 Content: Search III Quick Review on Lecture 4 Why Study Games? Game Playing as Search Special Characteristics of Game Playing Search Ingredients of 2-Person

More information

Games and Adversarial Search II

Games and Adversarial Search II Games and Adversarial Search II Alpha-Beta Pruning (AIMA 5.3) Some slides adapted from Richard Lathrop, USC/ISI, CS 271 Review: The Minimax Rule Idea: Make the best move for MAX assuming that MIN always

More information

Artificial Intelligence

Artificial Intelligence Artificial Intelligence CS482, CS682, MW 1 2:15, SEM 201, MS 227 Prerequisites: 302, 365 Instructor: Sushil Louis, sushil@cse.unr.edu, http://www.cse.unr.edu/~sushil Non-classical search - Path does not

More information

Ar#ficial)Intelligence!!

Ar#ficial)Intelligence!! Introduc*on! Ar#ficial)Intelligence!! Roman Barták Department of Theoretical Computer Science and Mathematical Logic So far we assumed a single-agent environment, but what if there are more agents and

More information

Announcements. Homework 1. Project 1. Due tonight at 11:59pm. Due Friday 2/8 at 4:00pm. Electronic HW1 Written HW1

Announcements. Homework 1. Project 1. Due tonight at 11:59pm. Due Friday 2/8 at 4:00pm. Electronic HW1 Written HW1 Announcements Homework 1 Due tonight at 11:59pm Project 1 Electronic HW1 Written HW1 Due Friday 2/8 at 4:00pm CS 188: Artificial Intelligence Adversarial Search and Game Trees Instructors: Sergey Levine

More information

CSE 473: Artificial Intelligence. Outline

CSE 473: Artificial Intelligence. Outline CSE 473: Artificial Intelligence Adversarial Search Dan Weld Based on slides from Dan Klein, Stuart Russell, Pieter Abbeel, Andrew Moore and Luke Zettlemoyer (best illustrations from ai.berkeley.edu) 1

More information

CS 440 / ECE 448 Introduction to Artificial Intelligence Spring 2010 Lecture #5

CS 440 / ECE 448 Introduction to Artificial Intelligence Spring 2010 Lecture #5 CS 440 / ECE 448 Introduction to Artificial Intelligence Spring 2010 Lecture #5 Instructor: Eyal Amir Grad TAs: Wen Pu, Yonatan Bisk Undergrad TAs: Sam Johnson, Nikhil Johri Topics Game playing Game trees

More information

Artificial Intelligence

Artificial Intelligence Artificial Intelligence CS482, CS682, MW 1 2:15, SEM 201, MS 227 Prerequisites: 302, 365 Instructor: Sushil Louis, sushil@cse.unr.edu, http://www.cse.unr.edu/~sushil Games and game trees Multi-agent systems

More information

Game Playing AI Class 8 Ch , 5.4.1, 5.5

Game Playing AI Class 8 Ch , 5.4.1, 5.5 Game Playing AI Class Ch. 5.-5., 5.4., 5.5 Bookkeeping HW Due 0/, :59pm Remaining CSP questions? Cynthia Matuszek CMSC 6 Based on slides by Marie desjardin, Francisco Iacobelli Today s Class Clear criteria

More information

Game-playing AIs: Games and Adversarial Search I AIMA

Game-playing AIs: Games and Adversarial Search I AIMA Game-playing AIs: Games and Adversarial Search I AIMA 5.1-5.2 Games: Outline of Unit Part I: Games as Search Motivation Game-playing AI successes Game Trees Evaluation Functions Part II: Adversarial Search

More information

Lecture 14. Questions? Friday, February 10 CS 430 Artificial Intelligence - Lecture 14 1

Lecture 14. Questions? Friday, February 10 CS 430 Artificial Intelligence - Lecture 14 1 Lecture 14 Questions? Friday, February 10 CS 430 Artificial Intelligence - Lecture 14 1 Outline Chapter 5 - Adversarial Search Alpha-Beta Pruning Imperfect Real-Time Decisions Stochastic Games Friday,

More information

CS 188: Artificial Intelligence Spring 2007

CS 188: Artificial Intelligence Spring 2007 CS 188: Artificial Intelligence Spring 2007 Lecture 7: CSP-II and Adversarial Search 2/6/2007 Srini Narayanan ICSI and UC Berkeley Many slides over the course adapted from Dan Klein, Stuart Russell or

More information

Local Search. Hill Climbing. Hill Climbing Diagram. Simulated Annealing. Simulated Annealing. Introduction to Artificial Intelligence

Local Search. Hill Climbing. Hill Climbing Diagram. Simulated Annealing. Simulated Annealing. Introduction to Artificial Intelligence Introduction to Artificial Intelligence V22.0472-001 Fall 2009 Lecture 6: Adversarial Search Local Search Queue-based algorithms keep fallback options (backtracking) Local search: improve what you have

More information

Game Playing. Garry Kasparov and Deep Blue. 1997, GM Gabriel Schwartzman's Chess Camera, courtesy IBM.

Game Playing. Garry Kasparov and Deep Blue. 1997, GM Gabriel Schwartzman's Chess Camera, courtesy IBM. Game Playing Garry Kasparov and Deep Blue. 1997, GM Gabriel Schwartzman's Chess Camera, courtesy IBM. Game Playing In most tree search scenarios, we have assumed the situation is not going to change whilst

More information

CS 188: Artificial Intelligence Spring Announcements

CS 188: Artificial Intelligence Spring Announcements CS 188: Artificial Intelligence Spring 2011 Lecture 7: Minimax and Alpha-Beta Search 2/9/2011 Pieter Abbeel UC Berkeley Many slides adapted from Dan Klein 1 Announcements W1 out and due Monday 4:59pm P2

More information

Unit-III Chap-II Adversarial Search. Created by: Ashish Shah 1

Unit-III Chap-II Adversarial Search. Created by: Ashish Shah 1 Unit-III Chap-II Adversarial Search Created by: Ashish Shah 1 Alpha beta Pruning In case of standard ALPHA BETA PRUNING minimax tree, it returns the same move as minimax would, but prunes away branches

More information

Game Playing State-of-the-Art CSE 473: Artificial Intelligence Fall Deterministic Games. Zero-Sum Games 10/13/17. Adversarial Search

Game Playing State-of-the-Art CSE 473: Artificial Intelligence Fall Deterministic Games. Zero-Sum Games 10/13/17. Adversarial Search CSE 473: Artificial Intelligence Fall 2017 Adversarial Search Mini, pruning, Expecti Dieter Fox Based on slides adapted Luke Zettlemoyer, Dan Klein, Pieter Abbeel, Dan Weld, Stuart Russell or Andrew Moore

More information

Adversarial Search. Hal Daumé III. Computer Science University of Maryland CS 421: Introduction to Artificial Intelligence 9 Feb 2012

Adversarial Search. Hal Daumé III. Computer Science University of Maryland CS 421: Introduction to Artificial Intelligence 9 Feb 2012 1 Hal Daumé III (me@hal3.name) Adversarial Search Hal Daumé III Computer Science University of Maryland me@hal3.name CS 421: Introduction to Artificial Intelligence 9 Feb 2012 Many slides courtesy of Dan

More information

Path Planning as Search

Path Planning as Search Path Planning as Search Paul Robertson 16.410 16.413 Session 7 Slides adapted from: Brian C. Williams 6.034 Tomas Lozano Perez, Winston, and Russell and Norvig AIMA 1 Assignment Remember: Online problem

More information

Adversarial Search: Game Playing. Reading: Chapter

Adversarial Search: Game Playing. Reading: Chapter Adversarial Search: Game Playing Reading: Chapter 6.5-6.8 1 Games and AI Easy to represent, abstract, precise rules One of the first tasks undertaken by AI (since 1950) Better than humans in Othello and

More information

Game Playing Beyond Minimax. Game Playing Summary So Far. Game Playing Improving Efficiency. Game Playing Minimax using DFS.

Game Playing Beyond Minimax. Game Playing Summary So Far. Game Playing Improving Efficiency. Game Playing Minimax using DFS. Game Playing Summary So Far Game tree describes the possible sequences of play is a graph if we merge together identical states Minimax: utility values assigned to the leaves Values backed up the tree

More information

Announcements. CS 188: Artificial Intelligence Fall Local Search. Hill Climbing. Simulated Annealing. Hill Climbing Diagram

Announcements. CS 188: Artificial Intelligence Fall Local Search. Hill Climbing. Simulated Annealing. Hill Climbing Diagram CS 188: Artificial Intelligence Fall 2008 Lecture 6: Adversarial Search 9/16/2008 Dan Klein UC Berkeley Many slides over the course adapted from either Stuart Russell or Andrew Moore 1 Announcements Project

More information

Outline. Introduction. Game-Tree Search. What are games and why are they interesting? History and State-of-the-art in Game Playing

Outline. Introduction. Game-Tree Search. What are games and why are they interesting? History and State-of-the-art in Game Playing Outline Introduction Game-Tree Search Minimax Negamax α-β pruning Real-time Game-Tree Search What are games and why are they interesting? History and State-of-the-art in Game Playing NegaScout evaluation

More information

Adversarial Search. Rob Platt Northeastern University. Some images and slides are used from: AIMA CS188 UC Berkeley

Adversarial Search. Rob Platt Northeastern University. Some images and slides are used from: AIMA CS188 UC Berkeley Adversarial Search Rob Platt Northeastern University Some images and slides are used from: AIMA CS188 UC Berkeley What is adversarial search? Adversarial search: planning used to play a game such as chess

More information

Solving Problems by Searching: Adversarial Search

Solving Problems by Searching: Adversarial Search Course 440 : Introduction To rtificial Intelligence Lecture 5 Solving Problems by Searching: dversarial Search bdeslam Boularias Friday, October 7, 2016 1 / 24 Outline We examine the problems that arise

More information

CSE 573: Artificial Intelligence

CSE 573: Artificial Intelligence CSE 573: Artificial Intelligence Adversarial Search Dan Weld Based on slides from Dan Klein, Stuart Russell, Pieter Abbeel, Andrew Moore and Luke Zettlemoyer (best illustrations from ai.berkeley.edu) 1

More information