Adversarial search (game playing)

References:
- Russell and Norvig, Artificial Intelligence: A Modern Approach, 2nd ed. Prentice Hall, 2003
- Nilsson, Artificial Intelligence: A New Synthesis. McGraw-Hill, 2001
Outline
- Perfect play
- Resource limits
- α-β pruning
- Games of chance
- Games of imperfect information
Games vs. search problems
- Unpredictable opponent: we cannot know its next move, so a solution is a strategy specifying a move for every possible opponent reply
- Time limits: we are unlikely to reach the goal, so we must approximate
Games vs. search problems
History:
- Computer considers possible lines of play (Babbage, 1846)
- Algorithm for perfect play (Zermelo, 1912; Von Neumann, 1944)
- Finite horizon, approximate evaluation (Zuse, 1945; Wiener, 1948; Shannon, 1950)
- First chess program (Turing, 1951)
- Machine learning to improve evaluation accuracy (Samuel, 1952-57)
- Pruning to allow deeper search (McCarthy, 1956)
Types of games
- Complete information, deterministic: chess, checkers, go, othello, tic-tac-toe
- Complete information, with chance: backgammon, Monopoly
- Incomplete information: bridge, poker, scrabble, nuclear war
Other applications: robots that cooperate or compete
Game tree (tic-tac-toe): two players take turns; deterministic game
Minimax
Perfect play for deterministic, perfect-information games
Idea: choose the move whose associated position has the highest minimax value = best achievable payoff against best play
e.g., a 2-ply game (ply = half move; one move = two plies)
In the following, let us call MAX player 1 and MIN player 2 (the reason will become clear later on)
Minimax
Minimax Algorithm
defun MINIMAX-DECISION(game, state) returns action
  value ← -∞
  for each <a, s> in SUCCESSORS(game, state) do
    v ← MINIMAX-VALUE(game, s)
    if value < v then action ← a; value ← v
  return action

defun MINIMAX-VALUE(game, state) returns payoff
  if TERMINAL?(state) then return UTILITY(state)
  if [MAX is to move in state]
  then return [the highest MINIMAX-VALUE of SUCCESSORS(state)]
  else return [the lowest MINIMAX-VALUE of SUCCESSORS(state)]
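The pseudocode above can be sketched in Python. This is a minimal illustration, not a production implementation: it assumes a game tree given as nested tuples, where a leaf is a number (its utility) and an internal node is a tuple of child subtrees, so TERMINAL?/UTILITY collapse into a numeric-leaf test.

```python
# Minimal minimax sketch over nested-tuple game trees:
# a leaf is a number (utility), an internal node a tuple of children.

def minimax_value(node, is_max):
    """Return the minimax value of `node`, with MAX to move iff is_max."""
    if isinstance(node, (int, float)):          # TERMINAL? -> UTILITY
        return node
    values = [minimax_value(child, not is_max) for child in node]
    return max(values) if is_max else min(values)

def minimax_decision(children):
    """Index of the successor with the best minimax value for MAX."""
    values = [minimax_value(c, is_max=False) for c in children]
    return max(range(len(children)), key=lambda i: values[i])
```

For the classic 2-ply example with leaf rows (3, 12, 8), (2, 4, 6), (14, 5, 2), the minimax value is 3 and MAX chooses the first move.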
Minimax properties
- Completeness: only if the tree is finite
- Optimality: yes, against an optimal opponent (and only if the tree is finite)
- Time complexity: O(b^m)
- Space complexity: O(bm) (depth-first exploration)
Minimax properties
For chess, b ≈ 35 and m ≈ 100 (= 2 × 50 plies) for reasonable games.
Thus, an exact solution is completely infeasible.
Minimax properties Minimax separates node generation from node evaluation: 1. generates the tree 2. assigns values to the leaves 3. propagates them upwards
Resource limits
Suppose we have 100 seconds to decide and we can explore 10^4 nodes/sec: we can afford 10^6 nodes per move.
Standard approach:
- a cutoff test to prune a branch (e.g. a depth-limit cutoff)
- an evaluation function (EVAL): an estimate of the desirability of a position
Evaluation function
[Figure: two positions — "Black to move: White slightly better" and "White to move: Black winning"]
Chess: typically a linear weighted sum of features
Eval(s) = w_1 f_1(s) + w_2 f_2(s) + … + w_n f_n(s)
e.g. w_1 = 9 with f_1(s) = (number of white queens) - (number of black queens)
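The linear weighted sum can be sketched as follows. The features and weights here are only illustrative (material counts with the conventional piece values), not a tuned chess evaluation.

```python
# Illustrative linear evaluation: Eval(s) = sum_i w_i * f_i(s),
# where each feature f_i is (white count - black count) for a piece type.
# Weights are the conventional material values, chosen for illustration.

WEIGHTS = {"queens": 9, "rooks": 5, "bishops": 3, "knights": 3, "pawns": 1}

def evaluate(white_counts, black_counts):
    """Positive values favour White, negative favour Black."""
    return sum(w * (white_counts.get(f, 0) - black_counts.get(f, 0))
               for f, w in WEIGHTS.items())
```

For example, a position where White is a pawn up scores +1, while a queen against two rooks scores 9 - 10 = -1.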
Exact values don't matter: behaviour is preserved under any monotonic transformation of EVAL
Cutting off search
MINIMAX-CUTOFF is the same as MINIMAX-VALUE except that:
- TERMINAL? is replaced by CUTOFF?
- UTILITY is replaced by EVAL
Cutting off search
Does it work in practice? Without pruning: with b^m = 10^6 nodes and b = 35, we reach only m ≈ 4 plies.
A 4-ply lookahead means a hopeless chess player:
- 4-ply: human beginner
- 8-ply: typical PC, human master
- 12-ply: Deep Blue, Kasparov
Cutting off search The most straightforward approach to controlling the amount of search is to set a fixed depth limit The depth must be chosen so that the amount of time used will not exceed what rules of the game allow A slightly more robust approach is to apply iterative deepening: when the time runs out, the program returns the move selected by the deepest completed search
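The iterative-deepening idea can be sketched as below. `search_to_depth` is a hypothetical depth-limited search routine (e.g. minimax with a depth cutoff) returning its chosen move; only the result of the deepest *completed* search is kept.

```python
import time

# Sketch of time-bounded iterative deepening. `search_to_depth(d)` is a
# hypothetical depth-limited search returning the move chosen at depth d.

def iterative_deepening(search_to_depth, time_limit_s, max_depth=64):
    deadline = time.monotonic() + time_limit_s
    best = None
    for depth in range(1, max_depth + 1):
        if time.monotonic() >= deadline:
            break                      # keep the deepest completed result
        best = search_to_depth(depth)
    return best
```

A real engine would additionally abort a search in progress when the deadline passes; this sketch only checks the clock between depths.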
Cutting off search These approaches can have some disastrous consequences because of the approximate nature of the evaluation function Obviously, a more sophisticated cutoff test is needed In particular, the evaluation function should only be applied to positions that are quiescent, that is, unlikely to exhibit wild swings in value in the near future
Cutting off search This extra search is called a quiescence search: sometimes it is restricted to consider only certain types of moves, such as capture moves, that will quickly resolve the uncertainties in the position
Cutting off search Another problem, the horizon problem, is more difficult to eliminate It arises when the program is facing a move by the opponent that causes serious damage and is ultimately unavoidable (e.g., a pawn that can be promoted to queen by the opponent)
Cutting off search: alpha-beta pruning Fortunately, it is possible to compute the correct minimax decision without looking at every node in the search tree The process of eliminating a branch of the search tree is called pruning The standard pruning technique for minimax is called alpha-beta-pruning
Cutting off search: alpha-beta pruning
General principle: consider a node n somewhere in the tree, such that MAX (player 1) has the choice of moving to that node.
If MAX has a better choice m either at the parent node of n, or at any choice point further up, then n will never be reached in actual play.
So, once we have found out enough about n (by examining some of its descendants) to reach this conclusion, we can prune it.
Pruning does not affect the final result.
α- β pruning: an example
α- β pruning example
Why is it called α-β pruning?
α is the best value (to MAX) found so far on the current path.
If V is worse than α, MAX will avoid it: prune that branch.
β is defined similarly for MIN.
α-β pruning properties
With perfect move ordering, time complexity = O(b^(m/2)):
- doubles the achievable depth of search
- can easily reach depth 8 (good chess player)
The α-β algorithm
defun ALPHA-BETA-SEARCH(game, state) returns action
  max-value ← MAX-VALUE(game, state, -∞, +∞)
  for each <action, value> in SUCCESSORS(game, state) do
    if value = max-value then return action
The α-β algorithm
defun MAX-VALUE(game, state, α, β) returns value
  if CUTOFF?(state) then return EVAL-COST(game, state)
  for each <a, s> in SUCCESSORS(game, state) do
    α ← max(α, MIN-VALUE(game, s, α, β))
    if α >= β then return β
  return α

defun MIN-VALUE(game, state, α, β) returns value
  if CUTOFF?(state) then return EVAL-COST(game, state)
  for each <a, s> in SUCCESSORS(game, state) do
    β ← min(β, MAX-VALUE(game, s, α, β))
    if β <= α then return α
  return β
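A direct transcription of this pseudocode into Python, using the same nested-tuple trees as the earlier minimax sketch (a leaf is a number, an internal node a tuple of children), with CUTOFF?/EVAL-COST collapsed into the numeric-leaf test. The decision step tracks the best index directly instead of re-matching the returned value.

```python
import math

# Alpha-beta pruning over nested-tuple game trees.

def max_value(node, alpha, beta):
    if isinstance(node, (int, float)):           # CUTOFF? -> EVAL-COST
        return node
    for child in node:
        alpha = max(alpha, min_value(child, alpha, beta))
        if alpha >= beta:
            return beta                          # prune: MIN never allows this
    return alpha

def min_value(node, alpha, beta):
    if isinstance(node, (int, float)):
        return node
    for child in node:
        beta = min(beta, max_value(child, alpha, beta))
        if beta <= alpha:
            return alpha                         # prune: MAX never chooses this
    return beta

def alphabeta_decision(children):
    """Index of the best successor for MAX, with pruning."""
    best_i, alpha = 0, -math.inf
    for i, child in enumerate(children):
        v = min_value(child, alpha, math.inf)
        if v > alpha:
            best_i, alpha = i, v
    return best_i
```

On the standard 2-ply example this returns the same decision as plain minimax, while skipping some leaves of the second and third branches.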
Deterministic games: state of the art
Checkers: Chinook ended the 40-year reign of human world champion Marion Tinsley in 1994. It used an endgame database defining perfect play for all positions involving 8 or fewer pieces on the board, a total of 443,748,401,247 positions.
Chess: Deep Blue defeated human world champion Garry Kasparov in a six-game match in 1997. Deep Blue searched 200 million positions per second, used a very sophisticated evaluation function, and undisclosed methods for extending some lines of search up to 40 ply.
Deterministic games: state of the art Othello human champions refuse to compete against computers, which are too good Go human champions refuse to compete against computers, which are too bad. In go, b > 300, so most programs use pattern knowledge bases to suggest plausible moves
Non-deterministic games
Non-deterministic games
In non-deterministic games, chance is introduced by dice, card-shuffling, etc.
A non-deterministic game tree must also include chance nodes, in addition to MAX and MIN nodes.
Non-deterministic games Simplified example with coin-flipping:
Algorithm for non-deterministic games
EXPECT-MINIMAX deals with chance nodes and gives perfect play (in the game-theoretic sense).
EXPECT-MINIMAX is just like MINIMAX, except that we must also handle chance nodes.
Algorithm for non-deterministic games Each of the possible positions no longer has a definite minimax value (which in deterministic games was the utility of the leaf reached by best play) Instead, we can only calculate an average or expected value where the average is taken over all the possible dice rolls that could occur
Non-deterministic games Let us get back to the coin-flipping example:
Minimax Algorithm for non-det. games
defun EXPECT-MINIMAX-DECISION(game, state) returns action
  value ← -∞
  for each <a, s> in SUCCESSORS(game, state) do
    v ← EXPECT-MINIMAX-VAL(game, s)
    if value < v then action ← a; value ← v
  return action
Minimax Algorithm for non-det. games
defun EXPECT-MINIMAX-VAL(game, state) returns payoff
  if TERMINAL?(state) then return UTILITY(state)
  if [state is a MAX node] then return [the highest EXPECT-MINIMAX-VAL of SUCCESSORS(state)]
  if [state is a MIN node] then return [the lowest EXPECT-MINIMAX-VAL of SUCCESSORS(state)]
  if [state is a CHANCE node] then return [the probability-weighted average of EXPECT-MINIMAX-VAL of SUCCESSORS(state)]
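The pseudocode above can be sketched in Python, assuming (as an illustration) that each tree node is tagged with its type: `("max", children)`, `("min", children)`, `("chance", [(probability, child), ...])`, or a bare number for a terminal node.

```python
# Expectiminimax over type-tagged trees: leaves are numbers; internal
# nodes are ("max", children), ("min", children), or
# ("chance", [(probability, child), ...]).

def expect_minimax_value(node):
    if isinstance(node, (int, float)):            # TERMINAL? -> UTILITY
        return node
    kind, children = node
    if kind == "max":
        return max(expect_minimax_value(c) for c in children)
    if kind == "min":
        return min(expect_minimax_value(c) for c in children)
    if kind == "chance":                          # probability-weighted average
        return sum(p * expect_minimax_value(c) for p, c in children)
    raise ValueError(f"unknown node type: {kind}")
```

For a fair coin flip leading either to a MIN node over leaves (2, 4) or to a MIN node over leaves (0, 6), the value is 0.5·2 + 0.5·0 = 1.0.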
DRAFT α-β algorithm for non-det. games
defun EXP-ALPHA-BETA-SEARCH(game, state) returns action
  max-value ← EXP-MAX-VALUE(game, state, -∞, +∞)
  for each <action, value> in SUCCESSORS(game, state) do
    if value = max-value then return action
DRAFT α-β algorithm for non-det. games
defun EXP-MAX-VALUE(game, state, α, β) returns value
  if CUTOFF?(state) then return EVAL-COST(game, state)
  for each <a, s> in SUCCESSORS(game, state) do
    α ← max(α, EXP-MIN-VALUE(game, s, α, β))
    if α >= β then return β
  return α

defun EXP-MIN-VALUE(game, state, α, β) returns value
  if CUTOFF?(state) then return EVAL-COST(game, state)
  for each <a, s> in SUCCESSORS(game, state) do
    β ← min(β, EXP-MAX-VALUE(game, s, α, β))
    if β <= α then return α
  return β
Summary Games are fun to work on! (and dangerous) They illustrate several important points about AI perfection is unattainable: must approximate good idea to think about what to think about uncertainty constrains the assignment of values to states Games are to AI as grand prix racing is to automobile design