Adversarial Search (a.k.a. Game Playing)

Adversarial Search (a.k.a. Game Playing) Chapter 5 (Adapted from Stuart Russell, Dan Klein, and others. Thanks guys!)

Outline
  Games
  Perfect play: principles of adversarial search
    Minimax decisions
    α-β pruning
    Move ordering
  Imperfect play: dealing with resource limits
    Cutting off search and approximate evaluation
  Stochastic games (games of chance)
  Partially observable games
  Card games

Games vs. search problems
Search in Ch. 3-4: single actor. Single-player scenario or game, e.g., Boggle. Brain teasers: one player against the game. Could be adversarial, but not directly as part of the game, e.g., "I can find more words than you."
Adversarial game: an unpredictable opponent shares control of the state. A solution is a strategy, specifying a move for every possible opponent response. Time limits: unlikely to find the goal, so we must find the optimal move with an incomplete search. Major penalty for inefficiency (you get your clock cleaned). Most commonly: zero-sum games. My gain is your loss.
Adversarial gaming has a deep history in computational thinking:
  Computer considers possible lines of play (Babbage, 1846)
  Algorithm for perfect play (Zermelo, 1912; von Neumann, 1944)
  Finite horizon, approximate evaluation (Zuse, 1945; Wiener, 1948; Shannon, 1950)
  First chess program (Turing, 1951)
  Machine learning to improve evaluation accuracy (Samuel, 1952-57)
  Pruning to allow deeper search (McCarthy, 1956)
  Plus an explosion of more modern results...

Types of Games

                        deterministic                      chance
  perfect information   chess, checkers, go, othello,      backgammon, monopoly,
                        connect-4, tic-tac-toe             Chutes-n-Ladders
  imperfect information Battleship, blind tic-tac-toe,     bridge, poker, scrabble,
                        Kriegspiel                         nuclear war

Access to information:
  Perfect info: fully observable. Both players see the whole board, all of the time.
  Imperfect info: not/partially observable. Blind or partial knowledge of the board.
Determinism:
  Deterministic: no element of chance. Players have 100% control over the actions taken in the game.
  Chance: some element of chance: die rolls, card dealing, etc.

Game tree (2-player, deterministic, turns)
[Figure: the tic-tac-toe game tree. MAX (X) moves at the root, MIN (O) replies at the next ply, and play alternates down to TERMINAL boards with utility -1, 0, or +1.]
Pondering game tree size...
  Tic-tac-toe (3x3): small. 9! = 362,880 terminal nodes.
  Chess: ~10^40 terminal nodes! Could never generate the whole tree!

Minimax Search
Normal search: solution = a sequence of actions leading to the goal. Adversarial search: the opponent interferes at every step! Solution = a contingent plan of action. Finds the optimal solution to the goal, assuming the opponent makes optimal counter-plays. Essentially an AND-OR tree (Ch. 4): the opponent provides the non-determinism.
Perfect play for deterministic, perfect-information games. Idea: choose the move to the position with the highest minimax value.
[Figure: a 2-ply example. MAX chooses among moves a1, a2, a3; MIN replies beneath each. Leaves 3, 12, 8 / 2, 4, 6 / 14, 5, 2 give the MIN nodes values 3, 2, 2, so MAX's root value is 3, reached via a1.]

Minimax algorithm

function Minimax-Decision(state) returns an action
  inputs: state, current state in game
  return the a in Actions(state) maximizing Min-Value(Result(a, state))

function Max-Value(state) returns a utility value
  if Terminal-Test(state) then return Utility(state)
  v ← -∞
  for a, s in Successors(state) do v ← Max(v, Min-Value(s))
  return v

function Min-Value(state) returns a utility value
  if Terminal-Test(state) then return Utility(state)
  v ← +∞
  for a, s in Successors(state) do v ← Min(v, Max-Value(s))
  return v

Minimax: Reflection
Need to understand how minimax works! It is a recursive depth-first algorithm: Max-Value at one level calls Min-Value at the next, which calls Max-Value at the next. Base case: it hits a terminal state, meaning the game is over and has a known score (for max). Scores are backed up through the tree on recursive return: as each node fully explores its children, it passes its value back. The score arriving back at the root shows which move the current player (max) should make: the move that maximizes the outcome, assuming optimal play by the opponent. (A runnable sketch follows below.)
Multi-player games? We don't have just Max and Min; we have a whole set of players A, B, C, etc. Calculate a utility vector of scores at each level/node, containing the node (board position) value for each player. The value of a node is the utility vector that maximizes the benefit for the player whose move it is.
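To make the recursion concrete, here is a minimal runnable sketch in Python. It is not from the slides: the game interface used below (actions, result, is_terminal, utility) is an illustrative assumption standing in for whatever your game implementation provides.

  # Minimal minimax sketch. The `game` object is assumed to expose:
  # actions(state), result(state, action), is_terminal(state), and
  # utility(state) (score from MAX's point of view).

  def minimax_decision(game, state):
      # Return the action for MAX with the highest minimax value.
      return max(game.actions(state),
                 key=lambda a: min_value(game, game.result(state, a)))

  def max_value(game, state):
      if game.is_terminal(state):
          return game.utility(state)    # base case: known score for MAX
      v = float('-inf')
      for a in game.actions(state):
          v = max(v, min_value(game, game.result(state, a)))
      return v                          # backed up on recursive return

  def min_value(game, state):
      if game.is_terminal(state):
          return game.utility(state)
      v = float('inf')
      for a in game.actions(state):
          v = min(v, max_value(game, game.result(state, a)))
      return v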

Properties of minimax search
Complete?? Yes, if the tree is finite (chess has specific rules to ensure this). Minimax performs a complete depth-first exploration of the game tree.
Optimal?? Yes, against an optimal opponent. Otherwise??
Time complexity?? O(b^m)
Space complexity?? O(bm) (depth-first exploration; m is tree depth)
Practical analysis: for chess, b ≈ 35 and m ≈ 100 (moves) for reasonable games. The time cost is far out of range of the standard 3 minutes per move, so an exact solution is completely infeasible! Engage cleverness: do we really need to explore every path in the tree?

Alpha-Beta (α-β) pruning
DFS plunges down the tree to a terminal state fast, so it knows about one complete branch first. Can we use this to avoid searching later branches?
[Figure: the start of the 2-ply example tree. The first MIN node has been fully explored: leaves 3, 12, 8 give it value 3, so MAX's root value is at least 3.]

α-β pruning example
[Figure, continued: at the second MIN node, the first leaf is 2, so that node's value is ≤ 2. It can never beat the 3 that MAX already has, so its remaining children are pruned (marked X X).]

α-β pruning example
[Figure, continued: at the third MIN node, the first leaf is 14, so its value is ≤ 14. That could still beat 3, so the search must continue.]

α-β pruning example
[Figure, continued: the next leaf is 5, tightening the third MIN node to ≤ 5. Still possibly better than 3, so the search continues.]

α-β pruning example
[Figure, concluded: the final leaf is 2, so the third MIN node's value is 2. MAX's root value is therefore 3.]
Observant questions: what exactly is it that allowed pruning at the ≤ 2 node? Why no pruning at the sibling to its right? More on this shortly...

α-β: Reflection on behavior
[Figure: α is set/updated as the first branch is explored, then sent down subsequent branches to prune with.]
α-β maintains two boundary values as it moves up/down the tree:
  α is the best value (to max) found so far off the current path.
  β works similarly for min: the best value found so far at min's choice points.
Example: if a node's value V is worse than α, max will avoid it, pruning that branch.

The α-β algorithm

function Alpha-Beta-Decision(state) returns an action
  return the a in Actions(state) maximizing Min-Value(Result(a, state))

function Max-Value(state, α, β) returns a utility value
  inputs: state, current state in game
          α, the value of the best alternative for max along the path to state
          β, the value of the best alternative for min along the path to state
  if Terminal-Test(state) then return Utility(state)
  v ← -∞
  for a, s in Successors(state) do
    v ← Max(v, Min-Value(s, α, β))
    if v ≥ β then return v
    α ← Max(α, v)
  return v

function Min-Value(state, α, β) returns a utility value
  same as Max-Value but with the roles of α, β reversed
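As a hedged follow-up to the earlier minimax sketch, here is the same assumed Python game interface with α-β bookkeeping added; pruning fires wherever a node's value crosses the bound its ancestors already guarantee.

  def alpha_beta_decision(game, state):
      # Choose MAX's move; selects the same move value as plain minimax.
      best_a, best_v = None, float('-inf')
      for a in game.actions(state):
          v = ab_min(game, game.result(state, a), best_v, float('inf'))
          if v > best_v:
              best_a, best_v = a, v
      return best_a

  def ab_max(game, state, alpha, beta):
      if game.is_terminal(state):
          return game.utility(state)
      v = float('-inf')
      for a in game.actions(state):
          v = max(v, ab_min(game, game.result(state, a), alpha, beta))
          if v >= beta:            # MIN above would never let play reach here
              return v             # prune the remaining siblings
          alpha = max(alpha, v)
      return v

  def ab_min(game, state, alpha, beta):
      if game.is_terminal(state):
          return game.utility(state)
      v = float('inf')
      for a in game.actions(state):
          v = min(v, ab_max(game, game.result(state, a), alpha, beta))
          if v <= alpha:           # MAX above already has something better
              return v
          beta = min(beta, v)
      return v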

Properties of α-β
α-β observations: pruning is zero-loss; the final outcome is the same as without pruning. It is a great example of meta-reasoning = reasoning about the computational process itself. Here: reasoning about which computations could possibly be relevant (or not). This is key to high efficiency in AI programming.
Effectiveness depends hugely on which path (moves) you examine first. The earlier example's question: why prune in the middle subtree but not in the rightmost one? The middle subtree examined its highest-value (for max) node first!
Analysis: chess has an average branching factor around 35. Pruning removes branches (whole subtrees), giving an effective branching factor of about 28. A substantial reduction. Unfortunately, 28^50 is still impossible to search in reasonable time!

Move ordering to improve α-β efficacy
Plan: at any ply, examine higher-value (to max) siblings first. This sets the α value tightly, making it more likely to prune subsequent branches. (A small ordering sketch follows below.)
Strategies:
  Static: prioritize higher-value moves like captures, forward moves, etc.
  Dynamic: prioritize moves that have been good in the past.
  Use IDS: searches to depth n reveal high-value moves for subsequent re-searches at depth > n.
Stats:
  Minimax search = O(b^m)
  α-β with random ordering ≈ O(b^(3m/4)), a nice reduction
  α-β with strong move ordering ≈ O(b^(m/2))
  This effectively reduces the branching factor from 35 to 6 in chess! Can search twice as deep in the same time!
More power: transpositions. Some move chains are transpositions of each other: (a→b, then d→e) gives the same board as (d→e, then a→b). Identify these and only compute each position once: this can double the reachable depth again!
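A minimal sketch of static move ordering, assuming hypothetical helpers is_capture and material_gain on the game object (illustrative names, not from the slides). Substituting ordered_actions(game, state) for game.actions(state) in the α-β loops above tightens α earlier and prunes more.

  def ordered_actions(game, state):
      # Try likely-strong moves (captures, biggest material swings) first.
      # `is_capture` and `material_gain` are assumed helpers, for illustration.
      return sorted(game.actions(state),
                    key=lambda a: (game.is_capture(state, a),
                                   game.material_gain(state, a)),
                    reverse=True)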

Imperfect Game Play
Reality check: thus far, minimax assumes we can search down to the bottom of the tree. Not realistic: minimax is O(b^m). Chess ≈ 50 moves/game with b ≈ 35, so O(35^50)... or, with theoretically best α-β move ordering, O(6^50). Huge!
Plan: search as deep as time allows. Terminal-Test() becomes Cutoff-Test(): Cutoff-Test(s) decides whether we should stop searching at that state/level. If true: apply an evaluation function and return the value of that board.
When to cut off search? The Fred Flintstone static approach: just always cut off search at some depth d. Problem: this leaves valuable time on the table, since the reachable depth within the time limit varies depending on the board, the number of pieces, etc. Solution: use IDS. Search until time is up, then return the result from the latest completed search. Bonus: use info from previous IDS runs to optimize α-β move ordering. (A sketch follows below.)
Problem: the horizon effect = something bad could happen just beyond the search limit. Solution: add a quiescence metric; never cut off search in the middle of heavy action.
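A sketch of the plan under stated assumptions: depth-limited α-β with a cutoff test and an assumed game.evaluate(state) heuristic (illustrative name, not defined in the slides), wrapped in iterative deepening that keeps the move from the last completed pass. A real engine would also abort mid-pass when the clock expires.

  import time

  def dl_value(game, state, depth, limit, alpha, beta, maximizing):
      if game.is_terminal(state):
          return game.utility(state)
      if depth >= limit:                # cutoff fired: estimate, don't search
          return game.evaluate(state)   # assumed heuristic evaluation function
      if maximizing:
          v = float('-inf')
          for a in game.actions(state):
              v = max(v, dl_value(game, game.result(state, a),
                                  depth + 1, limit, alpha, beta, False))
              if v >= beta:
                  return v
              alpha = max(alpha, v)
          return v
      v = float('inf')
      for a in game.actions(state):
          v = min(v, dl_value(game, game.result(state, a),
                              depth + 1, limit, alpha, beta, True))
          if v <= alpha:
              return v
          beta = min(beta, v)
      return v

  def id_decision(game, state, time_budget=180.0):
      # Deepen one ply at a time within the budget (~3 minutes, echoing the
      # time control above); return the move from the last finished pass.
      deadline = time.monotonic() + time_budget
      best, limit = None, 1
      while time.monotonic() < deadline:
          best = max(game.actions(state),
                     key=lambda a: dl_value(game, game.result(state, a), 1,
                                            limit, float('-inf'),
                                            float('inf'), False))
          limit += 1
      return best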

Advanced Techniques: when winning matters
Idea 1: find ways to search deeper.
  Efficiency: efficient board representation, faster evaluation functions, etc.
  Better pruning: maximize the efficacy of the move-ordering subsystem.
  Forward pruning: cut off uninteresting branches of the search tree early. α-β prunes nodes that are provably useless, so it is loss-less; forward pruning guesses, pruning nodes that are probably useless. Danger: it could prune away moves that ultimately lead to wins! Strategy: a shallow search gets a rough node value, and stored info estimates the likely utility.
Idea 2: a more sophisticated evaluation function. (A sketch follows below.)
  A linear weighted function assumes features contribute independently and statically. But often it's the combination of pieces that counts, and more so at some points in the game than others. E.g., a bishop pair is worth more than two lone bishops, especially in the end-game. Non-linear weighted functions allow more subtle tuning. Machine learning can also be used to adjust the weights from experience.
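A minimal sketch of a linear weighted material evaluation plus one non-linear term for the bishop pair. The state.count(piece, side) accessor, the weights, and the pair bonus are all illustrative assumptions, not tuned values.

  PIECE_WEIGHTS = {'pawn': 1, 'knight': 3, 'bishop': 3, 'rook': 5, 'queen': 9}

  def evaluate(state):
      # Linear part: sum of weight * (max's count - min's count) per piece type.
      score = sum(w * (state.count(p, 'max') - state.count(p, 'min'))
                  for p, w in PIECE_WEIGHTS.items())
      # Non-linear part: a bishop *pair* is worth a bonus beyond two lone
      # bishops (assumed bonus of 0.5 pawns, for illustration).
      if state.count('bishop', 'max') >= 2:
          score += 0.5
      if state.count('bishop', 'min') >= 2:
          score -= 0.5
      return score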

Advanced Techniques: when winning matters
Idea 3: avoid search completely when you can.
  In many games there are certain rote phases, e.g., chess has whole libraries of books about standard openings and end games. Why search down through billions of boards? Look it up! Can just store and look up moves for standard situations: enter them from books and other human knowledge, or calculate stats on a DB of previously played games to see which openings won most.
  Computers can have an advantage over humans here! A human has a general strategy for certain endgames: king-rook-king (KRK), king-bishop-knight-king (KBNK), etc. A computer, with so few pieces on the board, can literally compute the winning move sequence for all possible KRK endings, etc. The computer recognizes a pre-computed sequence and plays a perfect, deterministic endgame!

History: Deterministic Games in Practice
Checkers: Chinook ended the 40-year reign of human world champion Marion Tinsley in 1994. It used an endgame database defining perfect play for all positions involving 8 or fewer pieces on the board, a total of 443,748,401,247 positions.
Chess: Deep Blue defeated human world champion Garry Kasparov in a six-game match in 1997. Deep Blue searched 200 million positions per second, used a very sophisticated evaluation, and undisclosed methods for extending some lines of search up to 40 ply.
Othello: human champions refuse to compete against computers, which are too good.
Go: in 2005, human champions refused to compete against computers, which were too bad. In go, b > 300, so most programs used pattern knowledge bases to suggest plausible moves. Then DeepMind's AlphaGo defeated top professional Lee Sedol 4-1 in 2016, and in 2017, after anonymously winning a long unbeaten streak of online games as "Master," it beat the world's top-ranked player.

Stochastic (non-deterministic) games
Backgammon: the player at turn rolls the dice and can now move, say, one piece 5 places and another piece 6 places. A combination of luck and skill: strategy must account for the roll of the dice (random chance), plus the other player! Since the dice determine the possible moves, we can't construct a standard game tree!

Non-deterministic Games
Chance is introduced by dice, card shuffling/dealing, and drawing cards. Minimax becomes Expectiminimax. Chance essentially acts as another player: the value of a chance node is the sum of its possible outcomes, each weighted by its probability of happening.
[Figure: a simplified example with a coin-flipping move inserted into some game. MIN nodes take the minimum of their leaves; each CHANCE node averages two MIN values with probability 0.5 each, yielding chance values 3 and 1; MAX picks 3.]

Expectiminimax Algorithm
Expectiminimax produces perfect play, meaning the best possible play given the stochastic probabilities involved. It is just like minimax, except we must also handle chance nodes:

  if Terminal-Test(state) then return Evaluation-Fn(state)
  if state is a max node then return the highest Expectiminimax-Value of Successors(state)
  if state is a min node then return the lowest Expectiminimax-Value of Successors(state)
  if state is a chance node then return the sum of the probability-weighted Expectiminimax-Values of Successors(state)

Dice rolls increase b: there are 21 possible rolls with 2 dice. Backgammon has about 20 legal moves (though it can be 6,000 with a 1-1 roll). Depth 4 = 20 × (21 × 20)^3 ≈ 1.2 × 10^9 nodes.
Thus: as depth increases, the probability of reaching a given node shrinks, so the value of lookahead is diminished, and α-β pruning is much less effective (because chance makes pruning less common). TD-Gammon: depth-2 search + a very good evaluation function = world-champion level. (A sketch follows below.)
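A hedged sketch of the chance-node case, extending the earlier assumed Python game interface with to_move(state) and chance_outcomes(state), the latter returning (probability, successor) pairs; both names are illustrative.

  def expectiminimax(game, state):
      if game.is_terminal(state):
          return game.utility(state)
      mover = game.to_move(state)
      if mover == 'max':
          return max(expectiminimax(game, game.result(state, a))
                     for a in game.actions(state))
      if mover == 'min':
          return min(expectiminimax(game, game.result(state, a))
                     for a in game.actions(state))
      # Chance node: probability-weighted sum over outcomes (e.g., dice rolls).
      return sum(p * expectiminimax(game, s)
                 for p, s in game.chance_outcomes(state))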

Games of imperfect information
E.g., card games, where the opponent's initial cards are unknown. Typically we can calculate a probability for each possible deal, which seems just like having one big dice roll at the beginning of the game. Idea: compute the minimax value of each action in each deal, then choose the action with the highest expected value over all deals. Special case: if an action is optimal for all deals, it's optimal. GIB, the best current bridge program, approximates this idea by 1) generating 100 deals consistent with the bidding information and 2) picking the action that wins the most tricks on average.

Partially Observable Games
So far: fully observable games, where all players can see all functional pieces (state) of the game at all times. But many games are fun precisely because of imperfect information: players see only part (or none) of the opponent's state. E.g., poker and similar card games, Battleship, etc.
Example: Kriegspiel, blind chess! White and Black each see only a board containing their own pieces. On turn: a player proposes a move, and the referee announces legal/illegal; if legal: "capture on square X," "check by <direction>," checkmate, or stalemate.
Plan: use the belief states developed in Ch. 4! Referee feedback = percepts that update/prune the belief states. (A sketch follows below.) All belief states are NOT equally likely: we can calculate probabilities on belief states based on predicting optimum play by the opponent. Implication: it is best to add some randomness to your play; be unpredictable!
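A minimal sketch of the belief-state update, assuming a hypothetical consistent_with(board, move, feedback) predicate (not from the slides): the referee's announcement acts as a percept that simply filters the set of boards we still consider possible.

  def update_belief(belief_states, move, referee_feedback):
      # Keep only the boards consistent with what the referee announced.
      # `consistent_with` is an assumed predicate, for illustration.
      return [b for b in belief_states
              if consistent_with(b, move, referee_feedback)]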

Card Games
Stochastic partial observability: cards are dealt randomly at the beginning of the game, play is deterministic after that, and the odds (probability) of the possible hands are easily calculated. E.g., bridge, whist, hearts, some forms of poker.
Plan: probabilistic weighted search. Generate all possible deals of the (missing) cards, solve each one just like a fully observable game (minimax), weight each outcome by the probability of that hand being dealt, and choose the move that has the best outcome averaged over all possible deals.
Reality check: in bridge there are 10+ million possible deals of the unseen cards. Can't explore them all! Idea: a Monte Carlo approach: solve a random sample of deals, with the sample set weighted to include the more likely hands. Bidding may add valuable info on hands, changing the probabilities. GIB, the leading bridge program, generates 100 deals consistent with the bidding. (A sketch follows below.)
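A sketch of the Monte Carlo plan under stated assumptions: deal_consistent_with samples hidden hands (weighted by bidding constraints) and solve stands in for a full minimax solve of one observable deal; both names are illustrative, not real library calls.

  def monte_carlo_move(my_hand, constraints, legal_moves, n_samples=100):
      # Average each move's minimax value over sampled deals of the unseen
      # cards; `deal_consistent_with` and `solve` are assumed helpers.
      totals = {m: 0.0 for m in legal_moves}
      for _ in range(n_samples):
          deal = deal_consistent_with(my_hand, constraints)  # weighted sample
          for m in legal_moves:
              totals[m] += solve(deal, m)   # minimax value of m in this deal
      return max(totals, key=totals.get)    # best average outcome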

Summary
Games are just specialized search problems. Modifications:
  Minimax (plus α-β pruning) to model the opponent player
  Stochastic choice layers in the tree to model chance
  Belief-state management to model partial observability
Games illustrate several important points about AI:
  Perfection is unattainable; in reality we must approximate.
  It's a good idea to think about what to think about: meta-level analysis, as in the considerations leading to α-β pruning.
  Uncertainty constrains the assignment of values to states: it increases the effective branching factor and can make pruning less effective.
  Optimal decisions depend on the information state, not the real state, as illustrated in partially observable games, where the belief state is what matters.
Games are to AI as grand prix racing is to automobile design: a proving ground for hardware, data structures, algorithms, and cleverness.
