CSC384: Intro to Artificial Intelligence. Game Tree Search.

Sections 6.1, 6.2, 6.3, and 6.6 cover some of the material we cover here. Section 6.6 has an interesting overview of state-of-the-art game-playing programs. Section 6.5 extends the ideas to games with uncertainty (we won't cover that material, but it makes for interesting reading).

Generalizing Search Problems

So far, our search problems have assumed that the agent has complete control of the environment: the state does not change unless the agent (robot) changes it, so all we need to compute is a single path to a goal state. That assumption is not always reasonable: the environment may be stochastic (e.g., the weather, traffic accidents), and there may be other agents whose interests conflict with yours. The problem: you might not traverse the path you were expecting.

In these cases, we need to generalize our view of search to handle state changes that are not under the control of the agent. One generalization yields game tree search: our agent and some other agents. The other agents act to maximize their own profits, which might not have a positive effect on your profits.

Two-person Zero-Sum Games

Examples of two-person, zero-sum games: chess, checkers, tic-tac-toe, backgammon, go, Doom, finding the last parking space. Your winning means that your opponent loses, and vice versa. Zero-sum means that the sum of your payoff and your opponent's payoff is zero: anything you gain comes at your opponent's cost (and vice versa). The key insight: how you act depends on how the other agent acts (or how you think they will act), and vice versa (if your opponent is a rational player).

More General Games

What makes something a game? There are two (or more) agents influencing state change, and each agent has its own interests: e.g., the goal states are different, or we assign different values to different paths/states. Each agent tries to alter the state so as to best benefit itself.

What makes games hard? How you should play depends on how you think the other person will play; but how they play depends on how they think you will play; so how you should play depends on how you think they think you will play; but how they play should depend on how they think you think they think you will play; and so on.

Zero-sum games are fully competitive: if one player wins, the other player loses. E.g., the amount of money I win (lose) at poker is the amount of money you lose (win). More general games can be cooperative: some outcomes are preferred by both of us, or at least our values aren't diametrically opposed. We'll look in detail at zero-sum games, but first, some examples of simple zero-sum and cooperative games.

Game 1: Rock, Paper, Scissors

Scissors cut paper, paper covers rock, rock smashes scissors. The game is represented as a matrix: Player I chooses a row, Player II chooses a column, and each cell gives the payoff to each player (P.I / P.II), with 1 = win, 0 = tie, -1 = loss, so it's zero-sum.

                    Player II
                 R       P       S
  Player I  R   0/0    -1/1     1/-1
            P   1/-1    0/0    -1/1
            S  -1/1     1/-1    0/0
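To make the zero-sum property concrete, here is a minimal Prolog sketch (not from the slides) of the matrix above:

  % payoff(MoveI, MoveII, PayoffToI, PayoffToII)
  payoff(r, r,  0,  0).   payoff(r, p, -1,  1).   payoff(r, s,  1, -1).
  payoff(p, r,  1, -1).   payoff(p, p,  0,  0).   payoff(p, s, -1,  1).
  payoff(s, r, -1,  1).   payoff(s, p,  1, -1).   payoff(s, s,  0,  0).

  % the game is zero-sum iff the payoffs in every cell sum to zero
  zero_sum :- forall(payoff(_, _, U1, U2), U1 + U2 =:= 0).

The query ?- zero_sum. succeeds, confirming that every cell of the matrix sums to zero.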

Game 2: Prisoner's Dilemma

Two prisoners are in separate cells, and the DA doesn't have enough evidence to convict them. If one confesses and the other doesn't, the confessor goes free and the other is sentenced to 4 years. If both confess (both defect), both are sentenced to 3 years. If neither confesses (both cooperate), both are sentenced to 1 year on a minor charge. Payoff: 4 minus the sentence.

            Coop    Def
    Coop    3/3     0/4
    Def     4/0     1/1

Game 3: Battlebots

Two robots, Blue (Steven's) and Red (Michael's); one cup of coffee and one tea are left. Both S and M prefer coffee (value 10), but tea is acceptable (value 8). If both robots go for coffee, they collide and get no payoff. If both go for tea, the same happens. If one goes for coffee and the other for tea, the coffee robot gets 10 and the tea robot gets 8.

            C       T
    C       0/0     10/8
    T       8/10    0/0

Two-Player Zero-Sum Games

The key point of the previous games: what you should do depends on what the other player does. The previous games are simple one-shot games, with a single move each; in game theory these are called strategic or normal form games. Many games extend over multiple moves, e.g., chess, checkers, etc.; in game theory these are called extensive form games. We'll focus on the extensive form, since that's where the computational questions emerge.

Two-Player, Zero-Sum Game: Definition
- two players A (Max) and B (Min)
- a set of positions P (states of the game)
- a starting position s ∈ P (where the game begins)
- terminal positions T ⊆ P (where the game can end)
- a set of directed edges E_A between states (A's moves)
- a set of directed edges E_B between states (B's moves)
- a utility or payoff function U : T → R (how good each terminal state is for player A)

Why don't we need a utility function for B?
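The definition maps directly onto Prolog facts. Here is a minimal sketch (not from the slides) using an invented toy game:

  start(s0).                               % starting position s in P
  edge_a(s0, s1).   edge_a(s0, s2).        % E_A: A's (Max's) moves
  edge_b(s1, t1).   edge_b(s1, t2).        % E_B: B's (Min's) moves
  edge_b(s2, t3).   edge_b(s2, t4).
  terminal(t1). terminal(t2). terminal(t3). terminal(t4).   % T: where play can end
  val(t1, 1).  val(t2, -1).  val(t3, 0).  val(t4, 2).       % U : T -> R, payoffs to A

As for the closing question: we don't need a separate utility function for B because the game is zero-sum, so B's payoff at a terminal t is just -U(t).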

Intuitions

Players alternate moves (starting with Max), and the game ends when some p ∈ T is reached. A game state is a position-player pair: it tells us what position we're in and whose move it is. The utility function and terminal positions replace goals: Max wants to maximize the payoff, and Min wants to minimize it. Think of it as: Max gets U(t) and Min gets -U(t) for terminal node t. This is why it's called zero (or constant) sum.

Tic-tac-toe: States and Game Tree

[The slides show example tic-tac-toe positions: the start state, states where Turn=Max and Turn=Min alternate, and terminal states with U = +1 and U = -1, followed by the corresponding game tree with alternating Max and Min layers.]

Game Tree

A game tree looks like a search tree: layers reflect the alternating moves. But Max doesn't decide where to go alone: after Max moves to state a, Min decides whether to move to state b, c, or d. Thus Max must have a strategy: Max must know what to do next no matter what move Min makes (b, c, or d). A sequence of moves will not suffice, since Max may want to do something different in response to b, c, or d. What is a reasonable strategy?
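As a concrete illustration of the position-player view of a state, here is a minimal Prolog sketch (not from the slides) for tic-tac-toe, assuming a board is a 9-element list in which e marks an empty square:

  % a game state is a position-player pair: state(Board, Player)
  initial(state([e,e,e,e,e,e,e,e,e], max)).

  mark(max, x).       % Max plays X
  mark(min, o).       % Min plays O
  other(max, min).
  other(min, max).

  % move(+State, -Successor): put the mover's mark on one empty
  % square and hand the turn to the other player.
  move(state(Board, P), state(Board1, Q)) :-
      mark(P, M),
      other(P, Q),
      select(e, Board, M, Board1).    % select/4 swaps one e for M

The query ?- initial(S), move(S, S1). enumerates Max's nine possible opening moves on backtracking.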

Minimax Strategy: Intuitions

[The slides show a two-level game tree: a max node at the root with min-node children s1, s2, s3; s1 has terminal children t1, t2, t3 with utilities 7, -6, 4; s2 has terminal children t4, t5 with utilities 3, 9; s3 has terminal children t6, t7 with utilities -10, 2.]

The terminal nodes have utilities, but we can compute a utility for the non-terminal states too, by assuming both players always play their best move:
- If Max goes to s1, Min goes to t2: U(s1) = min{U(t1), U(t2), U(t3)} = -6
- If Max goes to s2, Min goes to t4: U(s2) = min{U(t4), U(t5)} = 3
- If Max goes to s3, Min goes to t6: U(s3) = min{U(t6), U(t7)} = -10
So Max goes to s2: U(start) = max{U(s1), U(s2), U(s3)} = 3.

Minimax Strategy

Build the full game tree (all leaves are terminals): the root is the start state, edges are possible moves, etc., and the terminal nodes are labeled with utilities. Then back values up the tree:
- U(t) is defined for all terminals t (part of the input)
- U(n) = min {U(c) : c a child of n} if n is a min node
- U(n) = max {U(c) : c a child of n} if n is a max node

The values labeling each state are the values that Max will achieve in that state if both Max and Min play their best moves: Max plays a move to change the state to the highest-valued min child, and Min plays a move to change the state to the lowest-valued max child. If Min plays poorly, Max could do better, but never worse. If Max knows that Min will play poorly, however, there might be a better strategy of play for Max than minimax!

Depth-first Implementation of MinMax

  utility(N,U) :- terminal(N), val(N,U).

  utility(N,U) :- maxmove(N), children(N,CList),
                  utilitylist(CList,UList), max_list(UList,U).

  utility(N,U) :- minmove(N), children(N,CList),
                  utilitylist(CList,UList), min_list(UList,U).

This is a depth-first evaluation of the game tree. terminal(N) holds if the state (node) N is a terminal node; similarly, maxmove(N) holds if it is the Max player's move at N, and minmove(N) if it is the Min player's move. The utility of terminals is specified as part of the input (val). max_list/2 and min_list/2 (standard SWI-Prolog list predicates) pick the maximum and minimum of a list of utilities.

  utilitylist([],[]).
  utilitylist([N|R],[U|UList]) :- utility(N,U), utilitylist(R,UList).

utilitylist simply computes a list of utilities, one for each node on the input list. The way Prolog executes implies that this computes utilities using a depth-first post-order traversal of the game tree (post-order: children are visited before their parents).

Visualization of DF-MinMax

Notice that the game tree has to have finite depth for this to work. The advantage of the DF implementation is space efficiency: once s17 is evaluated, there is no need to store its subtree, since s16 only needs its value; and once s24's value is computed, we can evaluate s16.

[The slides show an example tree rooted at s1, with internal nodes s2, s6, s7, s10, s13, s16, s17, s18, s21, s24 and terminal nodes t3, t4, t5, t8, t9, t11, t12, t14, t15, t19, t20, t22, t23, t25, t26, annotated with the depth-first evaluation order.]
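To run utility/2 on the example tree from the minimax intuitions above, the input facts would look like this (a sketch, using the predicate names assumed by the code):

  maxmove(root).
  minmove(s1).  minmove(s2).  minmove(s3).

  children(root, [s1,s2,s3]).
  children(s1, [t1,t2,t3]).
  children(s2, [t4,t5]).
  children(s3, [t6,t7]).

  terminal(t1).  terminal(t2).  terminal(t3).  terminal(t4).
  terminal(t5).  terminal(t6).  terminal(t7).

  val(t1,7).  val(t2,-6).  val(t3,4).
  val(t4,3).  val(t5,9).   val(t6,-10).  val(t7,2).

The query ?- utility(root, U). then answers U = 3, matching the hand computation on the intuitions slide.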

Pruning

It is not necessary to examine the entire tree to make a correct minimax decision. Assume depth-first generation of the tree: after generating values for only some of n's children, we may find that n is never reached in a minimax strategy, so there is no need to generate or evaluate more children of n! There are two types of pruning (cuts): pruning of max nodes (α-cuts), where α is the best value for Max found so far, and pruning of min nodes (β-cuts), where β is the best value for Min found so far.

Cutting Max Nodes (Alpha Cuts)

At a Max node n: let β be the lowest value of n's siblings examined so far, i.e., siblings to the left of n that have already been searched (note β is fixed when evaluating n), and let α be the highest value of n's children examined so far (note α changes as the children of n are examined).

[The slides illustrate this on the earlier tree: when evaluating max node s6 under min node s1, only one sibling value is known (s2 = 5, so β = 5), and α takes the sequence of values 8, 10, 10 as s6's children T3 = 8, T4 = 10, T5 = 5 are explored.]

If α becomes ≥ β, we can stop expanding the children of n: Min will never choose to move from n's parent to n, since it would choose one of n's lower-valued siblings first. [In the slides' example, min node P has already-searched children s1 = 14, s2 = 12, s3 = 8, so β = 8 when its max-node child n is evaluated; n's children 2, 4, 9 drive α through 2, 4, 9, and once α = 9 ≥ β = 8 we can cut.]

Cutting Min Nodes (Beta Cuts)

At a Min node n: let β be the lowest value of n's children examined so far (β changes as the children of n are examined), and let α be the highest value of n's siblings examined so far (α fixed when evaluating n). [The slides show an example on the earlier tree with α = 10 fixed from a searched sibling while β is updated at the min node's children (β = 5, then β = 3).]

If β becomes ≤ α, we can stop expanding the children of n: Max will never choose to move from n's parent to n, since it would choose one of n's higher-valued siblings first. [In the slides' example, max node P has already-searched children valued 6, 2, 7, so α = 7 when its min-node child n is evaluated; n's children 9, 8, 3 drive β through 9, 8, 3, and once β = 3 ≤ α = 7 we can cut.]

Alpha-Beta Algorithm

Pseudo-code that associates a value with each node. The strategy is extracted by moving to the best-valued child (if you are player Max) at each step.

  Evaluate(startNode):   /* assume Max moves first */
    return MaxEval(startNode, -infinity, +infinity)

  MaxEval(node, alpha, beta):
    If terminal(node), return U(node)
    For each c in childlist(node):
      val := MinEval(c, alpha, beta)
      alpha := max(alpha, val)
      If alpha >= beta, return alpha
    Return alpha

  MinEval(node, alpha, beta):
    If terminal(node), return U(node)
    For each c in childlist(node):
      val := MaxEval(c, alpha, beta)
      beta := min(beta, val)
      If alpha >= beta, return beta
    Return beta

Rational Opponents

This all assumes that your opponent is rational, e.g., will choose moves that minimize your score. What if your opponent doesn't play rationally? Will it affect the quality of the outcome?

Storing your strategy is a potential issue: you must store decisions for each node you can reach by playing optimally. If your opponent has unique rational choices, this is a single branch through the game tree; if there are ties, the opponent could choose any one of the tied moves, so you must store a strategy for each such subtree. What if your opponent doesn't play rationally? Will your stored strategy still work?
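Here is one way to render this pseudocode as runnable Prolog, reusing the terminal/1, val/2 and children/2 facts from the minimax example above. This is a sketch, not the course's code; it is the fail-hard variant, in which a pruned node simply reports the bound that triggered the cut.

  evaluate(Start, U) :- maxeval(Start, -1000000, 1000000, U).  % stand-ins for -/+ infinity

  maxeval(N, _, _, U) :- terminal(N), !, val(N, U).
  maxeval(N, Alpha, Beta, U) :-
      children(N, Cs), maxloop(Cs, Alpha, Beta, U).

  maxloop([], Alpha, _, Alpha).
  maxloop([C|Cs], Alpha, Beta, U) :-
      mineval(C, Alpha, Beta, V),
      Alpha1 is max(Alpha, V),
      ( Alpha1 >= Beta -> U = Alpha1             % alpha cut: skip remaining children
      ; maxloop(Cs, Alpha1, Beta, U) ).

  mineval(N, _, _, U) :- terminal(N), !, val(N, U).
  mineval(N, Alpha, Beta, U) :-
      children(N, Cs), minloop(Cs, Alpha, Beta, U).

  minloop([], _, Beta, Beta).
  minloop([C|Cs], Alpha, Beta, U) :-
      maxeval(C, Alpha, Beta, V),
      Beta1 is min(Beta, V),
      ( Alpha >= Beta1 -> U = Beta1              % beta cut: skip remaining children
      ; minloop(Cs, Alpha, Beta1, U) ).

On the example tree, ?- evaluate(root, U). answers U = 3 just like utility/2, but t7 is never evaluated: after t6 yields -10 at s3, Alpha = 3 >= Beta1 = -10 triggers the cut.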

Practical Matters

The efficiency of alpha-beta pruning depends on the order in which children are evaluated: on average, half of the children are pruned at each node (so the branching factor is effectively halved), giving O(b^(d/2)) nodes, so we can search twice as deep! Still, all real games are too large to expand the whole tree: e.g., the chess branching factor is roughly 35, so a depth-10 tree has about 2,700,000,000,000,000 nodes. Even alpha-beta pruning won't help here!

Depth-first expansion is almost always used for game trees because of the sheer size of the trees. We must limit the depth of the search tree: we can't expand all the way to terminal nodes, so we must make heuristic estimates about the values of the (non-terminal) states at the leaves of the tree. "Evaluation function" is the often-used term for such an estimate, and evaluation functions are often learned.

[The slides show the earlier example game tree cut off after 3 moves, with the depth-limit leaves evaluated heuristically instead of searched.]

Heuristics

Think of a few games and suggest some heuristics for estimating the goodness of a position: chess? checkers? your favorite video game? finding the last parking spot?

Some Interesting Games

Tesauro's TD-Gammon, a champion backgammon player, learned its evaluation function; backgammon has a stochastic component (dice). Checkers (Samuel, 1950s; Chinook, 1990s, Schaeffer). Chess (which you all know about). Bridge, Poker, etc. Check out Jonathan Schaeffer's Web page: www.cs.ualberta.ca/~games; they've studied lots of games (you can play too).
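A depth-limited variant of utility/2 makes the idea concrete: search to a fixed depth and fall back on an evaluation function at the cut-off leaves. This is a sketch, not the course's code, and eval/2 is a hypothetical per-game heuristic you would have to supply.

  % dl_utility(+Node, +Depth, -U): minimax value to depth Depth,
  % using eval/2 as the heuristic estimate at the depth limit.
  dl_utility(N, _, U) :- terminal(N), !, val(N, U).
  dl_utility(N, 0, U) :- !, eval(N, U).          % depth limit: heuristic estimate
  dl_utility(N, D, U) :-
      D > 0, D1 is D - 1,
      children(N, Cs),
      findall(V, (member(C, Cs), dl_utility(C, D1, V)), Vs),
      ( maxmove(N) -> max_list(Vs, U) ; min_list(Vs, U) ).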

An Aside on Large Search Problems

The issue of being unable to expand the tree to terminal nodes is relevant even in standard search: often we can't expect A* to reach a goal by expanding the full frontier, so we often limit our lookahead and make moves before we actually know the true path to the goal. This is sometimes called online or realtime search. In this case, we use the heuristic function not just to guide our search, but also to commit to moves we actually make. In general, guarantees of optimality are lost, but we reduce computational/memory expense dramatically.

Realtime Search Graphically

1. We run A* (or our favorite search algorithm) until we are forced to make a move or run out of memory. Note: no leaves are goals yet.
2. We use the evaluation function f(n) to decide which path looks best (let's say it is the red one).
3. We take the first step along the best path (red), by actually making that move.
4. We restart the search at the node we reach by making that move. (We may cache the results of the relevant part of the first search tree if it's still around, as it would be with A*.)