Games (adversarial search problems)


Mustafa Jarrar: Lecture Notes on Games, Birzeit University, Palestine, Fall Semester 2014. Artificial Intelligence, Chapter 6: Games (adversarial search problems). Dr. Mustafa Jarrar, Sina Institute, University of Birzeit. mjarrar@birzeit.edu, www.jarrar.info

Watch this lecture and download the slides from http://jarrar-courses.blogspot.com/20//artificial-intelligence-fall-20.html. Most of the information is based on Chapter 5 of [1].

Can you plan ahead with these games?

Game Tree (2-player, deterministic, turns): how to see the game as a tree (image from [2]). The leaves are the last states, where the game is over; their values are calculated by a utility function, which depends on the game.

Two-Person Perfect Information Deterministic Game
[diagram: a tree whose levels alternate between "My Moves" and "Your Moves"]
- Two players take turns making moves.
- The board state is fully known, and the evaluation of moves is deterministic.
- One player wins by defeating the other (or else there is a tie).
- We want a strategy to win, assuming the other person plays as well as possible.

Computer Games
- Playing games can be seen as a search problem; multiplayer games are multi-agent environments in which the agents' goals are in conflict. The environments are mostly deterministic and fully observable.
- Some games are not trivial search problems and thus need AI techniques. For example, chess has an average branching factor of about 35, and games often go to 50 moves by each player, so the search tree has about 35^100, or roughly 10^154, nodes.
- Finding the optimal move means choosing a good move within time limits.
- Heuristic evaluation functions allow us to approximate the true utility of a state without doing a complete search.
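As a quick check of the chess figures above (branching factor $b \approx 35$, game length $m \approx 100$ plies), the full game tree has about

$$b^{m} = 35^{100} = 10^{100\,\log_{10} 35} \approx 10^{154}$$

nodes, which is why exhaustive minimax is hopeless and depth-limited search with evaluation functions is used instead.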

Minimax
- Create a utility function: an evaluation of the board/game state that determines how strong the position of player 1 is. Player 1 wants to maximize the utility function; player 2 wants to minimize it.
- Minimax tree: generate a new level for each move; levels alternate between MAX (player 1 moves) and MIN (player 2 moves).

Minimax Tree: you are the MAX player and your opponent is the MIN player; you play against each other by alternating moves, as in the tree shown.

Minimax Tree Evaluation
- Assign utility values to the leaves (this is sometimes called the board evaluation function).
- If a leaf is a final state, assign the maximum or minimum possible utility value (depending on who would win).
- If a leaf is not a final state, we must use some other heuristic, specific to the game, to evaluate how good/bad the state is at that point.

Minimax Tree [tree diagram with example leaf values]. Terminal nodes: values calculated from the utility function, which evaluates how good/bad the state is at this point.

Minimax Tree Evaluation (for the MAX player):
1. Generate the game tree as deep as time permits.
2. Apply the evaluation function to the leaf states.
3. Back up values: at MIN nodes assign the minimum payoff over the successors; at MAX nodes assign the maximum payoff.
4. At the root, MAX chooses the operator (move) that led to the highest payoff.
A minimal code sketch of this procedure follows.
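The sketch below is a direct Python rendering of the four steps above. It is illustrative only; the game interface (successors, is_terminal, evaluate) is an assumed, hypothetical one rather than anything defined in the slides.

```python
def minimax(state, depth, maximizing, game):
    """Return the backed-up minimax value of `state`, searching `depth` plies.

    `game` is an assumed helper object exposing successors(state),
    is_terminal(state), and evaluate(state) (utility from MAX's point of view).
    """
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state)                    # step 2: evaluate leaf states
    values = [minimax(s, depth - 1, not maximizing, game)
              for s in game.successors(state)]
    return max(values) if maximizing else min(values)  # step 3: back up values

def best_move(state, depth, game):
    """Step 4: at the root, MAX picks the move leading to the highest payoff."""
    return max(game.successors(state),
               key=lambda s: minimax(s, depth - 1, False, game))
```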

Minimax Tree [the same tree diagram]. Terminal nodes: values calculated from the utility function.

Minimax Tree [diagram: values backed up to the level just above the leaves]. Other nodes: values calculated via the minimax algorithm.

Minimax Tree [diagram: values backed up one more level].

Minimax Tree [diagram: values backed up to the root].

Minimax Tree [diagram: the root value determines the best next move for MAX, which is highlighted].

Minimax Example 2 (based on [3]) [tree diagram]. Terminal nodes: values calculated from the utility function.

Minimax Example 2 [diagram: values backed up to the level above the leaves]. Other nodes: values calculated via the minimax algorithm.

Minimax Example 2 [diagram: values backed up one more level].

Minimax Example 2 [diagram: values backed up to the level below the root].

Minimax Example 2 [diagram: the root value is computed].

Minimax Example 2 [diagram: the chosen line of play, i.e. moves by MAX and countermoves by MIN, is highlighted].

Properties of Minimax
- Complete? Yes (if the tree is finite).
- Optimal? Yes (against an optimal opponent).
- Time complexity? A complete evaluation takes time O(b^m).
- Space complexity? A complete evaluation takes space O(bm) (depth-first exploration).
- For chess, b ≈ 35 and m ≈ 100 for "reasonable" games, so an exact solution is completely infeasible: the tree is far too big. Instead, we limit the depth based on various factors, including the time available.

Alpha-Beta Pruning Algorithm

Pruning the Minimax Tree
- Since we have limited time available, we want to avoid unnecessary computation in the minimax tree.
- Pruning: ways of determining that certain branches will not be useful.
- α cuts: if the current MAX value is greater than the successor's MIN value, don't explore that MIN subtree any more.

α Cut Example [the minimax tree from the earlier example].

α Cut Example [diagram]: depth-first search proceeds along the leftmost path.

α Cut Example [diagram]: 2 is the minimum so far (second level); we can't evaluate the top level yet.

α Cut Example [diagram]: -3 is the minimum so far (second level); -3 is the maximum so far (top level).

α Cut Example [diagram]: 2 is the minimum so far in the next second-level subtree; -3 is still the maximum at the top (we can't use the second node yet).

α Cut Example [diagram]: -70 is now the minimum so far (second level); -3 is still the maximum (we can't use the second node yet).

α Cut Example [diagram]: since the second-level node will never be > -70, it will never be chosen by the previous level, so we can stop exploring that node.

α Cut Example [diagram]: the evaluation at the second level is again -73.

α Cut Example [diagram]: again we can apply an α cut, since the second-level node will never be > -73 and thus will never be chosen by the previous level.

α Cut Example [diagram]: as a result, we evaluated the node without evaluating several of the possible paths.

β Cuts: a similar idea to α cuts, but the other way around. If the current minimum is less than the successor's MAX value, don't look down that MAX subtree any more.

β Cut Example [diagram]: some subtrees at the second level already have values greater than the minimum from the previous subtree, so we can stop evaluating them.

Alpha-Beta Example 2 [diagram: the root (MAX) and its first successor (MIN), both initially with range [-∞, +∞]]. α = best choice for MAX so far: ?; β = best choice for MIN so far: ?. We assume a depth-first, left-to-right search as the basic strategy. The range of possible values for each node is indicated, initially [-∞, +∞] from MAX's or MIN's perspective. These local values reflect the values of the sub-trees of that node; the global values α and β are the best overall choices so far for MAX or MIN.

Alpha-Beta Example 2 [diagram]: the first leaf evaluates to 7, so the MIN node's range becomes [-∞, 7]. α = best choice for MAX so far: ?; β = best choice for MIN so far: 7.

Alpha-Beta Example 2 [diagram]: the second leaf evaluates to 6, so the MIN node's range becomes [-∞, 6]. α: ?; β: 6.

Alpha-Beta Example 2 [diagram]: MIN obtains the third value, 5, from a successor node; this is the last value from this sub-tree, so its exact value is now known. MAX now has a value for its first successor node (the root range becomes [5, +∞]), but hopes that something better might still come. α: 5; β: 5.

Alpha-Beta Example 2 [diagram]: MIN continues with the next sub-tree (initially [-∞, +∞]) and gets a better value, 3. MAX, however, has a better choice (5) from its perspective, and will not consider a move into the sub-tree currently explored by MIN. α: 5; β: 3.

Alpha-Beta Example 2 [diagram]: MIN knows that MAX won't consider a move into this sub-tree, and abandons it; this is a case of pruning, indicated in the diagram. α: 5; β: 3.

Alpha-Beta Example 2 [diagram]: MIN explores the next sub-tree and finds a value, 6, that is worse (from MIN's perspective) than the other nodes at this level. If MIN is not able to find something lower, then MAX will choose this branch, so MIN must explore more successor nodes. α: 5; β: 3.

Alpha-Beta Example 2 [diagram]: MIN is lucky and finds a value, 5, that is the same as the current worst value at this level; MAX can choose this branch, or the other branch with the same value. α: 5; β: 3.

Alpha-Beta Example 2 [diagram]: MIN could continue searching this sub-tree to see if there is a value less than the current worst alternative, in order to give MAX as few choices as possible; this depends on the specific implementation. MAX now knows the best value, 5, for its sub-tree. α: 5; β: 3.

Exercise [tree diagram with alternating MAX / MIN / MAX / MIN levels].

Exercise (Solution) [the same tree with the values backed up by minimax].

α-β Pruning
- Pruning by these cuts does not affect the final result.
- It may allow you to go much deeper in the tree.
- Good ordering of moves can make this pruning much more efficient: evaluating the best branch first yields a better likelihood of pruning later branches.
- Perfect ordering reduces the time to O(b^(m/2)) instead of O(b^m), i.e. it doubles the depth you can search to!

α-β Pruning
- We can store information along an entire path, not just at the most recent levels!
- Keep along the path:
  α: the best MAX value found on this path (initialized to the most negative utility value).
  β: the best MIN value found on this path (initialized to the most positive utility value).

Pruning at a MAX node
- α is possibly updated by the MAX of the successors evaluated so far.
- If the value that would be returned is ever > β, then stop work on this branch.
- If all children are evaluated without pruning, return the MAX of their values.

Pruning at a MIN node
- β is possibly updated by the MIN of the successors evaluated so far.
- If the value that would be returned is ever < α, then stop work on this branch.
- If all children are evaluated without pruning, return the MIN of their values.
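Putting the two pruning rules together with the earlier minimax sketch gives the following alpha-beta sketch. It is illustrative only, reuses the same assumed game interface, and uses the common >= / <= cutoff tests rather than the strict inequalities stated on the slides.

```python
import math

def alphabeta(state, depth, alpha, beta, maximizing, game):
    """Minimax with alpha-beta pruning.

    alpha / beta are the best MAX / MIN values found so far along the
    current path; `game` is the same assumed interface as before.
    """
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state)
    if maximizing:
        value = -math.inf
        for s in game.successors(state):
            value = max(value, alphabeta(s, depth - 1, alpha, beta, False, game))
            if value >= beta:             # beta cut: MIN above will never allow this
                return value
            alpha = max(alpha, value)     # update best MAX value on this path
        return value
    else:
        value = math.inf
        for s in game.successors(state):
            value = min(value, alphabeta(s, depth - 1, alpha, beta, True, game))
            if value <= alpha:            # alpha cut: MAX above already has better
                return value
            beta = min(beta, value)       # update best MIN value on this path
        return value

# At the root: alphabeta(start, depth, -math.inf, math.inf, True, game)
```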

Idea of α-β Pruning [diagram]: we already know that β on this path is 2, so when a node below returns max = 70, we know that value will never be used, and we can stop there.

Why is it called α-β? α is the value of the best (i.e., highest-value) choice found so far at any choice point along the path for MAX. If v is worse than α, MAX will avoid it, so we prune that branch. β is defined similarly for MIN.

Imperfect Decisions
- Complete search is impractical for most games.
- Alternative: search the tree only to a certain depth. This requires a cutoff test to determine where to stop; it replaces the terminal test, and the nodes at that level effectively become terminal leaf nodes.
- Use a heuristic evaluation function to estimate the expected utility of the game from those leaf nodes.

Utility Evaluation Function
- Very game-specific; it takes into account knowledge about the game.
- A "stupid" utility: +1 if the player wins, -1 if the opponent wins, 0 if it is a tie (or unknown). This only works if we can evaluate the complete tree, but it should form a basis for other evaluations.

Utility Evaluation
- We need to assign a numerical value to the state. We could assign a more complex utility value, but then the min/max determination becomes trickier.
- Typically we assign numerical values to lots of individual factors, e.g.:
  a = # player 1's pieces - # player 2's pieces
  b = 1 if player 1 has a queen and player 2 does not, -1 if the opposite, 0 if the same
  c = 2 if player 1 has a two-rook advantage, 1 if a one-rook advantage, etc.

Utility Evaluation
- The individual factors are combined by some function; usually a linear weighted combination is used, e.g. u = w_a·a + w_b·b + w_c·c with weights w_a, w_b, w_c. Different ways to combine them are also possible.
- Notice: the quality of the utility function depends on which features are evaluated, how those features are scored, and how the scores are weighted/combined.
- The absolute utility value doesn't matter; the relative value does.
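A small sketch of such a weighted evaluation for a chess-like game; the features, weights, and board helper methods are all hypothetical, chosen only to mirror the factors a, b, c listed above.

```python
# Hypothetical weighted evaluation: u = w_a*a + w_b*b + w_c*c.
# `board` is assumed to expose simple counting helpers; none of these
# names come from the slides.

WEIGHTS = {"material": 1.0, "queen": 9.0, "rook": 5.0}

def evaluate(board):
    a = board.piece_count(player=1) - board.piece_count(player=2)  # material balance
    b = int(board.has_queen(1)) - int(board.has_queen(2))          # queen advantage: +1 / 0 / -1
    c = board.rook_count(1) - board.rook_count(2)                  # rook advantage
    return (WEIGHTS["material"] * a
            + WEIGHTS["queen"] * b
            + WEIGHTS["rook"] * c)
```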

Evaluation Functions: if you had a perfect utility evaluation function, what would it mean for the minimax tree? You would never have to evaluate more than one level deep! Typically, though, you can't create such a perfect utility evaluation.

Evaluation Functions for Ordering
- As mentioned earlier, the order of branch evaluation can make a big difference in how well you can prune.
- A good evaluation function might help you order your available moves: perform one move only, evaluate the board at that level, then recursively evaluate branches in order from the best first move to the worst (or vice versa at a MIN node). A sketch of this idea follows.
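A minimal sketch of this move-ordering idea on the same assumed game interface: children are sorted by a shallow (one-ply) evaluation before the deep alpha-beta search descends into them.

```python
def ordered_successors(state, maximizing, game):
    """Sort successor states by a shallow evaluation so that the most
    promising branch is searched first (best-first for MAX, worst-first
    for MIN), which tends to trigger more alpha-beta cutoffs."""
    children = list(game.successors(state))
    children.sort(key=game.evaluate, reverse=maximizing)
    return children

# Inside alphabeta(), iterate over ordered_successors(state, maximizing, game)
# instead of game.successors(state).
```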

The following are extra Examples (Self Study)

Example: Tic-Tac-Toe (evaluation function)
- A simple evaluation function: E(s) = (r_x + c_x + d_x) - (r_o + c_o + d_o), where r, c, d are the numbers of rows, columns and diagonals still available to the player given in the subscript, and x and o denote the pieces of the two players.
- 1-ply lookahead: start at the top of the tree, evaluate all 9 choices for player 1, and pick the maximum E-value.
- 2-ply lookahead: also look at the opponent's possible moves, assuming that the opponent picks the minimum E-value. A code sketch of E(s) follows.
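A sketch of this E(s) in Python, assuming a 3x3 board represented as a list of lists containing 'X', 'O', or None (the representation is an assumption, not from the slides).

```python
# The 8 lines of a tic-tac-toe board: 3 rows, 3 columns, 2 diagonals.
LINES = ([[(r, c) for c in range(3)] for r in range(3)]
         + [[(r, c) for r in range(3)] for c in range(3)]
         + [[(i, i) for i in range(3)], [(i, 2 - i) for i in range(3)]])

def open_lines(board, player):
    """Number of lines still available to `player`, i.e. lines that the
    opponent does not occupy anywhere."""
    opponent = 'O' if player == 'X' else 'X'
    return sum(1 for line in LINES
               if all(board[r][c] != opponent for r, c in line))

def E(board):
    """E(s) = (open lines for X) - (open lines for O)."""
    return open_lines(board, 'X') - open_lines(board, 'O')
```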

Tic-Tac-Toe 1-Ply (based on [3]) [figure: the possible first moves for X with their evaluations, e.g. a corner gives E = 8 - 5 = 3, a side square gives 8 - 6 = 2, and the centre gives 8 - 4 = 4]. E(s0) = max{E(s1), ..., E(sn)} = max{2, 3, 4} = 4.

Tic-Tac-Toe 2-Ply [figure: each of X's first moves is expanded with O's possible replies; every resulting position is evaluated with E(s) (values such as 5 - 6 = -1, 6 - 5 = 1 and 5 - 5 = 0 appear), X assumes O picks the minimum E-value, and then chooses the move with the largest backed-up value].

Checkers Case Study (based on [4])
- Initial board configuration [board diagram with squares numbered 1-32]: Black has singles (one on square 20) and a king; Red has a single on 23 and a king on 22.
- Evaluation function: E(s) = (5x_1 + x_2) - (5r_1 + r_2), where x_1 = black king advantage, x_2 = black single advantage, r_1 = red king advantage, r_2 = red single advantage. A code sketch follows.
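A direct sketch of this checkers evaluation; the board object and its counting method are assumptions for illustration only.

```python
def checkers_eval(board):
    """E(s) = (5*x1 + x2) - (5*r1 + r2): kings are weighted 5x a single.

    `board.count(colour, kind)` is an assumed helper returning the number
    of pieces of that colour and kind.
    """
    x1 = board.count("black", "king")
    x2 = board.count("black", "single")
    r1 = board.count("red", "king")
    r2 = board.count("red", "single")
    return (5 * x1 + x2) - (5 * r1 + r2)
```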

Checkers Minimax Example [figure: the board position and a three-level game tree (MAX / MIN / MAX) whose leaf values are backed up by minimax].

Checkers Alpha-Beta Example [sequence of figures: the same tree is searched depth-first, left to right, while α and β are tracked along the current path; a β cutoff occurs twice and an α cutoff once ("no need to examine further branches"), so several branches are never evaluated].

References
[1] S. Russell and P. Norvig: Artificial Intelligence: A Modern Approach. Prentice Hall, 2003, Second Edition.
[2] Nilufer Onden: Lecture Notes on Artificial Intelligence. http://www.cs.mtu.edu/~nilufer/classes/cs48/204-spring/lecture-slides/cs48-ch05-adversarialsearch.pdf
[3] Samy Abu Nasser: Lecture Notes on Artificial Intelligence. http://up.edu.ps/ocw/repositories/academic/up/bs/it/itls423/022009/data/itls423.0_042009.ppt
[4] Franz Kurfess: Lecture Notes on Artificial Intelligence. http://users.csc.calpoly.edu/~fkurfess/courses/artificial-intelligence/f09/slides/3-search.ppt