Games and Adversarial Search II


Alpha-Beta Pruning (AIMA 5.3)
CIS 421/521 - Intro to AI. Some slides adapted from Richard Lathrop, USC/ISI, CS 271.

Review: The Minimax Rule
Idea: make the best move for MAX, assuming that MIN always replies with the best move for MIN.
1. Start with the current position as a MAX node.
2. Expand the game tree a fixed number of ply.
3. Apply the evaluation function to all leaf positions.
4. Calculate backed-up values bottom-up: for a MAX node, return the maximum of the values of its children (i.e., the best for MAX); for a MIN node, return the minimum of the values of its children (i.e., the best for MIN).
5. Pick the move assigned to MAX at the root.
6. Wait for MIN to respond and repeat from step 1.

2-ply Example: Backing Up Values
[Figure: a two-ply game tree with leaf evaluation-function values 2, 7, 1, 8. Each MIN node backs up the minimum of its children; the MAX root backs up the maximum, 2, which identifies the move selected by minimax.]
New point: the backed-up values are actually calculated by DFS!

Minimax Algorithm
function MINIMAX-DECISION(state) returns an action
  inputs: state, current state in game
  v ← MAX-VALUE(state)
  return an action in SUCCESSORS(state) with value v

function MAX-VALUE(state) returns a utility value
  if TERMINAL-TEST(state) then return UTILITY(state)
  v ← -∞
  for a, s in SUCCESSORS(state) do
    v ← MAX(v, MIN-VALUE(s))
  return v

function MIN-VALUE(state) returns a utility value
  if TERMINAL-TEST(state) then return UTILITY(state)
  v ← +∞
  for a, s in SUCCESSORS(state) do
    v ← MIN(v, MAX-VALUE(s))
  return v
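The pseudocode above can be sketched as runnable Python, with the game supplied as three callbacks (successor, terminal-test, and utility functions); the tree encoding below is illustrative, not from the slides:

```python
import math

def minimax_decision(state, successors, terminal, utility):
    """MINIMAX-DECISION: pick the action whose successor has the best MIN-VALUE."""
    action, _ = max(successors(state),
                    key=lambda pair: min_value(pair[1], successors, terminal, utility))
    return action

def max_value(state, successors, terminal, utility):
    if terminal(state):
        return utility(state)
    v = -math.inf
    for _, s in successors(state):
        v = max(v, min_value(s, successors, terminal, utility))
    return v

def min_value(state, successors, terminal, utility):
    if terminal(state):
        return utility(state)
    v = math.inf
    for _, s in successors(state):
        v = min(v, max_value(s, successors, terminal, utility))
    return v

# The 2-ply example tree: a state is either a leaf value (an int)
# or a list of (action, child-state) pairs.
tree = [('L', [('a', 2), ('b', 7)]), ('R', [('c', 1), ('d', 8)])]
```

On this tree the left MIN node backs up 2 and the right backs up 1, so the root's minimax value is 2 and the selected move is 'L', matching the 2-ply example.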

Alpha-Beta Pruning
A way to improve the performance of the minimax procedure.
Basic idea: "If you have an idea which is surely bad, don't take the time to see how truly awful it is." ~ Pat Winston
[Figure: assuming left-to-right tree traversal, the first MIN node backs up 2 after seeing leaves 2 and 7, so the root is ≥ 2. As soon as the second MIN node sees a leaf of value 1, its value is ≤ 1, so we don't need to compute the value of its remaining child: no matter what it is, it can't affect the value of the root node.]

Alpha-Beta Pruning II
During minimax, keep track of two additional values:
α: the current lower bound on MAX's outcome
β: the current upper bound on MIN's outcome
MAX will never choose a move that could lead to a worse score (for MAX) than α.
MIN will never choose a move that could lead to a better score (for MAX) than β.
Therefore, stop evaluating a branch whenever:
- When evaluating a MAX node: a value v ≥ β is backed up; MIN will never select that MAX node.
- When evaluating a MIN node: a value v ≤ α is found; MAX will never select that MIN node.

Alpha-Beta Pruning IIIa
Based on the observation that for all viable paths, the utility value f(n) satisfies α ≤ f(n) ≤ β.
Initially, α = -∞ and β = +∞.
As the search tree is traversed, the window of possible utility values shrinks as α increases and β decreases.

Alpha-Beta Pruning IIIb
Whenever the current ranges of alpha and beta no longer overlap (α ≥ β), it is clear that the current node is a dead end, so it can be pruned.

Alpha-Beta Algorithm: In Detail
Depth-first search (usually depth-bounded, with static evaluation) considers only nodes along a single path from the root at any time.
α = current lower bound on MAX's outcome (initially α = -∞)
β = current upper bound on MIN's outcome (initially β = +∞)
Pass the current values of α and β down to child nodes during the search.
Update the values of α and β during the search: MAX updates α at MAX nodes; MIN updates β at MIN nodes.
Prune the remaining branches at a node whenever α ≥ β.

When to Prune
Prune whenever α ≥ β.
- Prune below a MAX node when its α value becomes ≥ the β value of its MIN ancestors. (MAX nodes update α based on children's returned values.) Idea: the MIN player at the node above won't pick that value anyway, since MIN can force a worse value.
- Prune below a MIN node when its β value becomes ≤ the α value of its MAX ancestors. (MIN nodes update β based on children's returned values.) Idea: the MAX player at the node above won't pick that value anyway; she can do better.


Pseudocode for Alpha-Beta Algorithm
function ALPHA-BETA-SEARCH(state) returns an action
  inputs: state, current state in game
  v ← MAX-VALUE(state, -∞, +∞)
  return an action in ACTIONS(state) with value v

function MAX-VALUE(state, α, β) returns a utility value
  if TERMINAL-TEST(state) then return UTILITY(state)
  v ← -∞
  for a in ACTIONS(state) do
    v ← MAX(v, MIN-VALUE(RESULT(state, a), α, β))
    if v ≥ β then return v
    α ← MAX(α, v)
  return v

Alpha-Beta Algorithm II
function MIN-VALUE(state, α, β) returns a utility value
  if TERMINAL-TEST(state) then return UTILITY(state)
  v ← +∞
  for a in ACTIONS(state) do
    v ← MIN(v, MAX-VALUE(RESULT(state, a), α, β))
    if v ≤ α then return v
    β ← MIN(β, v)
  return v
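The alpha-beta pseudocode can likewise be sketched in runnable Python, with the game supplied as callbacks and the same illustrative tree encoding as before (a state is a leaf value or a list of (action, child) pairs):

```python
import math

def alphabeta_search(state, successors, terminal, utility):
    """ALPHA-BETA-SEARCH: best action for MAX from the root state."""
    best_action, alpha, beta = None, -math.inf, math.inf
    for a, s in successors(state):
        v = min_value(s, alpha, beta, successors, terminal, utility)
        if v > alpha:
            best_action, alpha = a, v       # the root behaves as a MAX node
    return best_action

def max_value(state, alpha, beta, successors, terminal, utility):
    if terminal(state):
        return utility(state)
    v = -math.inf
    for _, s in successors(state):
        v = max(v, min_value(s, alpha, beta, successors, terminal, utility))
        if v >= beta:
            return v                        # prune: MIN above won't allow this node
        alpha = max(alpha, v)
    return v

def min_value(state, alpha, beta, successors, terminal, utility):
    if terminal(state):
        return utility(state)
    v = math.inf
    for _, s in successors(state):
        v = min(v, max_value(s, alpha, beta, successors, terminal, utility))
        if v <= alpha:
            return v                        # prune: MAX above won't allow this node
        beta = min(beta, v)
    return v
```

For example, on the tree `[('L', [('a', 3), ('b', 12)]), ('R', [('c', 2), ('d', 99)])]`, once the left branch establishes α = 3, the right MIN node is abandoned as soon as it sees the leaf 2: the leaf 99 is never evaluated, yet the chosen action is still 'L'.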

An Alpha-Beta Example
[Figure sequence: alpha-beta traced step by step on a small game tree, with α and β shown at each node.]
- Do DF-search until the first leaf; the initial values α = -∞, β = +∞ are passed to the kids.
- MIN updates β, based on kids: β = 3.
- MIN updates, based on kids: no change (α = -∞, β = 3).
- MAX updates α, based on kids: α = 3; 3 is returned as the node value.
- α = 3, β = +∞ are passed to the kids of the second MIN node.
- MIN updates, based on kids: β = 2.
- β = 2 ≤ α = 3, so prune.
- MAX updates, based on kids: no change (α = 3); 2 is returned as the node value.
- α = 3, β = +∞ are passed to the kids of the third MIN node.
- MIN updates, based on kids: β = 14.
- MIN updates, based on kids: β = 5.
- 2 is returned as the node value.
- MAX now makes its best move, as computed by alpha-beta.

Effectiveness of Alpha-Beta Pruning
- Guaranteed to compute the same root value as minimax.
- Worst case: no pruning, same as minimax: O(b^d).
- Best case: when each player's best move is the first option examined, alpha-beta examines only O(b^(d/2)) nodes, allowing it to search twice as deep!

When the best move is examined first, alpha-beta examines only O(b^(d/2)) nodes. So:
- Run iterative deepening search, and sort moves by the value returned on the last iteration.
- Expand captures first, then threats, then forward moves, etc.
O(b^(d/2)) is the same as having a branching factor of √b, since (√b)^d = b^(d/2); e.g., in chess this takes you from b ≈ 35 to b ≈ 6.
For Deep Blue, alpha-beta pruning reduced the average branching factor from 35-40 to 6, as expected, doubling the search depth.

Chinook and Deep Blue
Chinook: the World Man-Made Checkers Champion, developed at the University of Alberta. It competed in human tournaments, earned the right to play for the human world championship, and defeated the best players in the world.
Deep Blue:
- Defeated world champion Garry Kasparov 3.5-2.5 in 1997, after losing 4-2 in 1996.
- Used a parallel array of 256 special chess-specific processors.
- Evaluated 200 billion moves every 3 minutes, with a 12-ply search depth.
- Expert knowledge from an international grandmaster; an evaluation function with 8000 factors, tuned from hundreds of thousands of grandmaster games.
- Tended to play for tiny positional advantages.

FOR STUDY.

Example: which nodes can be pruned?
[Figure: a Max/Min/Max tree with leaf values, left to right: 3 4 1 2 7 8 5 6]
Answer: NONE! The most favorable nodes for both players are explored last (i.e., in the diagram, they are on the right-hand side).

Second Example (the exact mirror image of the first): which nodes can be pruned?
[Figure: the same tree with the leaf values reversed: 6 5 8 7 2 1 3 4]
Answer: LOTS! The most favorable nodes for both players are explored first (i.e., in the diagram, they are on the left-hand side).
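The two study examples can be checked mechanically. Below is a small counting sketch; the tree shape (a MAX root over two MIN nodes over four MAX nodes, with the listed leaves) is my reading of the figures:

```python
import math

def alphabeta(node, alpha=-math.inf, beta=math.inf, maximizing=True, visited=None):
    """Alpha-beta over nested lists (leaves are ints); records the leaves visited."""
    if visited is None:
        visited = []
    if isinstance(node, int):               # leaf: static evaluation value
        visited.append(node)
        return node, visited
    v = -math.inf if maximizing else math.inf
    for child in node:
        cv, _ = alphabeta(child, alpha, beta, not maximizing, visited)
        if maximizing:
            v = max(v, cv)
            if v >= beta:
                return v, visited           # prune the remaining children
            alpha = max(alpha, v)
        else:
            v = min(v, cv)
            if v <= alpha:
                return v, visited           # prune the remaining children
            beta = min(beta, v)
    return v, visited

first  = [[[3, 4], [1, 2]], [[7, 8], [5, 6]]]   # favorable moves explored last
mirror = [[[6, 5], [8, 7]], [[2, 1], [3, 4]]]   # favorable moves explored first
```

Running `alphabeta(first)` visits all 8 leaves (nothing is pruned), while `alphabeta(mirror)` visits only 5 of them; both return the same root value, 6.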

Constraint Satisfaction Problems (AIMA Chapter 6)

Big Idea
- Represent the constraints that solutions must satisfy in a uniform declarative language.
- Find solutions by general-purpose search algorithms, with no changes from problem to problem: no hand-built transition functions, no hand-built heuristics.
- Just specify the problem in a formal declarative language, and a general-purpose algorithm does everything else!

Constraint Satisfaction Problems
A CSP consists of:
- A finite set of variables X1, X2, ..., Xn
- A nonempty domain of possible values for each variable: D1, D2, ..., Dn, where Di = {v1, ..., vk}
- A finite set of constraints C1, C2, ..., Cm; each constraint Ci limits the values that variables can take, e.g., X1 ≠ X2
A state is defined as an assignment of values to some or all variables. A consistent assignment does not violate the constraints.
Example problem: Sudoku

Constraint Satisfaction Problems (continued)
An assignment is complete when every variable is assigned a value. A solution to a CSP is a complete, consistent assignment.
Solutions to CSPs can be found by a completely general-purpose algorithm, given only the formal specification of the CSP.
Beyond our scope: CSPs that require a solution that maximizes an objective function.

Applications
- Map coloring
- Line drawing interpretation
- Scheduling problems: job-shop scheduling, scheduling the Hubble Space Telescope
- Floor planning for VLSI

Example: Map-Coloring
Variables: WA, NT, Q, NSW, V, SA, T
Domains: Di = {red, green, blue}
Constraints: adjacent regions must have different colors, e.g., WA ≠ NT. So (WA, NT) must be in {(red, green), (red, blue), (green, red), ...}

Example: Map-Coloring (continued)
Solutions: complete and consistent assignments, e.g., WA = red, NT = green, Q = red, NSW = green, V = red, SA = blue, T = green.

Example: Cryptarithmetic (TWO + TWO = FOUR)
Variables: F, T, U, W, R, O and the carry digits X1, X2, X3
Domain: {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}
Constraints:
- Alldiff(F, T, U, W, R, O)
- O + O = R + 10·X1
- X1 + W + W = U + 10·X2
- X2 + T + T = O + 10·X3
- X3 = F, T ≠ 0, F ≠ 0
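Since the constraints encode the puzzle TWO + TWO = FOUR, the problem is small enough to solve by brute force over digit assignments; a sketch (the function name is mine):

```python
from itertools import permutations

def solve_two_two_four():
    """Enumerate assignments of distinct digits to F,T,U,W,R,O satisfying
    TWO + TWO = FOUR; carries X1..X3 are implied by the arithmetic."""
    solutions = []
    for f, t, u, w, r, o in permutations(range(10), 6):
        if t == 0 or f == 0:        # leading digits may not be zero
            continue
        two = 100 * t + 10 * w + o
        four = 1000 * f + 100 * o + 10 * u + r
        if two + two == four:
            solutions.append((two, four))
    return solutions
```

One of the solutions found is 734 + 734 = 1468 (T=7, W=3, O=4, F=1, U=6, R=8).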

Benefits of CSP
- Clean specification of many problems; generic goal, successor function & heuristics.
- Just represent the problem as a CSP & solve it with a general package.
- The CSP solver knows which variables violate a constraint, and hence where to focus the search.
- CSPs automatically prune off all branches that violate constraints. (State-space search could do this only by hand-building constraints into the successor function.)

CSP Representations
Constraint graph: nodes are variables, arcs are (binary) constraints.
Standard representation pattern: variables with values.
The constraint graph simplifies search: e.g., Tasmania is an independent subproblem.
This problem is a binary CSP: each constraint relates two variables.

Varieties of CSPs
Discrete variables:
- Finite domains: n variables of domain size d give O(d^n) complete assignments. E.g., Boolean CSPs, including Boolean satisfiability (NP-complete).
- Infinite domains: integers, strings, etc. E.g., job scheduling, where the variables are start/end days for each job; these need a constraint language, e.g., StartJob1 + 5 ≤ StartJob3.
Continuous variables:
- E.g., start/end times for Hubble Space Telescope observations.
- Linear constraints are solvable in polynomial time by linear programming.

Varieties of Constraints
- Unary constraints involve a single variable, e.g., SA ≠ green.
- Binary constraints involve pairs of variables, e.g., SA ≠ WA.
- Higher-order constraints involve 3 or more variables, e.g., cryptarithmetic column constraints.
- Preferences (soft constraints), e.g., "red is better than green", can be represented by a cost for each variable assignment: constrained optimization problems.

Idea 1: CSP as a Search Problem
A CSP can easily be expressed as a search problem:
- Initial state: the empty assignment {}.
- Successor function: assign a value to any unassigned variable, provided there is no constraint conflict.
- Goal test: the current assignment is complete.
- Path cost: a constant cost for every step.
A solution is always found at depth n, for n variables. Hence depth-first search can be used.

Backtracking Search
Note that variable assignments are commutative, e.g., [step 1: WA = red; step 2: NT = green] is equivalent to [step 1: NT = green; step 2: WA = red].
- Therefore we need only a tree search, not a graph search.
- Only need to consider assignments to a single variable at each node: b = d and there are d^n leaves (n variables, domain size d).
Depth-first search for CSPs with single-variable assignments is called backtracking search.
Backtracking search is the basic uninformed algorithm for CSPs. It can solve n-queens for n ≈ 25.
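A minimal backtracking solver for the map-coloring CSP, following the scheme above; the adjacency table is written out from the Australia map, and the variable order is simply insertion order:

```python
# Adjacency of the Australian regions (Tasmania has no neighbors).
NEIGHBORS = {
    'WA': ['NT', 'SA'], 'NT': ['WA', 'SA', 'Q'],
    'SA': ['WA', 'NT', 'Q', 'NSW', 'V'],
    'Q': ['NT', 'SA', 'NSW'], 'NSW': ['Q', 'SA', 'V'],
    'V': ['SA', 'NSW'], 'T': [],
}
COLORS = ['red', 'green', 'blue']

def consistent(var, value, assignment):
    """The != constraint: no already-assigned neighbor has this color."""
    return all(assignment.get(n) != value for n in NEIGHBORS[var])

def backtrack(assignment):
    if len(assignment) == len(NEIGHBORS):          # complete assignment
        return assignment
    var = next(v for v in NEIGHBORS if v not in assignment)
    for value in COLORS:
        if consistent(var, value, assignment):
            assignment[var] = value
            result = backtrack(assignment)
            if result is not None:
                return result
            del assignment[var]                    # undo and backtrack
    return None
```

Calling `backtrack({})` returns a complete, consistent coloring; with this variable order the search happens to succeed without backtracking, e.g., WA = red, NT = green, SA = blue, and so on.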

Backtracking Example
[Figure sequence: the backtracking search tree for map coloring, assigning one variable per level. And so on.]

Idea 2: Improving Backtracking Efficiency
General-purpose methods & general-purpose heuristics can give huge gains in speed, on average.
Heuristics:
Q: Which variable should be assigned next?
  1. Most constrained variable
  2. (If ties:) most constraining variable
Q: In what order should that variable's values be tried?
  3. Least constraining value
Q: Can we detect inevitable failure early?
  4. Forward checking

Heuristic 1: Most Constrained Variable
Choose a variable with the fewest legal values, a.k.a. the minimum remaining values (MRV) heuristic.
[Figure: the Australia map annotated with the number of remaining legal values for each unassigned region after each assignment.]

Heuristic 2: Most Constraining Variable
A tie-breaker among most-constrained variables: choose the variable with the most constraints on the remaining variables.
These two heuristics together lead to an immediate solution of our example problem.
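Heuristics 1 and 2 combine naturally into a single variable-selection rule; a sketch, assuming `domains` maps each variable to its current legal values and `neighbors` maps each variable to its constraint partners (both names are mine):

```python
def select_variable(assignment, domains, neighbors):
    """Pick the next variable: fewest legal values (MRV), ties broken by
    most constraints on remaining (unassigned) variables."""
    unassigned = [v for v in domains if v not in assignment]
    return min(
        unassigned,
        key=lambda v: (
            len(domains[v]),                                      # MRV
            -sum(1 for n in neighbors[v] if n not in assignment)  # tie-breaker
        ),
    )
```

For example, if A and C both have 2 legal values left but C constrains more unassigned variables, C is selected.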

Heuristic 3: Least Constraining Value
Given a variable, choose the least constraining value: the one that rules out the fewest values in the remaining variables.
(Note: demonstrated here independently of the other heuristics.)

Heuristic 4: Forward Checking
Idea: keep track of the remaining legal values for unassigned variables, and terminate search when any unassigned variable has no remaining legal values.
This requires a new data structure (a first step towards arc consistency & AC-3).

Forward Checking (continued)
[Figure sequence: after each assignment, the remaining legal values are crossed off the neighbors' domains. Terminate! No possible value for SA.]
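The forward-checking step for map coloring can be sketched directly: after assigning var = value, remove that value from each unassigned neighbor's domain and fail as soon as any domain empties. The data shapes here are assumptions matching the earlier map-coloring sketch:

```python
import copy

NEIGHBORS = {
    'WA': ['NT', 'SA'], 'NT': ['WA', 'SA', 'Q'],
    'SA': ['WA', 'NT', 'Q', 'NSW', 'V'],
    'Q': ['NT', 'SA', 'NSW'], 'NSW': ['Q', 'SA', 'V'],
    'V': ['SA', 'NSW'], 'T': [],
}

def forward_check(domains, assignment, var, value):
    """Return the pruned domains after var = value, or None on early failure."""
    new_domains = copy.deepcopy(domains)
    new_domains[var] = [value]
    for n in NEIGHBORS[var]:
        if n not in assignment and value in new_domains[n]:
            new_domains[n].remove(value)
            if not new_domains[n]:
                return None        # some unassigned variable has no legal value
    return new_domains
```

Assigning WA = red and then Q = green leaves SA with only {blue}; assigning V = blue then empties SA's domain, exactly the "Terminate! No possible value for SA" case in the figure.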

Example: 4-Queens Problem
Variables X1, ..., X4 give the row of the queen in columns 1, ..., 4; each domain is initially {1,2,3,4}. (From Bonnie Dorr, U of Md, CMSC 421.)
1. Assign X1 = 1. Forward check: X2 ∈ {3,4}, X3 ∈ {2,4}, X4 ∈ {2,3}.
2. Assign X2 = 3. Forward check: X3 ∈ {} (X4 ∈ {2}). Backtrack!!!
3. Picking up a little later, after two steps of backtracking: assign X1 = 2. Forward check: X2 ∈ {4}, X3 ∈ {1,3}, X4 ∈ {1,3,4}.
4. Assign X2 = 4. Forward check: X3 ∈ {1}, X4 ∈ {1,3}.
5. Assign X3 = 1. Forward check: X4 ∈ {3}.
6. Assign X4 = 3: a complete, consistent assignment.
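The 4-queens trace above can be reproduced with a small backtracking-plus-forward-checking sketch, using the slides' representation (Xi is the row of the queen in column i):

```python
def conflict(row_a, col_a, row_b, col_b):
    """Two queens attack each other on the same row or the same diagonal."""
    return row_a == row_b or abs(row_a - row_b) == abs(col_a - col_b)

def solve(n=4, assignment=(), domains=None):
    """Assign columns left to right; forward-check the remaining domains
    and abandon a value as soon as it wipes out some future domain."""
    if domains is None:
        domains = [list(range(1, n + 1)) for _ in range(n)]
    col = len(assignment)
    if col == n:
        return assignment
    for row in domains[col]:
        pruned = [
            [r for r in domains[c] if not conflict(row, col, r, c)]
            for c in range(col + 1, n)
        ]
        if all(pruned):                      # no domain emptied: recurse
            result = solve(n, assignment + (row,), domains[:col + 1] + pruned)
            if result is not None:
                return result
    return None                              # every row failed: backtrack
```

Trying rows in ascending order, X1 = 1 wipes out X3's domain after X2 = 3, the search backtracks, and the first solution found is (2, 4, 1, 3), matching the slides.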

Towards Constraint Propagation
Forward checking propagates information from assigned to unassigned variables, but it doesn't provide early detection of all failures: NT and SA cannot both be blue!
Constraint propagation goes beyond forward checking and repeatedly enforces constraints locally.