Game Playing Chapter 8

Outline: Overview; Minimax search; Adding alpha-beta cutoffs; Additional refinements; Iterative deepening.

Overview. Old beliefs: games provided a structured task in which it was very easy to measure success or failure, and they did not obviously require large amounts of knowledge, so they were thought to be solvable by straightforward search.

Overview. Chess: the average branching factor is around 35, and in an average game each player might make 50 moves, so one would have to examine about 35^100 (roughly 10^154) positions.

Overview. Improve the generate procedure so that only good moves are generated: a plausible-move generator rather than a legal-move generator.

Overview. Improve the test procedure so that the best moves are recognized and explored first, leaving fewer moves to be evaluated.

Overview. It is not usually possible to search until a goal state is found; instead, the program has to evaluate individual board positions by estimating how likely they are to lead to a win. This is the job of a static evaluation function, which is closely related to the credit assignment problem (Minsky, 1963).

Overview. What is needed, then, is a good plausible-move generator and a good static evaluation function.
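
To make the second ingredient concrete, here is a minimal Python sketch of a static evaluation function (not from the slides). The board representation, the piece codes, and the material values are hypothetical choices; a serious evaluator would also weigh factors such as mobility, pawn structure, and king safety.

# Hypothetical board representation: a dict mapping squares to piece codes,
# uppercase for the maximizing player and lowercase for the opponent.
PIECE_VALUES = {'P': 1, 'N': 3, 'B': 3, 'R': 5, 'Q': 9, 'K': 0}

def static_eval(board):
    """Material balance from the maximizing player's point of view."""
    score = 0
    for piece in board.values():
        value = PIECE_VALUES.get(piece.upper(), 0)
        score += value if piece.isupper() else -value
    return score

# Example: static_eval({'e4': 'P', 'd5': 'p', 'g8': 'n'}) == -3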

Minimax Search. A depth-first, depth-limited search: at the player's choice, maximize the static evaluation of the next position; at the opponent's choice, minimize it.

Minimax Search. [Figure: a two-ply search tree. The root A sits on a maximizing ply (Player); its children B, C, D sit on a minimizing ply (Opponent); the leaves E through K have static values 9, -6, 0, 0, -2, -4, -3. Backing up the minima gives B = -6, C = -2, D = -4, and taking the maximum at the root gives A = -2.]

Minimax Search. Player(Position, Depth):
  MAX-SCORE = -∞
  for each S ∈ SUCCESSORS(Position) do
    RESULT = Opponent(S, Depth + 1)
    NEW-VALUE = PLAYER-VALUE(RESULT)
    if NEW-VALUE > MAX-SCORE, then
      MAX-SCORE = NEW-VALUE
      BEST-PATH = PATH(RESULT) + S
  return: VALUE = MAX-SCORE; PATH = BEST-PATH

Minimax Search. Opponent(Position, Depth):
  MIN-SCORE = +∞
  for each S ∈ SUCCESSORS(Position) do
    RESULT = Player(S, Depth + 1)
    NEW-VALUE = PLAYER-VALUE(RESULT)
    if NEW-VALUE < MIN-SCORE, then
      MIN-SCORE = NEW-VALUE
      BEST-PATH = PATH(RESULT) + S
  return: VALUE = MIN-SCORE; PATH = BEST-PATH

Minimax Search. The Player and Opponent procedures are almost identical, so they can be merged into a single Any-Player procedure; the sign flip that lets one procedure serve both players appears in the full MINIMAX below.
Any-Player(Position, Depth):
  BEST-SCORE = -∞
  for each S ∈ SUCCESSORS(Position) do
    RESULT = Any-Player(S, Depth + 1)
    NEW-VALUE = VALUE(RESULT)
    if NEW-VALUE > BEST-SCORE, then
      BEST-SCORE = NEW-VALUE
      BEST-PATH = PATH(RESULT) + S
  return: VALUE = BEST-SCORE; PATH = BEST-PATH

Minimax Search. The combined procedure MINIMAX(Position, Depth, Player) relies on three auxiliary procedures: MOVE-GEN(Position, Player), the plausible-move generator; STATIC(Position, Player), the static evaluation function; and DEEP-ENOUGH(Position, Depth), which decides whether to stop searching at this depth.

Minimax Search. MINIMAX(Position, Depth, Player):
1. If DEEP-ENOUGH(Position, Depth), then return: VALUE = STATIC(Position, Player); PATH = nil.
2. SUCCESSORS = MOVE-GEN(Position, Player).
3. If SUCCESSORS is empty, then do as in Step 1.

Minimax Search (continued).
4. If SUCCESSORS is not empty, then for each SUCC in SUCCESSORS:
   RESULT-SUCC = MINIMAX(SUCC, Depth + 1, Opp(Player))
   NEW-VALUE = -VALUE(RESULT-SUCC)
   if NEW-VALUE > BEST-SCORE, then:
     BEST-SCORE = NEW-VALUE
     BEST-PATH = PATH(RESULT-SUCC) + SUCC
5. Return: VALUE = BEST-SCORE; PATH = BEST-PATH.
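
For readers who prefer runnable code, the following Python sketch mirrors steps 1-5 of the negamax-style MINIMAX above. The helpers move_gen, static_eval, and deep_enough stand in for MOVE-GEN, STATIC, and DEEP-ENOUGH; their names and signatures are assumptions, not part of the slides. The player argument is +1 for the side to move at the root and -1 for the opponent, and static_eval is assumed to score a position from the given player's point of view.

import math

def minimax(position, depth, player, move_gen, static_eval, deep_enough):
    """Return (value, path), with value seen from `player`'s point of view."""
    # Steps 1 and 3: at the depth limit, or with no successors, fall back
    # to the static evaluation and an empty path.
    if deep_enough(position, depth):
        return static_eval(position, player), []
    successors = move_gen(position, player)          # Step 2
    if not successors:
        return static_eval(position, player), []
    # Step 4: evaluate each successor from the opponent's point of view
    # and negate the result, so one procedure serves both players.
    best_score, best_path = -math.inf, []
    for succ in successors:
        value, path = minimax(succ, depth + 1, -player,
                              move_gen, static_eval, deep_enough)
        if -value > best_score:
            best_score, best_path = -value, [succ] + path
    return best_score, best_path                     # Step 5

A two-ply search for the side to move would be started as minimax(start, 0, +1, move_gen, static_eval, lambda pos, depth: depth >= 2).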

Adding Alpha-Beta Cutoffs. At the player's choice, maximize the static evaluation of the next position; only values above the α threshold (the best score the player is already guaranteed) are of interest. At the opponent's choice, minimize the static evaluation of the next position; only values below the β threshold (the best score the opponent can already hold the player to) are of interest.

Adding Alpha-Beta Cutoffs. [Figure: a worked example tree alternating maximizing plies (Player) and minimizing plies (Opponent); one subtree is abandoned by a β cutoff and another by an α cutoff once the values already backed up (4 and 3) show that further exploration cannot change the decision.]

Adding Alpha-Beta Cutoffs. [Figure: the same situation in general form. Search below a node is cut off as soon as its backed-up value shows that the result can no longer lie between the current α and β thresholds: a β cutoff when a maximizing value reaches β, an α cutoff when a minimizing value falls to α.]

Player(Position, Depth, α, β):
  for each S ∈ SUCCESSORS(Position) do
    RESULT = Opponent(S, Depth + 1, α, β)
    NEW-VALUE = PLAYER-VALUE(RESULT)
    if NEW-VALUE > α, then
      α = NEW-VALUE
      BEST-PATH = PATH(RESULT) + S
    if α ≥ β, then return: VALUE = α; PATH = BEST-PATH
  return: VALUE = α; PATH = BEST-PATH

Opponent(Position, Depth, α, β):
  for each S ∈ SUCCESSORS(Position) do
    RESULT = Player(S, Depth + 1, α, β)
    NEW-VALUE = PLAYER-VALUE(RESULT)
    if NEW-VALUE < β, then
      β = NEW-VALUE
      BEST-PATH = PATH(RESULT) + S
    if β ≤ α, then return: VALUE = β; PATH = BEST-PATH
  return: VALUE = β; PATH = BEST-PATH

The merged Any-Player version with alpha-beta cutoffs, written in the negamax form used by MINIMAX above: the backed-up value is negated, and the two thresholds swap roles and change sign at each level.
Any-Player(Position, Depth, α, β):
  for each S ∈ SUCCESSORS(Position) do
    RESULT = Any-Player(S, Depth + 1, -β, -α)
    NEW-VALUE = -VALUE(RESULT)
    if NEW-VALUE > α, then
      α = NEW-VALUE
      BEST-PATH = PATH(RESULT) + S
    if α ≥ β, then return: VALUE = α; PATH = BEST-PATH
  return: VALUE = α; PATH = BEST-PATH
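
The same search with alpha-beta cutoffs, again as a hedged Python sketch built on the hypothetical move_gen, static_eval, and deep_enough helpers introduced with the MINIMAX sketch. The sign flips on the thresholds mirror the negation of the backed-up value, and the test alpha >= beta corresponds to the "α ≥ β, then return" test above.

import math

def alpha_beta(position, depth, player, alpha, beta,
               move_gen, static_eval, deep_enough):
    """Negamax search with alpha-beta cutoffs; returns (value, path)."""
    if deep_enough(position, depth):
        return static_eval(position, player), []
    successors = move_gen(position, player)
    if not successors:
        return static_eval(position, player), []
    best_path = []
    for succ in successors:
        # The thresholds swap roles and change sign when the point of
        # view switches to the opponent.
        value, path = alpha_beta(succ, depth + 1, -player, -beta, -alpha,
                                 move_gen, static_eval, deep_enough)
        if -value > alpha:
            alpha, best_path = -value, [succ] + path
        if alpha >= beta:   # cutoff: this line cannot affect the choice above
            break
    return alpha, best_path

def best_move(start, move_gen, static_eval, deep_enough):
    """Convenience wrapper: full-window search for the side to move at the root."""
    return alpha_beta(start, 0, +1, -math.inf, math.inf,
                      move_gen, static_eval, deep_enough)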

Additional Refinements: futility cutoffs, waiting for quiescence, secondary search, using book moves, and not assuming the opponent's optimal move.

Additional Refinements: Futility cutoffs. [Figure: an example tree in which one successor's backed-up value (3.1) differs only marginally from the value already established (3); exploring such a branch further is unlikely to change the final choice of move, so the search can be cut off.]

Iterative Deepening. Search one ply deep, then repeat the whole search two plies deep, then three, and so on (Iteration 1, Iteration 2, Iteration 3) until the available time runs out; the move chosen by the deepest completed iteration is played.
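
A hedged sketch of how iterative deepening is usually layered on top of a depth-limited search. It reuses the alpha_beta function and the hypothetical move_gen and static_eval helpers from the sketches above; the time budget handling is deliberately simple, in that the deadline is only checked between iterations, so the last iteration may overrun it.

import math
import time

def iterative_deepening(start, move_gen, static_eval, time_limit=1.0):
    """Search 1 ply deep, then 2, then 3, ... until the time budget is spent,
    keeping the result of the deepest completed iteration."""
    deadline = time.monotonic() + time_limit
    best_value, best_line = None, []
    depth_limit = 1
    while time.monotonic() < deadline:
        # Bind the current limit into the cutoff test for this iteration.
        deep_enough = lambda pos, depth, limit=depth_limit: depth >= limit
        best_value, best_line = alpha_beta(start, 0, +1,
                                           -math.inf, math.inf,
                                           move_gen, static_eval, deep_enough)
        depth_limit += 1
    return best_value, best_line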

Homework: Exercises 1-7 and 9 (Chapter 12, Rich & Knight, Artificial Intelligence).