Adversarial search (game playing)


Adversarial search (game playing)

References:
- Russell, S. and Norvig, P., Artificial Intelligence: A Modern Approach, 2nd ed. Prentice Hall, 2003
- Nilsson, N., Artificial Intelligence: A New Synthesis. McGraw-Hill, 2001

Outline
- Perfect play
- Resource limits
- α-β pruning
- Games of chance
- Games of imperfect information

Games vs. search problems
- Unpredictable opponent: we cannot know its next move, so a solution is a strategy specifying a move for every possible opponent reply
- Time limits: we are unlikely to search all the way to a goal, so we must approximate

Games vs. search problems: history
- Computer considers possible lines of play (Babbage, 1846)
- Algorithm for perfect play (Zermelo, 1912; Von Neumann, 1944)
- Finite horizon, approximate evaluation (Zuse, 1945; Wiener, 1948; Shannon, 1950)
- First chess program (Turing, 1951)
- Machine learning to improve evaluation accuracy (Samuel, 1952-57)
- Pruning to allow deeper search (McCarthy, 1956)

Types of games
- Complete information, deterministic: chess, checkers, go, Othello, tic-tac-toe
- Complete information, with chance: backgammon, Monopoly
- Incomplete information: bridge, poker, Scrabble, nuclear war
Other applications: robots that cooperate or compete

Game tree (tic-tac-toe): two players take turns; deterministic game

Minimax Perfect play for deterministic, perfect-information games. Idea: choose the move whose associated position has the highest minimax value, i.e. the best achievable payoff against best play. E.g., a 2-ply game (a ply is a half move; one move = two plies). In the following, let us call MAX player 1 and MIN player 2 (the reason will become clear later on)

Minimax

Minimax Algorithm

defun MINIMAX-DECISION(game, state) returns action
  value ← -∞
  for each <a, s> in SUCCESSORS(game, state) do
    v ← MINIMAX-VALUE(game, s)
    if value < v then action ← a, value ← v
  return action

defun MINIMAX-VALUE(game, state) returns payoff
  if TERMINAL?(state) then return UTILITY(state)
  if MAX is to move in state
    then return the highest MINIMAX-VALUE of SUCCESSORS(state)
    else return the lowest MINIMAX-VALUE of SUCCESSORS(state)
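The pseudocode above can be sketched in runnable Python. This is a minimal illustration, not the slides' own implementation: the game tree is assumed to be a nested tuple in which leaves are utility values and MAX and MIN alternate by depth.

```python
# Minimal sketch of MINIMAX-VALUE / MINIMAX-DECISION (an assumption of this
# sketch: a leaf is a number = utility, an internal node is a tuple of subtrees,
# and the player to move alternates between MAX and MIN at each level).

def minimax_value(node, maximizing):
    """Return the minimax value of a game-tree node."""
    if isinstance(node, (int, float)):      # TERMINAL? -> UTILITY
        return node
    values = [minimax_value(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

def minimax_decision(root):
    """Return the index of the best move for MAX at the root."""
    values = [minimax_value(child, maximizing=False) for child in root]
    return max(range(len(values)), key=values.__getitem__)

# 2-ply example: MAX chooses among three moves, MIN replies.
tree = ((3, 12, 8), (2, 4, 6), (14, 5, 2))
print(minimax_value(tree, True))   # 3: MIN minimizes in each subtree, MAX maximizes
print(minimax_decision(tree))      # 0: the first move guarantees payoff 3
```

On this 2-ply tree the subtree minima are 3, 2 and 2, so MAX picks move 0 with guaranteed payoff 3 against best play.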

Minimax properties
- Completeness: only if the tree is finite
- Optimality: only if the tree is finite
- Time complexity: O(b^m)
- Space complexity: O(bm) (depth-first exploration)

Minimax properties For chess, b ≈ 35 and m ≈ 100 (= 2 × 50 ply) for reasonable games, so the full tree has roughly 35^100 ≈ 10^154 nodes. Thus, an exact solution is completely infeasible

Minimax properties Minimax separates node generation from node evaluation:
1. generate the tree
2. assign values to the leaves
3. propagate them upwards

Resource limits Suppose we have 100 seconds to decide and we can explore 10^4 nodes/sec: we can afford 10^6 nodes per move. Standard approach: a cutoff test to prune branches (e.g. a depth limit), plus an evaluation function (EVAL) that estimates the desirability of a position

Evaluation function (figures: Black to move, White slightly better; White to move, Black winning) Chess evaluations are typically a linear weighted sum of features: Eval(s) = w_1 f_1(s) + w_2 f_2(s) + ... + w_n f_n(s), e.g. w_1 = 9 with f_1(s) = #white-queens - #black-queens
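As a hedged illustration of the weighted-sum formula, here is a tiny material-only evaluator in Python; the piece weights and the dict-based position encoding are assumptions of this sketch, not a real chess engine's representation.

```python
# Hypothetical sketch of a linear weighted evaluation, Eval(s) = sum_i w_i * f_i(s),
# using material balance only: f_i(s) = (#white pieces of type i) - (#black pieces).

def material_eval(position, weights):
    """position: dict mapping piece type to (white_count, black_count)."""
    score = 0.0
    for piece, w in weights.items():
        white, black = position.get(piece, (0, 0))
        score += w * (white - black)   # one feature per piece type
    return score

# Classic material weights (queen = 9, rook = 5, ...), an assumption of this sketch.
weights = {"queen": 9, "rook": 5, "bishop": 3, "knight": 3, "pawn": 1}
position = {"queen": (1, 0), "rook": (2, 2), "pawn": (6, 7)}
print(material_eval(position, weights))   # 9*1 + 5*0 + 1*(-1) = 8.0
```

A positive score favours White, a negative one Black; real evaluators add positional features (mobility, king safety, pawn structure) as further weighted terms.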

Exact values don t matter Behaviour is preserved under any monotonic transformation of EVAL

Cutting off search MINIMAX-CUTOFF is the same as MINIMAX-VALUE except that:
- TERMINAL? is replaced by CUTOFF?
- UTILITY is replaced by EVAL

Cutting off search Does it work in practice? Without pruning: if b^m = 10^6 and b = 35, then m ≈ 4 ply. A 4-ply lookahead makes a hopeless chess player:
- 4-ply: human beginner
- 8-ply: typical PC, human master
- 12-ply: Deep Blue, Kasparov

Cutting off search The most straightforward approach to controlling the amount of search is to set a fixed depth limit. The depth must be chosen so that the time used does not exceed what the rules of the game allow. A slightly more robust approach is iterative deepening: when time runs out, the program returns the move selected by the deepest completed search
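The iterative-deepening idea can be sketched as follows; the nested-tuple toy tree, the crude mean-of-leaves evaluation at the cutoff, and the depth cap are all assumptions of this sketch, not part of the slides.

```python
# Sketch of iterative deepening for move selection: repeatedly run a
# depth-limited minimax, deeper each time, and return the move chosen by
# the deepest search completed before the deadline.
import time

def eval_node(node):
    """Stand-in EVAL (assumption): a leaf is its utility; an internal node
    gets the mean of its children's estimates."""
    if isinstance(node, (int, float)):
        return node
    return sum(eval_node(c) for c in node) / len(node)

def depth_limited_value(node, depth, maximizing):
    if isinstance(node, (int, float)):      # TERMINAL?
        return node
    if depth == 0:                          # CUTOFF? -> EVAL
        return eval_node(node)
    vals = [depth_limited_value(c, depth - 1, not maximizing) for c in node]
    return max(vals) if maximizing else min(vals)

def iterative_deepening_decision(root, time_budget):
    """Return the move from the deepest search completed within the budget."""
    deadline = time.monotonic() + time_budget
    best_move, depth = 0, 1
    while time.monotonic() < deadline:
        vals = [depth_limited_value(c, depth - 1, False) for c in root]
        best_move = max(range(len(vals)), key=vals.__getitem__)
        depth += 1
        if depth > 10:                      # cap for this toy example
            break
    return best_move

tree = ((3, 12, 8), (2, 4, 6), (14, 5, 2))
print(iterative_deepening_decision(tree, 0.05))   # 0
```

Because each completed iteration overwrites `best_move`, interrupting the loop always leaves the answer from the deepest fully finished search, which is exactly the robustness property the slide describes.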

Cutting off search These approaches can have some disastrous consequences because of the approximate nature of the evaluation function Obviously, a more sophisticated cutoff test is needed In particular, the evaluation function should only be applied to positions that are quiescent, that is, unlikely to exhibit wild swings in value in the near future

Cutting off search This extra search is called a quiescence search: sometimes it is restricted to consider only certain types of moves, such as capture moves, that will quickly resolve the uncertainties in the position

Cutting off search Another problem, the horizon problem, is more difficult to eliminate It arises when the program is facing a move by the opponent that causes serious damage and is ultimately unavoidable (e.g., a pawn that can be promoted to queen by the opponent)

Cutting off search: alpha-beta pruning Fortunately, it is possible to compute the correct minimax decision without looking at every node in the search tree. The process of eliminating a branch of the search tree is called pruning. The standard pruning technique for minimax is called alpha-beta pruning

Cutting off search: alpha-beta pruning General principle: consider a node n somewhere in the tree, such that MAX (player 1) has a choice of moving to that node. If MAX has a better choice m, either at the parent node of n or at any choice point further up, then n will never be reached in actual play. So, once we have found out enough about n (by examining some of its descendants) to reach this conclusion, we can prune it. Pruning does not affect the final result

α-β pruning: an example

α-β pruning example

Why is it called α-β pruning?
- α is the best value (for MAX) found so far along the current path
- if a value V is worse than α, MAX will avoid it: prune that branch
- β is defined similarly for MIN

α-β pruning properties With perfect move ordering, time complexity = O(b^(m/2)):
- doubles the solvable depth of search
- can easily reach depth 8 (good chess player)

The α-β algorithm

defun ALPHA-BETA-SEARCH(game, state) returns action
  max-value ← MAX-VALUE(game, state, -∞, +∞)
  for each <action, value> in SUCCESSORS(game, state) do
    if value = max-value then return action

The α-β algorithm

defun MAX-VALUE(game, state, α, β) returns value
  if CUTOFF?(state) then return EVAL-COST(game, state)
  for each <a, s> in SUCCESSORS(game, state) do
    α ← max(α, MIN-VALUE(game, s, α, β))
    if α ≥ β then return β
  return α

defun MIN-VALUE(game, state, α, β) returns value
  if CUTOFF?(state) then return EVAL-COST(game, state)
  for each <a, s> in SUCCESSORS(game, state) do
    β ← min(β, MAX-VALUE(game, s, α, β))
    if β ≤ α then return α
  return β
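The α-β recursion above can be rendered as runnable Python on the same kind of nested-tuple toy tree used earlier (leaves are utilities, MAX and MIN alternate); this is a sketch, not the deck's EVAL-COST/SUCCESSORS machinery.

```python
# Sketch of MAX-VALUE / MIN-VALUE with α-β cutoffs, folded into one function.
import math

def alpha_beta_value(node, maximizing, alpha=-math.inf, beta=math.inf):
    if isinstance(node, (int, float)):     # TERMINAL?/CUTOFF? -> value
        return node
    if maximizing:
        for child in node:
            alpha = max(alpha, alpha_beta_value(child, False, alpha, beta))
            if alpha >= beta:
                return beta                # β cutoff: remaining children pruned
        return alpha
    else:
        for child in node:
            beta = min(beta, alpha_beta_value(child, True, alpha, beta))
            if beta <= alpha:
                return alpha               # α cutoff
        return beta

tree = ((3, 12, 8), (2, 4, 6), (14, 5, 2))
print(alpha_beta_value(tree, True))        # 3, same as plain minimax
```

On this tree the second subtree is cut off after its first leaf (2 ≤ α = 3), illustrating that pruning changes the work done but never the value returned.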

Deterministic games: state of the art
- Checkers: Chinook ended the 40-year reign of human world champion Marion Tinsley in 1994. It used an endgame database defining perfect play for all positions involving 8 or fewer pieces on the board, a total of 443,748,401,247 positions
- Chess: Deep Blue defeated human world champion Garry Kasparov in a six-game match in 1997. Deep Blue searched 200 million positions per second, used a very sophisticated evaluation function, and undisclosed methods for extending some lines of search up to 40 ply

Deterministic games: state of the art
- Othello: human champions refuse to compete against computers, which are too good
- Go: human champions refuse to compete against computers, which are too bad. In go, b > 300, so most programs use pattern knowledge bases to suggest plausible moves

Non-deterministic games

Non-deterministic games In non-deterministic games, chance is introduced by dice, card-shuffling, etc. A non-deterministic game tree must therefore also contain chance nodes in addition to MAX and MIN nodes

Non-deterministic games Simplified example with coin-flipping:

Algorithm for non-deterministic games EXPECT-MINIMAX deals with chance nodes and gives perfect play (in the game-theoretic sense). EXPECT-MINIMAX is just like MINIMAX, except that we must also handle chance nodes

Algorithm for non-deterministic games Each of the possible positions no longer has a definite minimax value (which in deterministic games was the utility of the leaf reached by best play) Instead, we can only calculate an average or expected value where the average is taken over all the possible dice rolls that could occur

Non-deterministic games Let us get back to the coin-flipping example:

Non-deterministic games Let us get back to the coin-flipping example (figure: the EXPECT-MAX value of the chance node is 3)

Minimax Algorithm for non-det. games

defun EXPECT-MINIMAX-DECISION(game, state) returns action
  value ← -∞
  for each <a, s> in SUCCESSORS(game, state) do
    v ← EXPECT-MINIMAX-VAL(game, s)
    if value < v then action ← a, value ← v
  return action

Minimax Algorithm for non-det. games

defun EXPECT-MINIMAX-VAL(game, state) returns payoff
  if TERMINAL?(state) then return UTILITY(state)
  if state is a MAX node then return the highest EXPECT-MINIMAX-VAL of SUCCESSORS(state)
  if state is a MIN node then return the lowest EXPECT-MINIMAX-VAL of SUCCESSORS(state)
  if state is a CHANCE node then return the probability-weighted average of EXPECT-MINIMAX-VAL of SUCCESSORS(state)
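The EXPECT-MINIMAX-VAL cases can be sketched in Python on an explicitly labelled toy tree; the (kind, children) encoding and the (probability, subtree) pairs at chance nodes are assumptions of this sketch.

```python
# Sketch of EXPECT-MINIMAX-VAL. A leaf is a utility; an internal node is
# (kind, children) with kind in {"max", "min", "chance"}; a chance node's
# children are (probability, subtree) pairs.

def expectiminimax(node):
    if isinstance(node, (int, float)):      # TERMINAL? -> UTILITY
        return node
    kind, children = node
    if kind == "max":
        return max(expectiminimax(c) for c in children)
    if kind == "min":
        return min(expectiminimax(c) for c in children)
    # chance node: expected value = probability-weighted average of children
    return sum(p * expectiminimax(c) for p, c in children)

# Coin flip (p = 0.5 each outcome) between two MIN positions, for each of
# MAX's two moves:
tree = ("max", [
    ("chance", [(0.5, ("min", [2, 4])), (0.5, ("min", [6, 8]))]),
    ("chance", [(0.5, ("min", [1, 3])), (0.5, ("min", [5, 9]))]),
])
print(expectiminimax(tree))   # max(0.5*2 + 0.5*6, 0.5*1 + 0.5*5) = 4.0
```

As the preceding slides note, a position now has an expected value rather than a definite minimax value: MAX here prefers its first move because its expectation (4.0) beats the second's (3.0), even though neither outcome is guaranteed.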

DRAFT α-β algorithm for non-det. games

defun EXP-ALPHA-BETA-SEARCH(game, state) returns an action
  max-value ← EXP-MAX-VALUE(game, state, -∞, +∞)
  for each <action, value> in SUCCESSORS(game, state) do
    if value = max-value then return action

DRAFT The α-β algorithm for non-det. games

defun EXP-MAX-VALUE(game, state, α, β) returns value
  if CUTOFF?(state) then return EVAL-COST(game, state)
  for each <a, s> in SUCCESSORS(game, state) do
    α ← max(α, EXP-MIN-VALUE(game, s, α, β))
    if α ≥ β then return β
  return α

defun EXP-MIN-VALUE(game, state, α, β) returns value
  if CUTOFF?(state) then return EVAL-COST(game, state)
  for each <a, s> in SUCCESSORS(game, state) do
    β ← min(β, EXP-MAX-VALUE(game, s, α, β))
    if β ≤ α then return α
  return β

Summary
- Games are fun to work on! (and dangerous)
- They illustrate several important points about AI:
  - perfection is unattainable: must approximate
  - it is a good idea to think about what to think about
  - uncertainty constrains the assignment of values to states
- Games are to AI as grand prix racing is to automobile design