Adversarial Search Aka Games


Adversarial Search (aka Games). Chapter 5. Some material adopted from notes by Charles R. Dyer, U. of Wisconsin-Madison.

Overview: Game playing; State of the art and resources; Framework; Game trees; Minimax; Alpha-beta pruning; Adding randomness.

Why study games? Interesting, hard problems that require minimal initial structure. Clear criteria for success. A way to study problems involving {hostile, adversarial, competing} agents and the uncertainty of interacting with the natural world. People have used them to assess their intelligence. Fun, good, easy to understand, with PR potential. Games often define very large search spaces: chess has ~35^100 nodes in its search tree and ~10^40 legal states.

Chess: State of the art. Deep Blue beat Garry Kasparov in 1997. Garry Kasparov vs. Deep Junior (Feb 2003): tie! Kasparov vs. X3D Fritz (November 2003): tie! Checkers: Chinook is the world champion. Checkers has been solved exactly: it's a draw! Go: computers starting to achieve expert level. Bridge: expert computer players exist, but no world champions yet. Poker: Poki regularly beats human experts. Check out the U. Alberta Games Group.

Chinook. Chinook is the World Man-Machine Checkers Champion, developed by researchers at the University of Alberta. It earned this title by competing in human tournaments, winning the right to play for the (human) world championship, and eventually defeating the best players in the world. Play Chinook online. One Jump Ahead: Challenging Human Supremacy in Checkers, Jonathan Schaeffer, 1998. See "Checkers Is Solved", J. Schaeffer et al., Science, v317, n5844, pp. 1518-1522, AAAS, 2007.

Chess: early days. 1948: Norbert Wiener's Cybernetics describes how a chess program could be developed using a depth-limited minimax search with an evaluation function. 1950: Claude Shannon publishes Programming a Computer for Playing Chess. 1951: Alan Turing develops on paper the first program capable of playing a full game of chess. 1962: Kotok and McCarthy (MIT) develop the first program to play credibly. 1967: Mac Hack Six, by Richard Greenblatt et al. (MIT), defeats a person in regular tournament play.

Ratings of human & computer chess champions


Othello: Murakami vs. Logistello. Takeshi Murakami, World Othello Champion. 1997: The Logistello software crushed Murakami, 6 games to 0 (Logistello has since been open sourced). Humans cannot win against it. Othello, with ~10^28 states, is still not solved.


How can we do it?

Typical simple case for a game: 2-person game. Players alternate moves. Zero-sum: one player's loss is the other's gain. Perfect information: both players have access to complete information about the state of the game; no information is hidden from either player. No chance (e.g., using dice) involved. Examples: Tic-Tac-Toe, Checkers, Chess, Go, Nim, Othello. But not: Bridge, Solitaire, Backgammon, Poker, Rock-Paper-Scissors, ...

Can we use uninformed search? Heuristic search? Local search? Constraint-based search?

How to play a game. A way to play such a game is to: consider all the legal moves you can make; compute the new position resulting from each move; evaluate each position to determine which is best; make that move; wait for your opponent to move, and repeat. (A minimal sketch of this loop appears below.) Key problems are: representing the board (i.e., game state); generating all legal next boards; evaluating a position.
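A minimal Python sketch of this loop, assuming a hypothetical game interface (the names legal_moves, result, and evaluate are illustrative, not from the slides):

# Greedy one-ply player: evaluate each successor position, pick the best.
# The game interface (legal_moves, result, evaluate) is an assumption.
def choose_move(game, state):
    best_move, best_value = None, float("-inf")
    for move in game.legal_moves(state):      # consider all legal moves
        successor = game.result(state, move)  # compute the new position
        value = game.evaluate(successor)      # evaluate: which is best?
        if value > best_value:
            best_move, best_value = move, value
    return best_move                          # make that move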

Evaluation function. An evaluation function or static evaluator is used to evaluate the "goodness" of a game position. Contrast with heuristic search, where the evaluation function is a non-negative estimate of the cost from the start node to a goal passing through the given node. The zero-sum assumption permits a single function to describe the goodness of a board for both players: f(n) >> 0: position n is good for me and bad for you; f(n) << 0: position n is bad for me and good for you; f(n) near 0: position n is a neutral position; f(n) = +infinity: win for me; f(n) = -infinity: win for you.

Evaluation function examples. For Tic-Tac-Toe: f(n) = [# of my open 3-lengths] - [# of your open 3-lengths], where a 3-length is a complete row, column, or diagonal, and an open one contains no opponent marks. Alan Turing's function for chess: f(n) = w(n)/b(n), where w(n) is the sum of the point values of White's pieces and b(n) the sum of Black's. Traditional piece values are: pawn: 1; knight: 3; bishop: 3; rook: 5; queen: 9.
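As a hedged illustration, the Tic-Tac-Toe evaluator above might be coded like this (the board representation and names are our assumptions, not from the slides):

# Tic-tac-toe f(n) = [# my open 3-lengths] - [# your open 3-lengths].
# board is a 3x3 list of 'X', 'O', or None; me/you are player symbols.
LINES = ([[(r, c) for c in range(3)] for r in range(3)] +               # rows
         [[(r, c) for r in range(3)] for c in range(3)] +               # columns
         [[(i, i) for i in range(3)], [(i, 2 - i) for i in range(3)]])  # diagonals

def open_lines(board, player):
    # A 3-length is "open" for player if it contains no opponent marks.
    return sum(all(board[r][c] in (player, None) for r, c in line)
               for line in LINES)

def evaluate(board, me, you):
    return open_lines(board, me) - open_lines(board, you)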

Evaluation function examples. Most evaluation functions are specified as a weighted sum of features: f(n) = w1*feat1(n) + w2*feat2(n) + ... + wk*featk(n). Example features for chess are piece count, piece values, piece placement, squares controlled, etc. IBM's chess program Deep Blue (circa 1996) had >8K features in its evaluation function.
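A weighted sum is a one-liner in code; the feature functions and weights below are toy placeholders, not Deep Blue's actual features:

# f(n) = w1*feat1(n) + w2*feat2(n) + ... + wk*featk(n)
def weighted_eval(state, features, weights):
    return sum(w * f(state) for f, w in zip(features, weights))

# Toy usage: two features over a string "board".
features = [lambda s: s.count("X"), lambda s: -s.count("O")]
weights = [1.0, 1.0]
print(weighted_eval("XXO", features, weights))  # 2*1.0 + (-1)*1.0 = 1.0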

But that's not how people play. People use look-ahead: enumerate actions, consider the opponent's possible responses, and repeat. Producing a complete game tree is only possible for simple games, so we generate a partial game tree for some number of plies. Move = each player takes a turn; ply = one player's turn. What do we do with the game tree?

We can easily imagine generating a complete game tree for Tic-Tac-Toe: taking board symmetries into account, there are 138 terminal positions: 91 wins for X, 44 for O, and 3 draws.

Game trees. Problem spaces for typical games are trees. The root node is the current board configuration; the player must decide the best single move to make next. A static evaluator function rates a board position: f(board) is a real number, > 0 for me, < 0 for my opponent. Arcs represent possible legal moves for a player. If it is my turn to move, the root is labeled a "MAX" node; otherwise it's a "MIN" node. Each tree level's nodes are all MAX or all MIN; nodes at level i are of the opposite kind from those at level i+1.

Game Tree for Tic-Tac-Toe (figure: levels alternate between MAX nodes, where it is MAX's play, and MIN nodes, where it is MIN's play, down to a terminal state that is a win for MAX; here, symmetries are used to reduce the branching factor).

Minimax procedure. Create a MAX node with the current board configuration. Expand nodes down to some depth (i.e., number of plies) of lookahead in the game. Apply the evaluation function at each leaf node. Back up values for each non-leaf node until a value is computed for the root node: at MIN nodes the value is the minimum of the children's values; at MAX nodes it is the maximum of the children's values. Choose the move to the child node whose backed-up value determined the value at the root. A minimal sketch follows below.
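A minimal Python sketch of the procedure, using the same hypothetical game interface as earlier (is_terminal is another assumed name):

# Depth-limited minimax: apply the static evaluator at the lookahead
# horizon, back values up through alternating MIN and MAX levels.
def minimax_value(game, state, depth, maximizing):
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state)           # leaf: static evaluation
    values = [minimax_value(game, game.result(state, m), depth - 1,
                            not maximizing)
              for m in game.legal_moves(state)]
    return max(values) if maximizing else min(values)

def minimax_decision(game, state, depth):
    # Choose the move to the child whose backed-up value is best for MAX.
    return max(game.legal_moves(state),
               key=lambda m: minimax_value(game, game.result(state, m),
                                           depth - 1, False))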

Minimax theorem. Intuition: assume your opponent is at least as smart as you and play accordingly; if she's not, you can only do better! Von Neumann, J.: Zur Theorie der Gesellschaftsspiele. Math. Annalen 100 (1928), 295-320. For every 2-person, zero-sum game with finite strategies, there is a value V and a mixed strategy for each player, such that (a) given player 2's strategy, the best payoff possible for player 1 is V, and (b) given player 1's strategy, the best payoff possible for player 2 is -V. You can think of this as: minimizing your maximum possible loss; maximizing your minimum possible gain.
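In symbols (our rendering of the standard statement, not from the slides; A is the payoff matrix for player 1, and x, y range over mixed strategies, i.e., probability distributions over the players' finite strategy sets):

% Minimax theorem: the order of maximizing and minimizing does not
% matter; both sides equal the value V of the game.
\[
V \;=\; \max_{x} \min_{y} \; x^{\top} A \, y \;=\; \min_{y} \max_{x} \; x^{\top} A \, y
\]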

Minimax Algorithm (figure: an example tree with static evaluator values at the leaves, MAX and MIN levels, and the backed-up values; an arrow marks the move selected by minimax).

Partial Game Tree for Tic-Tac-Toe (figure), with f(n) = +1 if the position is a win for X, f(n) = -1 if a win for O, and f(n) = 0 if a draw.

Why use backed-up values? Intuition: if the evaluation function is good, doing look-ahead and backing up values with minimax should be better. A non-leaf node N's backed-up value is the value of the best state that MAX can reach at depth h if MIN plays well ("well" meaning by the same criterion as MAX applies to itself). If e is good, then the backed-up value is a better estimate of the goodness of STATE(N) than e(STATE(N)). We use a lookahead horizon h because the time to choose a move is limited.

Minimax Tree (figure legend: MAX nodes, MIN nodes, f values, and the values computed by minimax).

Is that all there is to simple games?

Alpha-beta pruning. Improve performance of the minimax algorithm through alpha-beta pruning. "If you have an idea that is surely bad, don't take the time to see how truly awful it is" -- Pat Winston. (Figure: once one MIN child of the MAX root has a known value, a sibling MIN node whose value is already no larger than that cannot affect the value of the root, so we don't need to compute its exact value.)

Alpha-beta pruning. Traverse the search tree in depth-first order. At a MAX node n, alpha(n) = maximum value found so far; at a MIN node n, beta(n) = minimum value found so far. Alpha values start at -infinity and only increase, while beta values start at +infinity and only decrease. Beta cutoff: given a MAX node N, cut off search below N (i.e., don't examine any more of its children) if alpha(N) >= beta(i) for some MIN-node ancestor i of N. Alpha cutoff: stop searching below a MIN node N if beta(N) <= alpha(i) for some MAX-node ancestor i of N.

Alpha-Beta Tic-Tac-Toe Example (figure sequence).
The beta value of a MIN node is an upper bound on the final backed-up value; it can never increase.
The alpha value of a MAX node is a lower bound on the final backed-up value; it can never decrease.
Search can be discontinued below any MIN node whose beta value is less than or equal to the alpha value of one of its MAX ancestors.

Another alpha-beta example (figure: a MAX/MIN tree in which two subtrees are pruned once their values cannot affect the root).

Alpha-Beta Tic-Tac-Toe Example, continued (figure sequence: a step-by-step alpha-beta traversal of a larger example tree with numeric leaf values, updating alpha and beta at each node and cutting off subtrees where the bounds cross).

Alpha-beta algorithm

function MAX-VALUE (state, α, β)
  ;; α = best MAX so far; β = best MIN
  if TERMINAL-TEST (state) then return UTILITY(state)
  v := -∞
  for each s in SUCCESSORS (state) do
    v := MAX (v, MIN-VALUE (s, α, β))
    if v >= β then return v
    α := MAX (α, v)
  end
  return v

function MIN-VALUE (state, α, β)
  if TERMINAL-TEST (state) then return UTILITY(state)
  v := +∞
  for each s in SUCCESSORS (state) do
    v := MIN (v, MAX-VALUE (s, α, β))
    if v <= α then return v
    β := MIN (β, v)
  end
  return v
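A runnable Python transcription of this pseudocode, against the same hypothetical game interface used in the earlier sketches:

import math

# MAX-VALUE: alpha-beta from a MAX node; UTILITY is game.evaluate here.
def max_value(game, state, alpha, beta):
    if game.is_terminal(state):
        return game.evaluate(state)
    v = -math.inf
    for move in game.legal_moves(state):
        v = max(v, min_value(game, game.result(state, move), alpha, beta))
        if v >= beta:                  # beta cutoff: MIN ancestor won't allow v
            return v
        alpha = max(alpha, v)
    return v

# MIN-VALUE: alpha-beta from a MIN node.
def min_value(game, state, alpha, beta):
    if game.is_terminal(state):
        return game.evaluate(state)
    v = math.inf
    for move in game.legal_moves(state):
        v = min(v, max_value(game, game.result(state, move), alpha, beta))
        if v <= alpha:                 # alpha cutoff: MAX ancestor won't allow v
            return v
        beta = min(beta, v)
    return v

Called from the root as max_value(game, start_state, -math.inf, math.inf).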

Effectiveness of alpha-beta. Alpha-beta is guaranteed to compute the same value for the root node as minimax, but with less computation. Worst case: no pruning; we examine b^d leaf nodes, where nodes have b children and a d-ply search is done. Best case: we examine only about b^(d/2) leaf nodes, so you can search twice as deep as minimax in the same time! The best case occurs when each player's best move is the first alternative examined; e.g., with b = 35 and d = 12, that is roughly 35^6 ≈ 1.8×10^9 leaves instead of 35^12 ≈ 3.4×10^18. In Deep Blue's alpha-beta pruning, the average branching factor at a node was ~6 instead of ~35!

Other Improvements. Adaptive horizon + iterative deepening. Extended search: retain k > 1 best paths (not just one) and extend the tree to greater depth below their leaf nodes, to help deal with the horizon effect. Singular extension: if a move is obviously better than the others in a node at horizon h, expand it. Use transposition tables to deal with repeated states (a small sketch follows below). Null-move search: assume a player forfeits a move and do a shallow analysis of the tree; the result must surely be worse than if the player had moved, so this can be used to recognize moves that should be explored fully.
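As a hedged sketch of the transposition-table idea (caching backed-up values of repeated states; this simplification layers the table on plain minimax and ignores its interaction with alpha-beta bounds, and all names are the illustrative ones used earlier):

# Transposition table: memoize backed-up values by (state, depth, player)
# so that repeated states are searched only once. States must be hashable.
def minimax_tt(game, state, depth, maximizing, table=None):
    if table is None:
        table = {}
    key = (state, depth, maximizing)
    if key in table:                          # repeated state: reuse value
        return table[key]
    if depth == 0 or game.is_terminal(state):
        value = game.evaluate(state)
    else:
        children = [minimax_tt(game, game.result(state, m), depth - 1,
                               not maximizing, table)
                    for m in game.legal_moves(state)]
        value = max(children) if maximizing else min(children)
    table[key] = value
    return value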