Foundations of AI. 6. Board Games. Search Strategies for Games, Games with Chance, State of the Art


Foundations of AI
6. Board Games: Search Strategies for Games, Games with Chance, State of the Art
Wolfram Burgard, Andreas Karwath, Bernhard Nebel, and Martin Riedmiller

Contents
- Board Games
- Minimax Search
- Alpha-Beta Search
- Games with an Element of Chance
- State of the Art

Why Board Games?
- Board games are one of the oldest branches of AI (Shannon and Turing, 1950).
- Board games present a very abstract and pure form of competition between two opponents and clearly require a form of intelligence.
- The states of a game are easy to represent.
- The possible actions of the players are well-defined.
- Realization of the game as a search problem: the world states are fully accessible.
- It is nonetheless a contingency problem, because the characteristics of the opponent are not known in advance.

Problems
Board games are not only difficult because they are contingency problems, but also because the search trees can become astronomically large.
Examples:
- Chess: on average 35 possible actions from every position and about 100 moves per game, giving 35^100 ≈ 10^150 nodes in the search tree (with only about 10^40 legal chess positions).
- Go: on average 200 possible actions and about 300 moves, giving 200^300 ≈ 10^700 nodes.
Good game programs delete irrelevant branches of the game tree, use good evaluation functions for in-between states, and look ahead as many moves as possible.
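
The quoted exponents are rounded order-of-magnitude figures; under that caveat, the arithmetic can be checked with a few lines of Python (an illustrative sketch, not part of the original slides):

```python
import math

# Order-of-magnitude check of the game-tree sizes quoted above.
# log10(35^100) = 100 * log10(35), log10(200^300) = 300 * log10(200).
chess_exponent = 100 * math.log10(35)   # chess: ~35 actions, ~100 moves
go_exponent = 300 * math.log10(200)     # go: ~200 actions, ~300 moves
print(round(chess_exponent))  # 154, i.e. 35^100 is roughly 10^154
print(round(go_exponent))     # 690, i.e. 200^300 is roughly 10^690
```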

Terminology of Two-Person Board Games
- Players are MAX and MIN, where MAX begins.
- Initial position (e.g., board arrangement).
- Operators (= legal moves).
- Termination test determines when the game is over; terminal state = game over.
- Strategy: in contrast to regular search, where a path from start to goal is a solution, MAX must come up with a strategy that reaches a terminal state regardless of what MIN does, i.e., correct reactions to all of MIN's moves.

Tic-Tac-Toe Example
Every level of the search tree (also called the game tree) is labeled with the name of the player whose turn it is (MAX and MIN levels). When it is possible, as it is here, to produce the full game tree, the minimax algorithm delivers an optimal strategy for MAX.

Minimax
1. Generate the complete game tree using depth-first search.
2. Apply the utility function to each terminal state.
3. Beginning with the terminal states, determine the utility of the predecessor nodes as follows:
   - MIN node: the value is the minimum of the values of its successors.
   - MAX node: the value is the maximum of the values of its successors.
From the initial state (the root of the game tree), MAX chooses the move that leads to the highest value (the minimax decision).
Note: Minimax assumes that MIN plays perfectly. Every weakness (i.e., every mistake MIN makes) can only improve the result for MAX.

Minimax Example (figure)

Minimax Algorithm
Recursively calculates the best move from the initial state.
Note: Minimax is only feasible when the game tree is not too deep. Otherwise, the minimax value must be approximated.
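
As a concrete sketch, minimax over a small explicit game tree can be written as follows (the nested-list tree representation and the function name are illustrative assumptions, not the lecture's pseudocode):

```python
# Terminal states are plain numbers (their utility); inner nodes are lists
# of successor nodes. MAX and MIN alternate level by level.

def minimax(node, is_max):
    """Return the minimax value of `node` for the player to move."""
    if isinstance(node, (int, float)):   # terminal state: utility is known
        return node
    values = [minimax(child, not is_max) for child in node]
    return max(values) if is_max else min(values)

# Tiny example tree, MAX to move at the root:
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(minimax(tree, True))  # 3: MAX picks the branch whose MIN value is largest
```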

Evaluation Function
When the search space is too large, the game tree can only be created to a certain depth. The art is to evaluate the playing positions at the leaves correctly.
Example of simple evaluation criteria in chess:
- Material value: pawn 1, knight/bishop 3, rook 5, queen 9.
- Other criteria: king safety, good pawn structure.
- Rule of thumb: a 3-point advantage means certain victory.
The choice of evaluation function is decisive! The value assigned to a position should reflect the chances of winning, i.e., the chance of winning with a 1-point advantage should be rated lower than with a 3-point advantage.

Evaluation Function: General
The preferred evaluation functions are weighted linear functions:
w_1 f_1 + w_2 f_2 + ... + w_n f_n
where the w_i are the weights and the f_i are the features (e.g., w_1 = 3, f_1 = number of our own knights on the board).
Assumption: the criteria are independent. The weights can be learned. The criteria themselves, however, must be given (no one knows how they could be learned).
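
A weighted linear evaluation takes only a few lines; the chess feature vector below is a hypothetical example using the material values from the previous slide:

```python
def evaluate(features, weights):
    """Weighted linear evaluation: w_1*f_1 + w_2*f_2 + ... + w_n*f_n."""
    return sum(w * f for w, f in zip(weights, features))

# Hypothetical material features (pawns, knights, bishops, rooks, queens),
# each counted as own pieces minus the opponent's pieces:
weights  = [1, 3, 3, 5, 9]
features = [2, 0, 1, 0, 0]          # two pawns and one bishop ahead
print(evaluate(features, weights))  # 5
```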

When Should We Stop Growing the Tree?
- Fixed-depth search.
- Better: iterative deepening search (with cut-off at the goal limit),
- but only evaluate "peaceful" positions that won't cause large fluctuations in the evaluation function in the following moves; e.g., follow a sequence of forced moves through to the end.
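
The cut-off idea can be sketched as follows; `is_quiet`, `successors`, and `evaluate` are hypothetical helpers standing in for a real game engine, and the tree convention (numbers as terminal utilities, lists as inner nodes) is only for illustration:

```python
# Depth-limited minimax that only evaluates "peaceful" (quiet) positions:
# if the depth limit is reached but the position is not quiet, keep searching.

def cutoff_value(state, depth, is_quiet, successors, evaluate, maximizing):
    if depth <= 0 and is_quiet(state):
        return evaluate(state)            # safe to evaluate a quiet position
    children = successors(state)
    if not children:
        return evaluate(state)            # terminal state
    values = [cutoff_value(c, depth - 1, is_quiet, successors, evaluate,
                           not maximizing)
              for c in children]
    return max(values) if maximizing else min(values)

# Example with a tiny explicit tree; leaves count as quiet:
succ = lambda s: s if isinstance(s, list) else []
ev = lambda s: s
quiet = lambda s: isinstance(s, (int, float))
print(cutoff_value([[3, 5], [2, 9]], 2, quiet, succ, ev, True))  # 3
```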

Horizon Problem
Black has a slight material advantage but will eventually lose (the pawn becomes a queen). A fixed-depth search cannot detect this, because it thinks the bad outcome can be avoided: it lies on the other side of the horizon, since black concentrates on checking with the rook, to which white must react.

Alpha-Beta Pruning
We do not need to consider all nodes.

Alpha-Beta Pruning: General
If m > n, we will never reach node n in the game (figure).

Alpha-Beta Pruning
Minimax algorithm with depth-first search, where:
- α = the value of the best (i.e., highest-value) choice we have found so far at any choice point along the path for MAX.
- β = the value of the best (i.e., lowest-value) choice we have found so far at any choice point along the path for MIN.

When Can We Prune?
The following always holds:
- α values of MAX nodes can never decrease.
- β values of MIN nodes can never increase.
Therefore:
(1) Prune below a MIN node whose β-bound is less than or equal to the α-bound of its MAX predecessor.
(2) Prune below a MAX node whose α-bound is greater than or equal to the β-bound of its MIN predecessor.
Alpha-beta pruning provides the same result as the complete minimax search to the same depth, because only irrelevant nodes are eliminated.

Alpha-Beta Search Algorithm
Initial call with MAX-VALUE(initial-state, −∞, +∞).
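
A minimal sketch of the algorithm, using the same explicit-tree convention as before (terminal states are plain numbers, inner nodes are lists of successors); pruning rules (1) and (2) appear as the early returns:

```python
import math

def max_value(node, alpha, beta):
    if isinstance(node, (int, float)):
        return node
    v = -math.inf
    for child in node:
        v = max(v, min_value(child, alpha, beta))
        if v >= beta:          # rule (2): the MIN node above will avoid this
            return v
        alpha = max(alpha, v)
    return v

def min_value(node, alpha, beta):
    if isinstance(node, (int, float)):
        return node
    v = math.inf
    for child in node:
        v = min(v, max_value(child, alpha, beta))
        if v <= alpha:         # rule (1): the MAX node above already has better
            return v
        beta = min(beta, v)
    return v

# Initial call mirrors MAX-VALUE(initial-state, -inf, +inf):
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(max_value(tree, -math.inf, math.inf))  # 3, same result as plain minimax
```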

Alpha-Beta Pruning Example (figure sequence)

Efficiency Gain
- Alpha-beta search cuts the largest amount off the tree when we examine the best move first.
- In the best case (always the best move first), the search expenditure is reduced to O(b^(d/2)).
- In the average case (randomly distributed moves), the search expenditure is reduced to O((b/log b)^d); for b < 100, we attain O(b^(3d/4)).
- Practical case: a simple ordering heuristic brings the performance close to the best case. We can search twice as deep in the same amount of time; in chess, we can thus reach a depth of 6-7 moves.
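
The ordering heuristic mentioned above can be as simple as sorting successors by their static evaluation before searching them (an illustrative sketch; `evaluate` is a stand-in for a real evaluation function):

```python
def ordered_children(children, evaluate, maximizing):
    """Sort successors so the most promising one is searched first."""
    return sorted(children, key=evaluate, reverse=maximizing)

# For a MAX node, try the child with the highest static evaluation first;
# here the children are plain values, so evaluation is the identity:
print(ordered_children([2, 14, 5], evaluate=lambda c: c, maximizing=True))
# [14, 5, 2]
```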

Games that Include an Element of Chance
White has just rolled 6-5 and has 4 legal moves.

Game Tree for Backgammon
In addition to MIN and MAX nodes, we need chance nodes (for the dice).

Calculation of the Expected Value
Utility function for chance nodes C over MAX:
- d_i: possible dice rolls
- P(d_i): probability of obtaining that roll
- S(C, d_i): positions attainable from C with roll d_i
- utility(s): evaluation of position s

expectimax(C) = Σ_i P(d_i) · max_{s ∈ S(C, d_i)} utility(s)

expectimin is defined likewise (with min instead of max).
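
Assuming each die roll d_i is paired with P(d_i) and the utilities of the positions in S(C, d_i), the two formulas translate directly (the data layout and names below are illustrative, not from the slides):

```python
# `outcomes` is a list of (probability, utilities) pairs, one per die roll:
# probability = P(d_i), utilities = [utility(s) for s in S(C, d_i)].

def expectimax(outcomes):
    return sum(p * max(utilities) for p, utilities in outcomes)

def expectimin(outcomes):
    return sum(p * min(utilities) for p, utilities in outcomes)

# Two equally likely rolls, each leading to a few MAX choices:
outcomes = [(0.5, [3, 1]), (0.5, [7, 2])]
print(expectimax(outcomes))  # 0.5*3 + 0.5*7 = 5.0
print(expectimin(outcomes))  # 0.5*1 + 0.5*2 = 1.5
```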

Problems
- Order-preserving transformations on the evaluation values can change the best move: with chance nodes, the absolute magnitudes of the evaluations matter, not just their order.
- Search costs increase: instead of O(b^d), we get O((b·n)^d), where n is the number of possible dice outcomes. In backgammon (n = 21, b = 20, but can be up to 4000), a sensible maximum for d is 2.

Card Games
- Recently, card games such as bridge and poker have been addressed as well.
- One approach: simulate play with open cards and then average over all possible deals (or run a Monte Carlo simulation), using minimax (perhaps modified).
- Pick the move with the best expected result (often all moves lead to a loss, but some give better results than others).
- This "averaging over clairvoyance" is strictly speaking incorrect, but appears to give reasonable results.
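
The averaging-over-clairvoyance approach can be sketched as follows; `deal_hidden_cards` and `evaluate_move` are hypothetical stand-ins for a deal sampler and an open-cards minimax evaluation:

```python
# For each candidate move, sample possible deals of the hidden cards, evaluate
# the move with open cards in each sampled world, and pick the best average.

def monte_carlo_move(moves, num_samples, deal_hidden_cards, evaluate_move):
    best_move, best_avg = None, float('-inf')
    for move in moves:
        total = 0.0
        for _ in range(num_samples):
            deal = deal_hidden_cards()          # one possible world
            total += evaluate_move(move, deal)  # e.g. minimax with open cards
        avg = total / num_samples
        if avg > best_avg:
            best_move, best_avg = move, avg
    return best_move
```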

State of the Art
Checkers, draughts (by international rules): the program CHINOOK is the official world champion in man-machine competition (acknowledged by the ACF and EDA) and the highest-rated player:
- CHINOOK: 2712
- Ron King: 2632
- Asa Long: 2631
- Don Lafferty: 2625
Backgammon: the BKG program defeated the official world champion in 1980. A newer program, TD-Gammon, is among the top three players.
Othello: very good, even on ordinary computers. In 1997, the Logistello program defeated the human world champion.
Go: the best programs (Zen, MoGo, CrazyStone) are rated as strong amateurs (1 kyu / 1 dan) on the Internet Go servers. However, it is usually easy to adapt to the weaknesses of these programs.

Chess (1)
- Chess is the Drosophila of AI research.
- A limited number of rules produces an unlimited number of courses of play: in a game of 40 moves, there are 1.5 × 10^128 possible courses of play.
- Victory comes through logic, intuition, creativity, and previous knowledge.
- Only special chess intelligence is required, no general knowledge.

Chess (2)
In 1997, world chess champion Garry Kasparov was beaten by a computer, Deep Blue (IBM Thomas J. Watson Research Center), in a match of six games.
- Special hardware (32 processors with 8 chips each, 2 million calculations per second)
- Heuristic search
- Case-based reasoning and learning techniques
- 1996: knowledge based on 600,000 chess games
- 1997: knowledge based on 2 million chess games
- Training by grand masters
A duel between the "machine-like human" Kasparov and the "human machine" Deep Blue.

Chess (3)
Nowadays, ordinary PC hardware is enough:

Name           Strength (Elo)
Rybka 2.3.1    2962
G. Kasparov    2828
V. Anand       2758
A. Karpov      2710
Deep Blue      2680

But note that machine Elo ratings are not strictly comparable to human Elo ratings.

The Reasons for Success
- Alpha-beta search with dynamic decision-making for uncertain positions
- Good (but usually simple) evaluation functions
- Large databases of opening moves
- Very large endgame databases (for checkers, all 10-piece situations)
- And very fast, parallel processors!

Summary
- A game can be defined by the initial state, the operators (legal moves), a termination test, and a utility function (outcome of the game).
- In two-player board games, the minimax algorithm can determine the best move by enumerating the entire game tree.
- The alpha-beta algorithm produces the same result but is more efficient because it prunes away irrelevant branches.
- Usually, it is not feasible to construct the complete game tree, so the utility of some states must be determined by an evaluation function.
- Games of chance can be handled by an extension of the alpha-beta algorithm.