Foundations of Artificial Intelligence
6. Board Games: Search Strategies for Games, Games with Chance, State of the Art
Joschka Boedecker, Wolfram Burgard, and Bernhard Nebel
Albert-Ludwigs-Universität Freiburg, May 12, 2017

Contents
1 Board Games
2 Minimax Search
3 Alpha-Beta Search
4 Games with an Element of Chance
5 State of the Art

Why Board Games?
Board games are one of the oldest branches of AI (Shannon and Turing, 1950). They present a very abstract and pure form of competition between two opponents and clearly require a form of intelligence.
The states of a game are easy to represent.
The possible actions of the players are well-defined.
This allows the realization of the game as a search problem: the individual states are fully accessible.
It is nonetheless a contingency problem, because the characteristics of the opponent are not known in advance.

Problems
Board games are not only difficult because they are contingency problems, but also because the search trees can become astronomically large.
Examples:
Chess: On average 35 possible actions from every position; often, games have 50 moves per player, resulting in a search depth of 100: 35^100 ≈ 10^150 nodes in the search tree (while there are only about 10^40 legal chess positions).
Go: On average 200 possible actions and ca. 300 moves: 200^300 ≈ 10^700 nodes.
Good game programs delete irrelevant branches of the game tree, use good evaluation functions for in-between states, and look ahead as many moves as possible.
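A quick sanity check on these magnitudes: the node count of a tree with branching factor b and depth d is about b^d, computed here in log space to avoid astronomically large integers (a minimal Python sketch; the branching factors and depths are the averages quoted above, and the exact exponents come out slightly different from the rounded figures on the slide):

import math

def log10_nodes(b, d):
    # log10(b^d) = d * log10(b)
    return d * math.log10(b)

print(round(log10_nodes(35, 100)))   # 154, i.e., 35^100 is roughly 10^154
print(round(log10_nodes(200, 300)))  # 690, i.e., 200^300 is roughly 10^690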

Terminology of Two-Person Board Games
Players are max and min, where max begins.
Initial position (e.g., board arrangement).
Operators (= legal moves).
Termination test determines when the game is over; a terminal state is a state in which the game is over.
Strategy: In contrast to regular search, where a path from beginning to end is simply a solution, max must come up with a strategy to reach a terminal state regardless of what min does, i.e., correct reactions to all of min's moves.

Tic-Tac-Toe Example
[Figure: partial game tree for tic-tac-toe, with alternating MAX (X) and MIN (O) levels from the empty board down to terminal states with utilities −1, 0, and +1.]
Every level of the search tree, also called the game tree, is labeled with the player whose turn it is (max and min levels). When it is possible, as it is here, to produce the full search tree (game tree), the minimax algorithm delivers an optimal strategy for max.

Minimax
1. Generate the complete game tree using depth-first search.
2. Apply the utility function to each terminal state.
3. Beginning with the terminal states, determine the utility of the predecessor nodes as follows:
   If the node is a min node, its value is the minimum of the values of its successor nodes.
   If the node is a max node, its value is the maximum of the values of its successor nodes.
4. From the initial state (the root of the game tree), max chooses the move that leads to the highest value (the minimax decision).
Note: Minimax assumes that min plays perfectly. Every weakness (i.e., every mistake min makes) can only improve the result for max.

Minimax Example
[Figure: a two-ply game tree with minimax values backed up from the leaves to the root.]

Minimax Algorithm
Recursively calculates the best move from the initial state.

function MINIMAX-DECISION(state) returns an action
  return arg max_{a ∈ ACTIONS(state)} MIN-VALUE(RESULT(state, a))

function MAX-VALUE(state) returns a utility value
  if TERMINAL-TEST(state) then return UTILITY(state)
  v ← −∞
  for each a in ACTIONS(state) do
    v ← MAX(v, MIN-VALUE(RESULT(state, a)))
  return v

function MIN-VALUE(state) returns a utility value
  if TERMINAL-TEST(state) then return UTILITY(state)
  v ← +∞
  for each a in ACTIONS(state) do
    v ← MIN(v, MAX-VALUE(RESULT(state, a)))
  return v

Figure 5.3: An algorithm for calculating minimax decisions. It returns the action corresponding to the best possible move, that is, the move that leads to the outcome with the best utility, under the assumption that the opponent plays to minimize utility. The functions MAX-VALUE and MIN-VALUE go through the whole game tree, all the way to the leaves, to determine the backed-up value of a state. The notation arg max_{a ∈ S} f(a) computes the element a of set S that has the maximum value of f(a).
Note: Minimax can only be applied to game trees that are not too deep. Otherwise, the minimax value must be approximated at a certain level.
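The same backup in runnable form, as a minimal Python sketch (not the lecture's reference implementation). It operates on an explicit game tree given as nested lists of leaf utilities; the concrete numbers reuse the alpha-beta example further below, with the two leaves that example never visits filled in arbitrarily:

def minimax(node, is_max):
    # Leaves are numbers (utilities); internal nodes are lists of children.
    if isinstance(node, (int, float)):
        return node
    values = [minimax(child, not is_max) for child in node]
    return max(values) if is_max else min(values)

# Two-ply tree: a max root over three min nodes.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(minimax(tree, True))  # backed-up minimax value: 3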

Evaluation Function
When the search tree is too large, it can be expanded only to a certain depth. The art is to correctly evaluate the playing positions at the leaves of the tree at that depth.
Example of simple evaluation criteria in chess:
Material value: pawn 1, knight/bishop 3, rook 5, queen 9.
Other criteria: king safety, good pawn structure.
Rule of thumb: a three-point advantage means certain victory.
The choice of the evaluation function is decisive! The value assigned to a state of play should reflect the chances of winning, i.e., the chance of winning with a one-point advantage should be rated lower than with a three-point advantage.

Evaluation Function: General
The preferred evaluation functions are weighted linear functions:
w_1 f_1 + w_2 f_2 + ... + w_n f_n
where the w_i are the weights and the f_i are the features [e.g., w_1 = 3, f_1 = number of our own knights on the board].
The above linear sum makes the strong assumption that the contributions of all features are independent (not true: e.g., bishops are more powerful in the endgame, when there is more space).
The weights can be learned. The features, however, are often designed by human intuition and understanding.
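A minimal Python sketch of such a weighted linear evaluation, using the material values quoted above as weights and material differences as features (the piece letters and the dictionary-of-counts interface are assumptions for illustration):

# Material weights from the slide: pawn 1, knight/bishop 3, rook 5, queen 9.
WEIGHTS = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def evaluate(own_counts, opp_counts):
    # Weighted linear sum w_1*f_1 + ... + w_n*f_n, where each feature f_i
    # is the material difference for one piece type.
    return sum(WEIGHTS[p] * (own_counts.get(p, 0) - opp_counts.get(p, 0))
               for p in WEIGHTS)

# Example: we are a knight up, the opponent has an extra pawn.
print(evaluate({"P": 7, "N": 2, "B": 2, "R": 2, "Q": 1},
               {"P": 8, "N": 1, "B": 2, "R": 2, "Q": 1}))  # 3 - 1 = 2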

When Should We Stop Growing the Tree?
Motivation: return an answer within the allocated time.
Fixed-depth search; better: iterative deepening search (stop when the time is over).
But only stop and evaluate at quiescent positions, i.e., positions that will not cause large fluctuations in the evaluation function over the following moves. For example, if one can capture a piece, the position is not quiescent, because this action might change the evaluation substantially.
An alternative is to continue the search at non-quiescent positions, preferably by allowing only certain types of moves (e.g., captures) to reduce the search effort, until a quiescent position is reached (see the sketch below).
There remains the problem of limited-depth search: the horizon effect (see next slide).
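A minimal sketch of this quiescence idea at the depth cutoff (the helpers evaluate, capture_moves, and result are hypothetical placeholders for a real game interface; restricting the extension to captures is the assumption named above):

def quiescence(state, evaluate, capture_moves, result, is_max):
    # "Stand pat": the static evaluation is always available as a fallback,
    # so we never return a value from the middle of an exchange.
    best = evaluate(state)
    for move in capture_moves(state):
        value = quiescence(result(state, move), evaluate,
                           capture_moves, result, not is_max)
        best = max(best, value) if is_max else min(best, value)
    return best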

Horizon Problem
[Figure: chess position, black to move.]
Black has a slight material advantage... but will eventually lose (a pawn becomes a queen). A fixed-depth search cannot detect this, because it thinks it can avoid the loss: the promotion is pushed to the other side of the horizon while black concentrates on checking with the rook, to which white must react.

Alpha-Beta Pruning
Can we improve this? Yes: we do not need to consider all nodes.

Alpha-Beta Pruning: General
[Figure: a search path alternating Player and Opponent nodes; a value m is already available at a Player node higher up, and a deeper Opponent node offers at most n.]
If m > n, we will never reach node n in the game.

Alpha-Beta Pruning
Minimax algorithm with depth-first search.
α = the value of the best (i.e., highest-value) choice we have found so far at any choice point along the path for max.
β = the value of the best (i.e., lowest-value) choice we have found so far at any choice point along the path for min.

When Can We Prune?
The following applies:
α values of max nodes can never decrease.
β values of min nodes can never increase.
(1) Prune below a min node whose β-bound is less than or equal to the α-bound of its max predecessor.
(2) Prune below a max node whose α-bound is greater than or equal to the β-bound of its min predecessor.
This provides the same results as the complete minimax search to the same depth (because only irrelevant nodes are eliminated).

Alpha-Beta Search Algorithm

function ALPHA-BETA-SEARCH(state) returns an action
  v ← MAX-VALUE(state, −∞, +∞)
  return the action in ACTIONS(state) with value v

function MAX-VALUE(state, α, β) returns a utility value
  if TERMINAL-TEST(state) then return UTILITY(state)
  v ← −∞
  for each a in ACTIONS(state) do
    v ← MAX(v, MIN-VALUE(RESULT(state, a), α, β))
    if v ≥ β then return v
    α ← MAX(α, v)
  return v

function MIN-VALUE(state, α, β) returns a utility value
  if TERMINAL-TEST(state) then return UTILITY(state)
  v ← +∞
  for each a in ACTIONS(state) do
    v ← MIN(v, MAX-VALUE(RESULT(state, a), α, β))
    if v ≤ α then return v
    β ← MIN(β, v)
  return v

Initial call: MAX-VALUE(initial-state, −∞, +∞).

Figure 5.7: The alpha-beta search algorithm. Notice that these routines are the same as the MINIMAX functions in Figure 5.3, except for the two lines in each of MIN-VALUE and MAX-VALUE that maintain α and β (and the bookkeeping to pass these parameters along).
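The same routine as a runnable Python sketch, over the nested-list trees used in the minimax sketch above (a minimal illustration, not the lecture's reference implementation):

import math

def alphabeta(node, is_max, alpha=-math.inf, beta=math.inf):
    if isinstance(node, (int, float)):  # leaf: return its utility
        return node
    if is_max:
        v = -math.inf
        for child in node:
            v = max(v, alphabeta(child, False, alpha, beta))
            if v >= beta:               # beta cutoff: min above will avoid this node
                return v
            alpha = max(alpha, v)
        return v
    else:
        v = math.inf
        for child in node:
            v = min(v, alphabeta(child, True, alpha, beta))
            if v <= alpha:              # alpha cutoff: max above will avoid this node
                return v
            beta = min(beta, v)
        return v

# The tree from the pruning example below: the middle branch is cut off
# as soon as the leaf 2 is seen.
print(alphabeta([[3, 12, 8], [2, 4, 6], [14, 5, 2]], True))  # 3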

Alpha-Beta Pruning Example
[Figure, built up over five slides: a two-ply tree with a max root over three min nodes whose leaves are (3, 12, 8), (2, ?, ?), and (14, 5, 2). The first branch establishes the root bound α = 3. In the second branch, the leaf 2 already bounds the min node's value by β = 2 ≤ α, so its remaining leaves are pruned. In the third branch, the min node's bound drops from 14 to 5 to 2, so it cannot beat 3 either. The root's minimax value is 3.]

Efficiency Gain
The alpha-beta search cuts the largest amount off the tree when we examine the best move first.
In the best case (always the best move first), the search effort is reduced to O(b^(d/2)), so we can search twice as deep in the same amount of time.
In the average case (randomly distributed moves), for moderate b (b < 100), we roughly have O(b^(3d/4)).
However, the best move is typically not known.
Practical case: a simple ordering heuristic brings the performance close to the best case. In chess, we can thus reach a depth of 6-7 moves.
Good ordering for chess? Try captures first, then threats, then forward moves, then backward moves (see the sketch below).
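A minimal sketch of that ordering heuristic (the predicates is_capture, is_threat, and is_forward are hypothetical placeholders for a real move generator):

def order_moves(moves, is_capture, is_threat, is_forward):
    # Lower rank = searched earlier: captures, then threats, then forward
    # moves, then everything else (backward moves last).
    def rank(move):
        if is_capture(move):
            return 0
        if is_threat(move):
            return 1
        if is_forward(move):
            return 2
        return 3
    return sorted(moves, key=rank)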

Games that Include an Element of Chance
[Figure: a backgammon board with points numbered 1-24.]
White has just rolled a 6 and a 5 and has 4 legal moves.

Game Tree for Backgammon
In addition to min and max nodes, we need chance nodes (for the dice).
[Figure: a game tree whose levels alternate MAX, CHANCE, MIN, CHANCE, MAX, ..., down to the terminal states. Each chance node branches over the 21 distinct dice rolls, with probability 1/36 for doubles (1-1, ..., 6-6) and 1/18 for the others (1-2, ..., 6-5).]

Calculation of the Expected Value
Utility function for chance nodes C over max:
d_i: a possible dice roll
P(d_i): the probability of obtaining that roll
S(C, d_i): the positions attainable from C with roll d_i
Utility(s): the evaluation of s

Expectimax(C) = Σ_i P(d_i) · max_{s ∈ S(C, d_i)} Utility(s)

Expectimin is defined likewise (with min in place of max).
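A minimal Python sketch of this expectimax backup for a single chance node over max (the rolls, probabilities, and attainable-position utilities are illustrative placeholders):

def expectimax(rolls):
    # rolls: list of (P(d_i), utilities of the positions attainable with d_i)
    return sum(p * max(utilities) for p, utilities in rolls)

# Toy chance node with two possible rolls: a 1/3 roll allowing positions
# worth 2 or 5, and a 2/3 roll allowing positions worth 1 or 3.
print(expectimax([(1/3, [2, 5]), (2/3, [1, 3])]))  # 1/3*5 + 2/3*3 ≈ 3.67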

Problems
Order-preserving transformations on the evaluation values may change the best move:
[Figure: two versions of a tree in which max chooses between moves a1 and a2, each leading to a chance node with roll probabilities 0.9 and 0.1 over min nodes. With leaf values 2, 3, 1, 4, the expected values are 2.1 for a1 and 1.3 for a2, so a1 is best. After the order-preserving transformation to 20, 30, 1, 400, the expected values become 21 for a1 and 40.9 for a2, so a2 is best.]
Search costs increase: instead of O(b^d), we get O((b·n)^d), where n is the number of possible dice outcomes. In backgammon (n = 21; b is usually around 20, but can be as high as 4000), a reasonable maximum for d is therefore 2.
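The decision flip is easy to reproduce with a few lines of Python (same probabilities and leaf values as in the figure):

# Original leaf values 2, 3 under a1 and 1, 4 under a2, probabilities 0.9/0.1:
a1 = 0.9 * 2 + 0.1 * 3    # 2.1  -> a1 is best
a2 = 0.9 * 1 + 0.1 * 4    # 1.3

# Order-preserving transformation 2->20, 3->30, 1->1, 4->400:
a1_t = 0.9 * 20 + 0.1 * 30   # 21.0
a2_t = 0.9 * 1 + 0.1 * 400   # 40.9 -> now a2 is best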

Card Games
Recently, card games such as bridge and poker have been addressed as well.
One approach: simulate play with open cards and then average over all possible deals (or run a Monte Carlo simulation), using minimax (perhaps modified).
Pick the move with the best expected result (usually all moves will lead to a loss, but some give better results than others).
This is called averaging over clairvoyance. Although incorrect, it appears to give reasonable results.

State of the Art (1)
Backgammon: The BKG program defeated the official world champion in 1980. A newer program, TD-Gammon, is among the top 3 players.
Checkers (draughts by international rules): The program Chinook is the official world champion in man-computer competition (acknowledged by the ACF and EDA) and the highest-rated player:
Chinook: 2712
Ron King: 2632
Asa Long: 2631
Don Lafferty: 2625
In 1995, Chinook won a 32-game match against Don Lafferty.
Othello: Programs are very good, even on normal computers. In 1997, the Logistello program defeated the human world champion.
Chess: In 1997, world chess champion G. Kasparov was beaten by Deep Blue (IBM Thomas J. Watson Research Center) in a match of 6 games. Deep Blue used special hardware (32 processors with 8 chips, 2 million calculations per second) and special chess knowledge.

State of the Art (2)
Go: In March 2016, the program AlphaGo beat Lee Sedol, one of the best human players (according to Elo ranking, the 4th-best player worldwide), 4:1. AlphaGo used Monte Carlo tree search techniques (UCT) and deep learning techniques.
Poker: In January 2017, Libratus played heads-up no-limit Texas hold'em against four top-class human poker players for 20 days. In the end, Libratus was more than 1.7 million dollars ahead. Libratus used a number of different techniques, all based on game theory.

The Reasons for Success...
Alpha-beta search, with dynamic decision-making for uncertain positions.
Good (but usually simple) evaluation functions.
Large databases of opening moves.
Very large endgame databases (for checkers, all ten-piece positions).
For Go, Monte Carlo and machine learning techniques proved to be successful.
... and very fast, parallel processors, huge memory, and a great deal of play.
For poker, game-theoretic analysis together with extensive self-play (15 million core hours of CPU time) was important.

Summary
A game can be defined by the initial state, the operators (legal moves), a termination test, and a utility function (the outcome of the game).
In two-player board games, the minimax algorithm can determine the best move by enumerating the entire game tree.
The alpha-beta algorithm produces the same result but is more efficient, because it prunes away irrelevant branches.
Usually, it is not feasible to construct the complete game tree, so the utility of some states must be determined by an evaluation function.
Games of chance can be handled by an extension of the alpha-beta algorithm.
The success for different games rests on quite different methodologies.