Decision Making in Multiplayer Environments: Application in Backgammon Variants


1 Decision Making in Multiplayer Environments: Application in Backgammon Variants. PhD thesis by Nikolaos Papahristou, AI researcher, Department of Applied Informatics, Thessaloniki, Greece.

2 Contributions
- Expert-level playing agents for the first time in the Tavli games
- A new training method for self-play learning
- Statistics for the Tavli games
- Palamedes: a free program where anyone can play against the trained agents
- 1st place and gold medal in the 2011 and 2015 Computer Olympiad backgammon events

3 AI history in Games
- 1993, TD-Gammon: surpasses the world champion in backgammon
- 1994, Chinook: beat the world champion in checkers
- 1997, Logistello: beat the world champion in Othello
- 1997, Deep Blue: beat the world champion in chess
- 1998, Maven: beat the world champion in Scrabble
- 2007, Chinook: weakly solved checkers
- 2015, Cepheus: effectively solved heads-up limit Texas Hold'em poker
- 2016, AlphaGo: beat a legendary player at Go

4 Tavli Games - Motivation
In Greece there are three popular variants: Portes (similar to standard backgammon), Plakoto, and Fevga.
There is no previous research on these variants.
Can we make AI agents that play these games expertly?

5 Outline 1. Background 2. Learning to Play Tavli Games 3. Statistics and Match Play 4. Palamedes 5. Conclusion

6 Backgammon Games: Portes, Plakoto, Fevga, Tavli Match

7 Modes of Play
- Money games: the goal of the player is to maximize his/her points from a single game. Money games can be viewed as games played in a match of infinite length.
- Backgammon matches: players accumulate points until one player reaches a predefined number of points. Typical match length is 5 or 7 points. The goal of the player is to win the match. Matches can also combine different variants (e.g. the Tavli match); in this thesis we deal with matches of one variant only.

8 Machine Learning Categories
- Supervised learning: examples of inputs and their desired outputs (labels) are given; find a model that learns to map new inputs to outputs.
- Unsupervised learning: no labels are given; find structure in the data.
- Reinforcement learning: no input/label pairs; find the best behavior by interacting with a (dynamic) environment.

9 Reinforcement Learning (RL)
An MDP is a tuple μ = (S, A, P, R, I):
- S = {s_1, s_2, ..., s_T} is the state space
- A is the action space, with A(s) the actions available in state s
- P is the transition model: P(s_{t+1} | s_t, a)
- R is the reward function: R(s_t, a, s_{t+1})
- I is the initial state
Markov property: transitions and rewards are independent of history.
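
As a minimal illustration of this formalism (the two-state game, transition probabilities, and rewards below are hypothetical, not taken from the thesis), an MDP can be written down directly as states, per-state actions, a transition model, and a reward function:

```python
import random

# Toy MDP sketch (hypothetical example): states S, actions A(s),
# transition model P(s' | s, a), reward function R(s, a, s'), initial state I.
states = ["start", "goal"]
actions = {"start": ["safe", "risky"], "goal": []}           # A(s)
P = {                                                        # P(s' | s, a)
    ("start", "safe"):  {"start": 0.5, "goal": 0.5},
    ("start", "risky"): {"start": 0.8, "goal": 0.2},
}
R = {("start", "safe", "goal"): 1.0, ("start", "risky", "goal"): 5.0}  # R(s, a, s')
initial_state = "start"                                      # I

def step(s, a):
    """Sample the next state and reward; by the Markov property this depends
    only on the current state and action, not on the history."""
    probs = P[(s, a)]
    s_next = random.choices(list(probs), weights=list(probs.values()))[0]
    return s_next, R.get((s, a, s_next), 0.0)
```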

10 RL - Value Function Methods
A value function V: S → R maps a state to a real number when following a policy π: V^π(s) = E_π[ R_t | s_t = s ].
A policy π' is better than π if and only if V^{π'}(s) ≥ V^π(s) for every state s.
All optimal policies share the same optimal value function: V*(s) = max_π V^π(s).

11 Temporal Difference Learning: TD(0)
δ_t = r_{t+1} + γ V(s_{t+1}) − V(s_t)
V(s_t) ← V(s_t) + α δ_t
δ_t is called the temporal-difference error; r_{t+1} + γ V(s_{t+1}) is the target of the update; α ∈ [0, 1] is the learning rate.
TD(0) uses bootstrapping, is a stochastic approximation algorithm, and uses one-step backups.
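
A tabular TD(0) backup can be sketched in a few lines; the dictionary-based value table and the default parameter values below are illustrative assumptions, not the thesis implementation:

```python
def td0_update(V, s, r_next, s_next, alpha=0.1, gamma=1.0, terminal=False):
    """One TD(0) backup: delta = r + gamma*V(s') - V(s); V(s) += alpha*delta."""
    target = r_next + (0.0 if terminal else gamma * V.get(s_next, 0.0))
    delta = target - V.get(s, 0.0)            # temporal-difference error
    V[s] = V.get(s, 0.0) + alpha * delta      # stochastic approximation step
    return delta

# usage: V = {}; td0_update(V, s="start", r_next=0.0, s_next="goal")
```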

12 Temporal Difference Learning: TD(λ)
TD(λ) bases the backup on more than one future reward (multi-step backup):
V(s_t) ← V(s_t) + α Σ_{k=0..∞} (γλ)^k δ_{t+k},  with λ ∈ [0, 1]
For λ = 0, TD(λ) is equivalent to TD(0); for λ = 1, TD(λ) resembles Monte Carlo sampling.
λ ∈ (0, 1) offers a way for future states to affect the present. A carefully selected value of λ can speed up learning; in practice λ is determined by trial and error.
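
The forward-view sum above is usually implemented incrementally with eligibility traces (the backward view). The sketch below is a tabular version under that standard equivalence; the thesis applies the same rule to a neural network rather than a lookup table:

```python
def td_lambda_episode(V, episode, alpha=0.1, gamma=1.0, lam=0.7):
    """Tabular TD(lambda), backward view with eligibility traces.
    `episode` is a list of (state, reward, next_state, terminal) tuples."""
    z = {}                                              # eligibility traces
    for s, r, s_next, terminal in episode:
        target = r + (0.0 if terminal else gamma * V.get(s_next, 0.0))
        delta = target - V.get(s, 0.0)                  # TD error for this step
        z[s] = z.get(s, 0.0) + 1.0                      # accumulate the trace of s
        for x, trace in z.items():                      # spread delta to earlier states
            V[x] = V.get(x, 0.0) + alpha * delta * trace
            z[x] = gamma * lam * trace                  # traces decay by gamma*lambda
```

With lam=0 the traces vanish after one step and the routine reduces to TD(0), matching the statement on the slide.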

13 Function Approximation
The previous algorithms work on small state spaces, but real-world applications have large state spaces, so computing and storing the values of all states is impractical.
Solution: generalize from a limited subset of states, i.e. function approximation.
Typical function approximation methods: neural networks, decision trees, tile coding, radial basis functions (RBF).

14 Outline 1. Background 2. Learning to Play Tavli Games 3. Statistics and Match Play 4. Palamedes 5. Conclusion

15 Neural Network Architecture
The backgammon position is encoded at the input layer, feeds a hidden layer of sigmoid units, and produces three sigmoid outputs: W (win game), WD (win double game), LD (lose double game). A linear combination of the outputs produces the position score.
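
A rough sketch of such a network follows. The hidden-layer size, the weight initialization, and the particular score combination (2·W − 1 + WD − LD, i.e. expected points per game if W is the total win probability) are assumptions for illustration; the slide does not spell out the exact combination used in the thesis:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class EvalNet:
    """Sketch of the evaluation network: one hidden sigmoid layer and three
    sigmoid outputs W (win), WD (win double), LD (lose double)."""
    def __init__(self, n_inputs, n_hidden=10, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.1, (n_hidden, n_inputs))
        self.w2 = rng.normal(0.0, 0.1, (3, n_hidden))

    def forward(self, x):
        h = sigmoid(self.w1 @ x)
        return sigmoid(self.w2 @ h)          # [p_W, p_WD, p_LD]

def position_score(outputs):
    """Combine the outputs into expected points per game (assumes W is the
    total win probability, so single wins = W - WD, single losses = 1 - W - LD)."""
    p_w, p_wd, p_ld = outputs
    return 2.0 * p_w - 1.0 + p_wd - p_ld
```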

16 Evaluation
Evaluation procedures: play against Tavli3D (an open-source program), against stored weights of the same NN, and against previously trained agents, for a fixed number of games per benchmark.
Evaluation value: estimated points per game (ppg).

17 Training procedure
1. Generate a sample game by self-play: the neural network is used as the evaluation function; at every time step all legal afterstates are scored and the best one is played.
2. Update the weights: apply the TD(λ) update for every move of the game using the back-propagation procedure of the NN.
3. Repeat the procedure until no more improvement is observed.
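
A hedged sketch of the self-play game generation step is given below. The callables `evaluate` (e.g. the position score of the network sketched earlier), `legal_afterstates`, `is_terminal`, and `outcome_vector` are hypothetical stand-ins for the real game interface:

```python
def self_play_game(evaluate, start_position, legal_afterstates, is_terminal, outcome_vector):
    """Generate one self-play game: at every time step all legal afterstates are
    scored with `evaluate` and the best one is played.  Returns the visited
    positions and the final W/WD/LD outcome used as the terminal training target."""
    trajectory = [start_position]
    pos = start_position
    while not is_terminal(pos):
        pos = max(legal_afterstates(pos), key=evaluate)   # greedy afterstate choice
        trajectory.append(pos)
    return trajectory, outcome_vector(pos)
```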

18 Game sequence creation for learning
- Database of games already available
- Observe (or play against) expert(s)
- Self-play:
  1. Learning online: each update is done immediately after a move is played.
  2. Learning offline: updates are done incrementally after the game ends.
     a) Forward offline: updates start from the first position of the game and end at the terminal position.
     b) Reverse offline: updates start from the terminal position of the game and end at the first.
     c) Reverse offline recalc: as above, but the target value is recalculated after each update.
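
To make the forward/reverse distinction concrete, here is a tabular sketch of offline TD(0)-style updates over one finished game; the thesis applies the same orderings to the neural network via back-propagation, and the reverse-offline-recalc variant additionally re-evaluates each target with the already-updated evaluator:

```python
def offline_updates(V, positions, final_value, alpha=0.1, reverse=True):
    """Offline TD(0)-style updates over one finished self-play game (tabular sketch).
    Forward offline walks first -> terminal; reverse offline walks terminal -> first,
    so fresh terminal information propagates backwards within a single game."""
    V[positions[-1]] = final_value                 # terminal position gets the game result
    order = range(len(positions) - 1)
    for i in (reversed(order) if reverse else order):
        s, s_next = positions[i], positions[i + 1]
        V[s] = V.get(s, 0.0) + alpha * (V.get(s_next, 0.0) - V.get(s, 0.0))
    return V
```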

19 Comparison of sequence creation methods
[Training curves: ppg vs. Pubeval (backgammon), vs. Fevga-1 (Fevga), and vs. Plakoto-1 (Plakoto) for the Online, Forward Offline, Reverse Offline, and Reverse Offline Recalc methods.]
Each line is the average of 10 training runs; λ = 0, α = 0.1; the NN has 10 hidden units.

20 Determining the target of the update
Three alternatives were examined:
a) Update the values without flipping the board.
b) Updates are split in two.
c) Updates are done on the inverted value of the next player.

21 Plakoto features
Plakoto-1 (raw encoding): 4 binary inputs for every point per player; 1 input for the checkers off the board per player; 1 binary input for every point per player for pins.
Plakoto-2 (raw + smart features): the player pin units are replaced with the probability of the opponent pinning the points.
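
A hedged sketch of what the raw per-point encoding could look like; the exact thresholds (1, 2, 3, ≥4 checkers) follow the spirit of TD-Gammon-style truncated unary coding and are an assumption, not the thesis's documented scheme:

```python
def encode_point(n_checkers, pinned_by_opponent):
    """Hypothetical raw encoding of one board point for one player: four binary
    inputs for the checker count (assumed thresholds) plus one binary pin input."""
    units = [
        1.0 if n_checkers >= 1 else 0.0,
        1.0 if n_checkers >= 2 else 0.0,
        1.0 if n_checkers >= 3 else 0.0,
        1.0 if n_checkers >= 4 else 0.0,
    ]
    units.append(1.0 if pinned_by_opponent else 0.0)   # Plakoto-1 pin flag
    return units
```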

22 Fevga features
Fevga-1 (raw encoding): 4 binary inputs for every point per player; 1 input for the checkers off the board per player.
Fevga-2 (raw + smart features): existence of primes (the most powerful strategy in Fevga); pip count; existence of a race situation.
Fevga-3 (raw + smart + intermediate reward): primes are treated as winning positions, so the strategy learned is based on the creation of primes; this strategy is considered one of the most powerful by expert players.
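
The intermediate-reward idea of Fevga-3 can be sketched as swapping in a "won" training target whenever a prime exists; the `has_prime` test and the exact target vector below are assumptions for illustration, since the slide only states that primes are treated as winning positions:

```python
def shaped_outcome(position, outcome_vector, has_prime):
    """Sketch of the Fevga-3 idea: when the player has built a prime, train toward
    a winning target instead of waiting for the real game result (assumed target)."""
    if has_prime(position):
        return [1.0, 0.0, 0.0]        # treated as won: W = 1, WD = 0, LD = 0
    return outcome_vector             # otherwise use the real game outcome
```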

23 Training progress of all agents examined
[Training curves: points per game (ppg) vs. Tavli3D as a function of games trained (millions), for Plakoto-1, Plakoto-2, Plakoto-3 and for Fevga-2, Fevga-3, Fevga-4, Fevga-5.]
Summary of techniques used by the various agents:
Plakoto-1: updating method b, forward offline. Plakoto-2: updating method b, forward offline. Plakoto-3: updating method c, reverse offline recalc.
Fevga-2: updating method b, forward offline, no intermediate reward. Fevga-3: updating method b, forward offline, intermediate reward. Fevga-4: updating method c, reverse offline recalc, no intermediate reward. Fevga-5: updating method c, reverse offline recalc, intermediate reward.

24 Final Training Setup
- NNs as the game evaluation function of states
- Training examples generated by self-play
- Temporal difference learning TD(λ) for weight updates
- Offline updates: updates start from the terminal position and work back to the starting position
- Incremental updating of weights
- Gradual decrease of the α and λ parameters

25 Selected values of the α and λ parameters at successive training stages (games trained).
Portes: λ=0.7 α=1; λ=0.7 α=0.3; λ=0.7 α=0.1; λ=0 α=0.3; λ=0 α=0.1; λ=0 α=0.1
Plakoto: λ=0 α=0.3; λ=0 α=0.3; λ=0 α=0.1; λ=0 α=0.1; λ=0 α=0.1; λ=0 α=0.01
Fevga: λ=0.7 for the first three stages, then λ=0.

26 New features added
Plakoto-5: Race, PipDiff, PipBearoff, PipBearoff, ChFrontOfPin, Esc_Prob.
Fevga-6: probability of making a prime instead of the binary feature; Race, PipDiff, PipBearoff, PipBearoff.

27 Performance of the new bots
New bot vs. opponent, evaluated in points per game (ppg):
- Portes-1 (1-ply) vs. Pubeval (1-ply)
- Plakoto-5 (1-ply) vs. Plakoto-4 (1-ply)
- Plakoto-5 (2-ply) vs. Plakoto-4 (1-ply)
- Fevga-6 (1-ply) vs. Fevga-4 (1-ply)
- Fevga-6 (2-ply) vs. Fevga-4 (1-ply)
Separate numbers of games were played for the 1-ply and the 2-ply benchmarks. In order to speed up the testing time of 2-ply, the depth-2 expansion was performed only for the best 15 candidate moves (forward pruning).
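
A sketch of 2-ply evaluation with this kind of forward pruning; the game-interface callables (`legal_afterstates`, the roll distribution `all_rolls`) are hypothetical, and `evaluate` is assumed to score every position from the root player's point of view:

```python
def two_ply_choose(pos, roll, evaluate, legal_afterstates, all_rolls, prune_to=15):
    """Pick a move with 2-ply lookahead and forward pruning: rank our afterstates
    at 1 ply, expand only the best `prune_to`, then average the opponent's best
    reply (worst for us) over every possible opponent roll."""
    ours = sorted(legal_afterstates(pos, roll), key=evaluate, reverse=True)[:prune_to]

    def opponent_expectation(after):
        # all_rolls: iterable of (roll, probability); opponent minimizes our score
        return sum(p * min(evaluate(reply) for reply in legal_afterstates(after, opp_roll))
                   for opp_roll, p in all_rolls)

    return max(ours, key=opponent_expectation)
```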

28 Outline 1. Background 2. Learning to Play Tavli Games 3. Statistics and Match Play 4. Palamedes 5. Conclusion

29 Motivation - Method
- Investigate the first-player advantage in the Tavli games (Portes, Plakoto, Fevga) using simulation and the Palamedes bot.
- Extract useful statistics (e.g. % of games won as double wins) for each game.
- Construct effective match strategies.
Method: self-play simulations for every roll and every starting move.

30 First player estimated equity of all opening rolls

31 Advantage of the first player
[Table: first-player advantage in Portes, Plakoto, and Fevga for single rolls, double rolls, and all rolls.]
Notable points: the first player has a big advantage in Fevga; double rolls significantly increase the first player's chances (except in Fevga).
Fairness ranking: 1) Backgammon, 2) Portes, 3) Plakoto, 4) Fevga.

32 Expected outcome (%) of the first player, given as single wins (WS) / double wins (WD) / single losses (LS) / double losses (LD):
Portes — all rolls: 38.18 / 14.62 / 34.87 / 12.31; double rolls: 41.84 / 17.57 / 30.68 / 9.73; single rolls: 37.45 / 14.03 / 35.70 / 12.82
Plakoto — all rolls: 29.90 / 22.77 / 29.62 / 17.71; double rolls: 27.85 / 28.91 / 27.30 / 15.95; single rolls: 30.31 / 21.55 / 30.09 / 18.06
Fevga — all rolls: 47.31 / 10.22 / 38.42 / 4.05; double rolls: 47.39 / 12.89 / 36.06 / 3.66; single rolls: 47.30 / 9.69 / 38.89 / 4.12
Table 1. Gammon rates of Tavli variants: Portes 26.85%, Plakoto 40.48%, Fevga 14.27%.

33 Example of a MWC (match winning chance) table for Fevga, indexed by the points each player still needs (A away, B away).
Move selection:
Money strategy: E = WS − LS + 2·(WD − LD)
Match strategy: MWC = WS·mwc(A−1, B) + WD·mwc(A−2, B) + LS·mwc(A, B−1) + LD·mwc(A, B−2)
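
A sketch of move selection under the two strategies, treating each candidate move's estimated outcome distribution (WS, WD, LS, LD) as given; the `mwc` lookup is a hypothetical stand-in for the precomputed MWC tables:

```python
def money_equity(ws, wd, ls, ld):
    """Money-game equity: E = WS - LS + 2*(WD - LD)."""
    return ws - ls + 2.0 * (wd - ld)

def match_equity(ws, wd, ls, ld, a_away, b_away, mwc):
    """Match winning chance after this game, using a precomputed mwc(A, B) table
    (hypothetical lookup; reaching 0-away or below means the match is won)."""
    return (ws * mwc(a_away - 1, b_away) + wd * mwc(a_away - 2, b_away)
            + ls * mwc(a_away, b_away - 1) + ld * mwc(a_away, b_away - 2))

def choose_move(candidates, a_away, b_away, mwc, match_play=True):
    """candidates: list of (move, (WS, WD, LS, LD)) outcome estimates for one position."""
    if match_play:
        key = lambda c: match_equity(*c[1], a_away, b_away, mwc)
    else:
        key = lambda c: money_equity(*c[1])
    return max(candidates, key=key)[0]
```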

34 Match strategy vs. money strategy
Match wins by the match-strategy player: Portes 5144, Plakoto 5103, Fevga 5067.
All results are from the point of view of the match-strategy player, over point matches of fixed length.
WS: single wins; WD: double wins; LS: single losses; LD: double losses.
Diff. moves: the percentage of match-strategy moves that differ from the ones the money strategy would have made in its place.

35 Outline 1. Background 2. Learning to Play Tavli Games 3. Statistics and Match Play 4. Palamedes 5. Conclusion

36 Palamedes
Free software for playing against all the trained agents. Windows and Android versions. Developed in C++ using the Qt framework and the Eigen library.

37 Palamedes Features
- Several variants supported
- Human vs. AI
- Look-ahead search (2-ply)
- Endgame databases supported
- Money-game and match modes
- Player statistics
- Analysis of played moves (Windows only)

38 Palamedes Analytics
Installs; 250 active users / day; 1500 games / day; 18 / session.
User results per game (ppg): Portes, Plakoto, Fevga.

39 Palamedes in the Computer Olympiads
Participated twice in the backgammon Computer Olympiads (2011, 2015) and took 1st place both times.
Game type: standard backgammon.
Opponents: GNUBG (open source), BGBlitz (commercial).

40 Outline 1. Background 2. Learning to Play Tavli Games 3. Statistics and Match Play 4. Palamedes 5. Conclusion

41 Contributions
- Expert-level playing agents for the first time in the Tavli games
- A new training method for self-play learning
- Statistics for the Tavli games
- Palamedes: a free program where anyone can play against the trained agents

42 Future work
- Apply the training algorithm to other games/environments
- Make the training algorithm multi-threaded
- Graying the NN black box (interpretability)
- Extend the MWC tables to Tavli matches
- Change the Fevga rules to reduce the first-player advantage
- Add more endgame databases
- Palamedes: add deeper search (3-ply, 4-ply)
- Palamedes: tutor mode

43 Thank you! Nikos Papahristou
