Decision Making in Multiplayer Environments: Application in Backgammon Variants


Decision Making in Multiplayer Environments: Application in Backgammon Variants
PhD Thesis by Nikolaos Papahristou
AI researcher, Department of Applied Informatics
Thessaloniki, Greece

Contributions
- Expert playing agents for the first time in Tavli games
- New training method for self-play learning
- Statistics for Tavli games
- Palamedes: a free program where anyone can play against the trained agents
- 1st place and gold medal in the 2011 and 2015 Backgammon Computer Olympiad

Nikolaos Papahristou, PhD Thesis: AI in Backgammon Variants

AI History in Games
- 1993, TD-Gammon: surpasses the world champion in Backgammon
- 1994, Chinook: beats the world champion in Checkers
- 1997, Logistello: beats the world champion in Othello
- 1997, Deep Blue: beats the world champion in Chess
- 1998, Maven: beats the world champion in Scrabble
- 2007, Chinook: weakly solves Checkers
- 2015, Cepheus: essentially solves heads-up limit Texas Hold'em Poker
- 2016, AlphaGo: beats a legendary player at Go

Tavli Games - Motivation
- In Greece there are three popular variants:
  - Portes (similar to standard backgammon)
  - Plakoto
  - Fevga
- No previous research exists on these variants.
- Can we make AI agents that play these games expertly?

Outline
1. Background
2. Learning to Play Tavli Games
3. Statistics and Match Play
4. Palamedes
5. Conclusion

Backgammon Games
- Portes
- Plakoto
- Fevga
- Tavli Match

Modes of Play
Money games
- The goal of the player is to maximize his/her points in a single game.
- Money games can be viewed as games played in a match of infinite length.
Backgammon matches
- Players accumulate points until one player reaches a predefined number of points.
- Typical match length is 5 or 7 points; the goal of the player is to win the match.
- Matches can also comprise different variants (e.g. a Tavli match); this thesis deals with matches of one variant only.

Machine Learning Categories
Supervised learning
- Examples of inputs and their desired outputs (labels) are given; find a model that learns to map new inputs to outputs.
Unsupervised learning
- No labels are given; find structure in the data.
Reinforcement learning
- No inputs/labels; find the best behavior by interacting with a (dynamic) environment.

Reinforcement Learning (RL)
MDP μ = (S, A, P, R, I)
- S = {s_1, s_2, ..., s_T} is the state space
- A is the action space: A(s)
- P is the transition model: P(s_{t+1} | s_t, a)
- R is the reward function: R(s_t, a, s_{t+1})
- I is the initial state
Markov property: transitions and rewards are independent of history.

RL - Value Function Methods
A value function V: S -> R maps a state to a real number when following a policy π:
    V^π(s) = E_π[R_t | s_t = s]
π' is better than π if and only if V^π'(s) >= V^π(s) for every state.
All optimal policies share the same optimal value function:
    V*(s) = max_π V^π(s)

Temporal Difference Learning: TD(0)
    δ_t = r_{t+1} + γ V(s_{t+1}) - V(s_t)
    V(s_t) <- V(s_t) + α δ_t
- δ_t is called the temporal difference error
- r_{t+1} + γ V(s_{t+1}) is the target of the update
- α ∈ [0,1] is the learning rate
TD(0):
- uses bootstrapping
- is a stochastic approximation algorithm
- uses one-step backups
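The two update equations above can be sketched as a minimal tabular TD(0) learner. This is an illustration only, not the thesis code; the 5-state random-walk environment is a stock textbook example, not from the thesis:

```python
import random

def td0_episode(V, start, step, gamma=1.0, alpha=0.1):
    """Run one episode, applying the TD(0) update after every transition.

    V    : dict mapping non-terminal state -> estimated value
    step : function state -> (next_state, reward, done)
    """
    s, done = start, False
    while not done:
        s_next, r, done = step(s)
        target = r + gamma * V.get(s_next, 0.0)  # r_{t+1} + gamma*V(s_{t+1})
        delta = target - V[s]                    # temporal difference error
        V[s] += alpha * delta                    # V(s_t) <- V(s_t) + alpha*delta
        s = s_next
    return V

# Toy 5-state random walk: states 1..5, terminals 0 and 6, reward 1 only at 6.
def walk(s):
    s_next = s + random.choice([-1, 1])
    return s_next, (1.0 if s_next == 6 else 0.0), s_next in (0, 6)

random.seed(0)
V = {s: 0.5 for s in range(1, 6)}
for _ in range(5000):
    td0_episode(V, start=3, step=walk)
# True values are 1/6 .. 5/6; the estimates should end up close to them.
```

With a constant α the estimates keep fluctuating around the true values; decreasing α over time, as the thesis training schedule does, tightens them.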

Temporal Difference Learning: TD(λ)
- Bases the backup on more than one future reward (multi-step backup):
    V(s_t) <- V(s_t) + α Σ_{k=0}^∞ (γλ)^k δ_{t+k},   λ ∈ [0,1]
- For λ = 0, TD(λ) is equivalent to TD(0).
- For λ = 1, TD(λ) resembles Monte Carlo sampling.
- λ ∈ (0,1) offers a way for future states to affect the present; a carefully selected value of λ can speed up learning.
- In practice λ is determined by trial and error.
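The sum over future TD errors is usually implemented incrementally with eligibility traces. The sketch below uses the standard accumulating-traces formulation on the same toy random walk; it is an illustration, not the thesis implementation:

```python
import random

def td_lambda_episode(V, start, step, gamma=1.0, alpha=0.1, lam=0.7):
    """One episode of tabular TD(lambda) with accumulating eligibility traces.

    The traces realize the sum over k of (gamma*lambda)^k * delta_{t+k}
    incrementally: every previously visited state receives a share of each
    new TD error, discounted by how long ago it was visited.
    """
    e = {s: 0.0 for s in V}              # eligibility traces
    s, done = start, False
    while not done:
        s_next, r, done = step(s)
        delta = r + gamma * V.get(s_next, 0.0) - V[s]
        e[s] += 1.0                      # mark the current state as eligible
        for x in V:
            V[x] += alpha * delta * e[x]
            e[x] *= gamma * lam          # (gamma*lambda)^k decay
        s = s_next
    return V

# Toy 5-state random walk: states 1..5, terminals 0 and 6, reward 1 only at 6.
def walk(s):
    s_next = s + random.choice([-1, 1])
    return s_next, (1.0 if s_next == 6 else 0.0), s_next in (0, 6)

random.seed(1)
V = {s: 0.5 for s in range(1, 6)}
for _ in range(2000):
    td_lambda_episode(V, start=3, step=walk)
```

Setting `lam=0.0` reduces this to the TD(0) update, which is one way to see the λ=0 equivalence stated above.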

Function Approximation
- The previous algorithms work on small state spaces.
- Real-world applications have large state spaces: computing and storing the values of all states is impractical.
- Solution: generalize from a limited subset of states = function approximation.
Typical function approximation methods:
- Neural networks
- Decision trees
- Tile coding
- Radial Basis Functions (RBF)

Outline
1. Background
2. Learning to Play Tavli Games
3. Statistics and Match Play
4. Palamedes
5. Conclusion

Neural Network Architecture
- Input layer: the backgammon position encoded as features.
- Hidden layer: sigmoid units.
- Output layer: three sigmoid outputs:
  - W: Win game
  - WD: Win Double game
  - LD: Lose Double game
- A linear combination of the outputs produces the position score.
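A minimal forward-pass sketch of this architecture (toy layer sizes and random weights for illustration). Note that the way the three outputs are combined into a points score below is an assumption of this sketch, since the slide only states that a linear combination is used:

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(position, W_h, b_h, W_o, b_o):
    """Position encoding -> sigmoid hidden layer -> 3 sigmoid outputs -> score."""
    hidden = [sigmoid(sum(w * x for w, x in zip(row, position)) + b)
              for row, b in zip(W_h, b_h)]
    w, wd, ld = [sigmoid(sum(v * h for v, h in zip(row, hidden)) + b)
                 for row, b in zip(W_o, b_o)]
    # Assumed combination (illustration only): with WS = W - WD and
    # LS = (1 - W) - LD, expected points are
    # E = WS + 2*WD - LS - 2*LD = 2*W - 1 + WD - LD.
    score = 2.0 * w - 1.0 + wd - ld
    return (w, wd, ld), score

random.seed(0)
n_in, n_hid = 8, 4   # toy sizes; the real input encoding is far larger
W_h = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hid)]
b_h = [0.0] * n_hid
W_o = [[random.uniform(-1, 1) for _ in range(n_hid)] for _ in range(3)]
b_o = [0.0] * 3
outs, score = forward([random.random() for _ in range(n_in)], W_h, b_h, W_o, b_o)
```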

Evaluation
Evaluation procedures:
- against Tavli3D (an open source program)
- against stored weights of the same NN
- against a previously trained agent
10,000 games per benchmark.
Evaluation value: estimated points per game (ppg).

Training Procedure
1. Generate a sample game by self-play:
   - The neural network is used as the evaluation function.
   - At every time step all legal afterstates are scored and the best one is played.
2. Update weights:
   - Apply the TD(λ) update for every move of the game using the back-propagation procedure of the NN.
3. Repeat until no more improvement is observed.

Game Sequence Creation for Learning
Possible sources of game sequences:
- A database of games already available
- Observing (or playing against) expert(s)
- Self-play
Update schemes:
1. Learning online: each update is done immediately after a move is played.
2. Learning offline: updates are done incrementally after the game ends.
   a) Forward offline: updates start from the first position of the game and end at the terminal position.
   b) Reverse offline: updates start from the terminal position of the game and end at the first.
   c) Reverse offline recalc: as above, but the target value is recalculated after each update.
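The offline schemes can be contrasted with a tabular TD(λ=0)-style update over one finished game. This is a minimal sketch, not the thesis implementation; `reward` stands in for the final game outcome:

```python
def offline_updates(V, states, reward, alpha=0.1, gamma=1.0,
                    direction="reverse", recalc=False):
    """TD(lambda=0)-style offline updates over one finished game.

    states : positions of the game, first to last non-terminal position
    reward : final outcome of the game (target of the last update)
    direction = "forward" or "reverse"; recalc=True re-reads the target
    V(s_{t+1}) after earlier updates have already changed it.
    """
    frozen = {s: V[s] for s in states}   # targets as they stood at game end
    order = range(len(states))
    if direction == "reverse":
        order = reversed(range(len(states)))
    for t in order:
        if t == len(states) - 1:
            target = reward
        elif recalc:
            target = gamma * V[states[t + 1]]        # freshly updated value
        else:
            target = gamma * frozen[states[t + 1]]   # pre-update value
        V[states[t]] += alpha * (target - V[states[t]])
    return V

V = offline_updates({s: 0.0 for s in "abcd"}, list("abcd"), reward=1.0,
                    direction="reverse", recalc=True)
# With recalc, the final reward propagates all the way back within a single
# game; plain reverse offline changes only the last state after one game.
```

This makes the appeal of "reverse offline recalc" visible: credit from the final outcome reaches early positions after one game instead of one game per step.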

Comparison of Sequence Creation Methods
[Three plots over 0-100,000 training games: ppg vs Pubeval (Backgammon), ppg vs Plakoto-1 (Plakoto), and ppg vs Fevga-1 (Fevga), comparing Online, Forward Offline, Reverse Offline, and Reverse Offline Recalc.]
- Each line is the average of 10 training runs
- λ = 0, α = 0.1
- The NN has 10 hidden units

Determining the Target of the Update
a: Update the values without flipping the board.
b: Updates are split in two.
c: Updates are done on the inverted value of the next player.

Plakoto Features
Plakoto-1 (raw encoding):
- 4 binary inputs for every point, per player
- 1 input for the checkers off the board, per player
- 1 binary input for every point, per player, for pins
Plakoto-2 (raw + smart features):
- Replaced the player pin units with the probability of the opponent pinning the points

Fevga Features
Fevga-1 (raw encoding):
- 4 binary inputs for every point, per player
- 1 input for the checkers off the board, per player
Fevga-2 (raw + smart features):
- Existence of primes (the most powerful strategy in Fevga)
- Pip count, existence of a race situation
Fevga-3 (raw + smart + intermediate reward):
- Primes are treated as winning positions.
- The strategy learned is based on the creation of primes; this strategy is considered one of the most powerful by expert players.

Training Progress of All Agents Examined
[Two plots of points per game (ppg) vs Tavli3D over games trained (0-1.5 million): one for Plakoto-1, Plakoto-2, Plakoto-3 and one for Fevga-2, Fevga-3, Fevga-4, Fevga-5.]
Summary of techniques used by the various agents:
Plakoto agent | Updating method | Sequence creation and update direction
Plakoto-1     | b               | Forward offline
Plakoto-2     | b               | Forward offline
Plakoto-3     | c               | Reverse offline recalc
Fevga agent | Updating method | Sequence creation and update direction | Intermediate reward
Fevga-2     | b               | Forward offline                        | No
Fevga-3     | b               | Forward offline                        | Yes
Fevga-4     | c               | Reverse offline recalc                 | No
Fevga-5     | c               | Reverse offline recalc                 | Yes

Final Training Setup
- NNs as the game evaluation function of states
- Training examples generated by self-play
- Temporal Difference Learning TD(λ) for the weight updates
- Offline updates: updates start from the terminal position and work back to the starting position
- Incremental updating of the weights
- Gradual decrease of the α and λ parameters

Selected values of the α and λ parameters:
Games trained       | Portes       | Plakoto     | Fevga
0-10,000            | λ=0.7, α=1   | λ=0, α=0.3  | λ=0.7, α=1
10,000-100,000      | λ=0.7, α=0.3 | λ=0, α=0.3  | λ=0.7, α=0.3
100,000-250,000     | λ=0.7, α=0.1 | λ=0, α=0.1  | λ=0.7, α=0.1
250,000-500,000     | λ=0, α=0.3   | λ=0, α=0.1  | λ=0, α=0.3
500,000-1,500,000   | λ=0, α=0.1   | λ=0, α=0.1  | λ=0, α=0.1
1,500,000-5,000,000 | λ=0, α=0.1   | λ=0, α=0.01 | λ=0, α=0.01
5,000,000-          | λ=0, α=0.01  | -           | -

New Features Added
Plakoto-5:
- Race, PipDiff, PipBearoff, PipBearoff, ChFrontOfPin, Esc_Prob
Fevga-6:
- Probability of making a prime instead of a binary feature
- Race, PipDiff, PipBearoff, PipBearoff

Performance of the New Bots
New bot           | Opponent          | ppg
Portes-1 (1-ply)  | Pubeval (1-ply)   | 0.603
Plakoto-5 (1-ply) | Plakoto-4 (1-ply) | 0.356
Plakoto-5 (2-ply) | Plakoto-4 (1-ply) | 0.422
Fevga-6 (1-ply)   | Fevga-4 (1-ply)   | 0.215
Fevga-6 (2-ply)   | Fevga-4 (1-ply)   | 0.323
The number of games played is 100,000 for 1-ply and 10,000 for 2-ply. To speed up 2-ply testing, the depth-2 expansion was performed only for the best 15 candidate moves (forward pruning).
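The forward pruning described here can be sketched generically: rank every candidate afterstate with the cheap 1-ply evaluation, then apply the expensive depth-2 evaluation only to the best 15. The evaluation functions below are toy placeholders, not the thesis evaluators:

```python
def two_ply_choice(afterstates, eval_1ply, eval_2ply, n_best=15):
    """Forward-pruned 2-ply selection: rank every legal afterstate with the
    cheap 1-ply evaluation, then apply the expensive depth-2 evaluation
    only to the n_best highest-ranked candidates."""
    ranked = sorted(afterstates, key=eval_1ply, reverse=True)
    return max(ranked[:n_best], key=eval_2ply)

# Toy demo: the 1-ply ranking is only roughly right; the 2-ply evaluation
# corrects the choice among the 15 surviving candidates.
states = list(range(30))
choice = two_ply_choice(states, lambda s: s, lambda s: -abs(s - 27))
# choice == 27: state 27 survives the 1-ply cut and wins at 2-ply.
```

The cost of pruning is also visible in this sketch: if the 2-ply-best move ranks below 15th at 1-ply, it is never examined at all.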

Outline
1. Background
2. Learning to Play Tavli Games
3. Statistics and Match Play
4. Palamedes
5. Conclusion

Motivation - Method
- Investigate the first-player advantage in the Tavli games (Portes, Plakoto, Fevga) using simulation and the Palamedes bot.
- Extract useful statistics (e.g. % of games won as double wins) for each game.
- Construct effective match strategies.
Method: self-play simulations for every roll and every starting move.

First player estimated equity of all opening rolls
[Chart not reproduced in the transcription.]

Advantage of the First Player
Estimated first-player equity (Portes / Plakoto / Fevga):
- Single rolls: 0.042 / 0.072 / 0.195
- Double rolls: 0.267 / 0.265 / 0.298
- All rolls:    0.079 / 0.104 / 0.213
Notable points:
- The first player has a big advantage in Fevga.
- Double rolls significantly increase the first player's chances (except in Fevga).
- Fairness ranking: 1) Backgammon, 2) Portes, 3) Plakoto, 4) Fevga

Expected Outcome (%) of the First Player
Variant / Rolls       | Single Wins (WS) | Double Wins (WD) | Single Losses (LS) | Double Losses (LD)
Fevga, All Rolls      | 47.31 | 10.22 | 38.42 | 4.05
Fevga, Double Rolls   | 47.39 | 12.89 | 36.06 | 3.66
Fevga, Single Rolls   | 47.30 | 9.69  | 38.89 | 4.12
Plakoto, All Rolls    | 29.90 | 22.77 | 29.62 | 17.71
Plakoto, Double Rolls | 27.85 | 28.91 | 27.30 | 15.95
Plakoto, Single Rolls | 30.31 | 21.55 | 30.09 | 18.06
Portes, All Rolls     | 38.18 | 14.62 | 34.87 | 12.31
Portes, Double Rolls  | 41.84 | 17.57 | 30.68 | 9.73
Portes, Single Rolls  | 37.45 | 14.03 | 35.70 | 12.82
Table 1. Gammon rates of Tavli variants:
Variant | Gammon rate
Portes  | 26.85%
Plakoto | 40.48%
Fevga   | 14.27%

Example of a MWC Table (Fevga)
MATCH WINNING CHANCE (MWC), with player A's away score as rows and player B's away score as columns:
A\B |   1     2     3     4     5     6     7     8     9
 1  | 50.00 68.28 81.68 89.04 93.53 96.16 97.73 98.65 99.20
 2  | 31.73 50.00 65.85 76.78 84.56 89.83 93.37 95.72 97.25
 3  | 18.32 34.15 50.00 62.91 73.20 80.98 86.72 90.84 93.75
 4  | 10.96 23.22 37.09 50.00 61.39 70.85 78.41 84.26 88.69
 5  |  6.47 15.44 26.80 38.61 50.00 60.25 69.07 76.36 82.23
 6  |  3.84 10.17 19.02 29.15 39.75 50.00 59.41 67.68 74.71
 7  |  2.27  6.63 13.28 21.59 30.93 40.59 50.00 58.74 66.56
 8  |  1.35  4.28  9.16 15.74 23.64 32.32 41.26 50.00 58.20
 9  |  0.80  2.75  6.25 11.31 17.77 25.29 33.44 41.80 50.00
Move selection:
- Money strategy:  E = WS - LS + 2 * (WD - LD)
- Match strategy:  MWC = WS * mwc(A-1, B) + WD * mwc(A-2, B) + LS * mwc(A, B-1) + LD * mwc(A, B-2)
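The two selection criteria can be implemented directly on top of a MWC table. The sketch below uses a few entries of the Fevga table on this slide; the outcome probabilities of the example move are illustrative values, not from the thesis:

```python
def mwc(table, a, b):
    """Match winning chance for player A when A needs `a` more points and
    B needs `b`. A non-positive away score means that player has already won."""
    if a <= 0:
        return 1.0
    if b <= 0:
        return 0.0
    return table[(a, b)]

def match_equity(table, a, b, ws, wd, ls, ld):
    """Match strategy: MWC = WS*mwc(A-1,B) + WD*mwc(A-2,B)
                             + LS*mwc(A,B-1) + LD*mwc(A,B-2)."""
    return (ws * mwc(table, a - 1, b) + wd * mwc(table, a - 2, b)
            + ls * mwc(table, a, b - 1) + ld * mwc(table, a, b - 2))

def money_equity(ws, wd, ls, ld):
    """Money strategy: E = WS - LS + 2*(WD - LD)."""
    return ws - ls + 2.0 * (wd - ld)

# A few entries of the Fevga MWC table (both players 1 or 2 points away):
table = {(1, 1): 0.5000, (1, 2): 0.6828, (2, 1): 0.3173, (2, 2): 0.5000}
# Illustrative outcome probabilities for one candidate move at 2-away/2-away:
e_match = match_equity(table, 2, 2, ws=0.38, wd=0.14, ls=0.36, ld=0.12)
e_money = money_equity(0.38, 0.14, 0.36, 0.12)
```

At 2-away/2-away, a double win ends the match outright (mwc(0, 2) = 1), which is why the match strategy can prefer gammonish moves that the money strategy would rank lower.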

Match Strategy vs Money Strategy
Variant | Match wins | Diff. moves | Games WS | Games WD | Games LS | Games LD | Total game points
Portes  | 5144 ± 98  | 7.1%        | 22937    | 7094     | 19558    | 9066     | -565
Plakoto | 5103 ± 98  | 4.6%        | 15994    | 10627    | 15238    | 11007    | -4
Fevga   | 5067 ± 98  | 5.3%        | 28395    | 4453     | 27358    | 5401     | -635
Notes:
- 10,000 5-point matches; all results are from the point of view of the match-strategy player.
- WS: single wins, WD: double wins, LS: single losses, LD: double losses.
- Diff. moves: % of match-strategy moves that differ from the ones the money strategy would have made in its place.

Outline
1. Background
2. Learning to Play Tavli Games
3. Statistics and Match Play
4. Palamedes
5. Conclusion

Palamedes
- Free software to play against all agents
- Windows and Android versions
- Developed in C++ using the Qt Framework and the Eigen library

Palamedes Features
- Several variants supported
- Human vs AI
- Look-ahead search (2-ply)
- Endgame databases supported
- Money-game and match modes
- Player statistics
- Analysis of played moves (Windows only)

Palamedes Analytics
- 15,000 installs
- 250 active users / day
- 1,500 games / day
- 18 / session
User results per game:
- Portes:  -0.425 ppg
- Plakoto: -0.655 ppg
- Fevga:   -0.505 ppg

Palamedes in Computer Olympiads
- Participated twice in the backgammon Computer Olympiad (2011, 2015)
- 1st place both times
- Game type: standard backgammon
- Opponents: GNUBG (open source), BGBlitz (commercial)

Outline
1. Background
2. Learning to Play Tavli Games
3. Statistics and Match Play
4. Palamedes
5. Conclusion

Contributions
- Expert playing agents for the first time in Tavli games
- New training method for self-play learning
- Statistics for Tavli games
- Palamedes: a free program where anyone can play against the trained agents

Future Work
- Apply the training algorithm to other games/environments
- Make the training algorithm multi-threaded
- Graying the NN black box
- Extend MWC tables for Tavli matches
- Change the Fevga rules to reduce the first-player advantage
- Add more endgame databases
- Palamedes: add deeper search (3-ply, 4-ply)
- Palamedes: tutor mode

Thank you!
Nikos Papahristou
nikpapa@gmail.com