Algorithms for solving sequential (zero-sum) games. Main case in these slides: chess. Slide pack by Tuomas Sandholm

Rich history of cumulative ideas

Game-theoretic perspective
- Game of perfect information
- Finite game: finite action sets, finite length
- Chess has a solution: win/tie/lose (Nash equilibrium)
- Subgame perfect Nash equilibrium (via backward induction)
- REALITY: computational complexity bounds rationality

Chess game tree

Opening books (available on CD) Example opening where the book goes 16 moves (32 plies) deep

Minimax algorithm (not all branches are shown)

Deeper example of minimax search (the line A-B-J-K-L is equally good)
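The backed-up-value idea on these slides can be sketched in a few lines. This is a minimal illustration over an explicit game tree (a list-of-lists encoding assumed here for demonstration; the slides' chess trees are generated move by move instead):

```python
# Minimax sketch: a node is either a number (static evaluation of a leaf,
# from the maximizing player's viewpoint) or a list of child nodes.

def minimax(node, maximizing):
    if isinstance(node, (int, float)):   # leaf: return its static evaluation
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Two-ply example: MAX to move at the root, each move leads to a MIN node.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(minimax(tree, True))   # MAX picks the branch whose minimum is largest: 3
```

The branch minima are 3, 2, and 2, so MAX chooses the first move with backed-up value 3.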

Search depth pathology
- Beal (1980) and Nau (1982, 1983) analyzed whether values backed up by minimax search are more trustworthy than the heuristic values themselves. Their analyses of the model showed that backed-up values are somewhat less trustworthy.
- The anomaly goes away if sibling nodes' values are highly correlated [Beal 1982, Bratko & Gams 1982, Nau 1982]
- Pearl (1984) partly disagreed with this conclusion: while strong dependencies between sibling nodes can eliminate the pathology, practical games like chess don't possess dependencies of sufficient strength. He pointed out that few chess positions are so strong that they cannot be spoiled abruptly if one really tries hard to do so. He concluded that the success of minimax rests on the fact that common games do not have a uniform structure but are riddled with early terminal positions, colloquially named blunders, pitfalls, or traps. Close ancestors of such traps carry more reliable evaluations than the rest of the nodes, and as more of these ancestors are exposed by the search, the decisions become more valid.
- Still not fully understood. For newer results, see, e.g., Sadikov, Bratko & Kononenko (2003), "Search versus Knowledge: An Empirical Study of Minimax on KRK", in: van den Herik, Iida & Heinz (eds.), Advances in Computer Games: Many Games, Many Challenges, Kluwer Academic Publishers, pp. 33-44

α-β pruning

α-β search on ongoing example

α-β search

Complexity of α-β search
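A pruning sketch on the same explicit-tree encoding used above (leaves are numbers, internal nodes are lists of children; this layout is an illustrative assumption, not the slides' chess representation). With good move ordering, α-β examines roughly b^(d/2) nodes instead of minimax's b^d:

```python
# Alpha-beta pruning sketch: alpha is the best value MAX can guarantee so
# far, beta the best MIN can guarantee; a branch is cut when alpha >= beta.

def alphabeta(node, alpha, beta, maximizing):
    if isinstance(node, (int, float)):   # leaf: static evaluation
        return node
    if maximizing:
        value = float('-inf')
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:            # beta cutoff: MIN will avoid this line
                break
        return value
    else:
        value = float('inf')
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:            # alpha cutoff: MAX will avoid this line
                break
        return value

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, float('-inf'), float('inf'), True))   # same answer as minimax: 3
```

On this tree the second branch is cut after its first leaf (2 is already worse for MAX than the guaranteed 3), which is exactly the saving pruning provides.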

Evaluation function
Difference (between player and opponent) of:
- Material
- Mobility
- King position
- Bishop pair
- Rook pair
- Open rook files
- Control of center (piecewise)
- Others
Values of knight's position in Deep Blue
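The feature-difference structure above can be sketched as a weighted linear sum. The feature names and weights below are illustrative assumptions, not Deep Blue's actual features or values:

```python
# Toy linear evaluation: weighted sum of feature differences
# (player minus opponent). Weights here are made up for illustration.

WEIGHTS = {'material': 1.0, 'mobility': 0.1, 'king_safety': 0.5, 'center': 0.2}

def evaluate(player_features, opponent_features):
    return sum(w * (player_features[f] - opponent_features[f])
               for f, w in WEIGHTS.items())

white = {'material': 39, 'mobility': 30, 'king_safety': 2, 'center': 4}
black = {'material': 38, 'mobility': 25, 'king_safety': 3, 'center': 3}
print(evaluate(white, black))   # positive score: White stands slightly better
```

Weight tuning, as described on the next slide, amounts to perturbing one entry of WEIGHTS and checking whether search results agree better with reference play.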

Evaluation function...
- Deep Blue used ~6,000 different features in its evaluation function (in hardware)
- A different weighting of these features was downloaded to the chips after every real-world move (based on the current situation on the board); this contributed to strong positional play
Acquiring the weights for Deep Blue
- Weight learning based on a database of 900 grandmaster games (~120 features): alter the weight of one feature => run a 5-6 ply search => if the result matches grandmaster play better, alter that parameter further in the same direction
- Least-squares with no search
- Other learning is possible, e.g., Tesauro's backgammon: solves the credit assignment problem, but was confined to a linear combination of features
- Manually: Grandmaster Joel Benjamin played take-back chess. At possible errors, the evaluation was broken down, visualized, and the weighting possibly changed
Deep Blue is brute force + smart search and knowledge-engineered evaluation

Horizon problem

Ways to tame the horizon effect
- Quiescence search
  - The (domain-specific) evaluation function returns another number in addition to the evaluation: stability (threats, other)
  - Continue search (beyond the normal horizon) if the position is unstable
  - Introduces variance in search time
- Singular extension
  - Domain independent
  - A node is searched deeper if its value is much better than its siblings'
  - Even 30-40 ply
  - A variant is used by Deep Blue
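Quiescence search is the standard cure sketched above: at the nominal horizon, keep expanding only "noisy" moves (captures, checks) until the position is quiet. Below is a common negamax-style formulation with a stand-pat bound; the tiny Pos class is a hypothetical stand-in for a real position API:

```python
# Quiescence search sketch. Pos is a toy position: a static score (from the
# side to move's viewpoint) plus the positions reachable by captures.

class Pos:
    def __init__(self, score, captures=()):
        self.score, self.captures = score, captures
    def evaluate(self):
        return self.score
    def noisy_moves(self):
        return list(self.captures)       # only captures/checks, not all moves
    def apply(self, move):
        return move                      # a "move" here is just the resulting Pos

def quiesce(pos, alpha, beta):
    stand_pat = pos.evaluate()           # score if we simply stop searching here
    if stand_pat >= beta:
        return beta
    alpha = max(alpha, stand_pat)
    for move in pos.noisy_moves():       # negamax: flip sign and window
        score = -quiesce(pos.apply(move), -beta, -alpha)
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha

# Statically the root looks like +1, but a capture beyond the horizon is
# worth +5 (the child's score is from the opponent's viewpoint):
root = Pos(1, captures=[Pos(-5)])
print(quiesce(root, float('-inf'), float('inf')))   # finds the capture: 5
```

The stand-pat bound also means a losing capture cannot drag the score down, since the side to move may decline it.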

Transpositions

Transpositions are important

Transposition table
Store millions of positions in a hash table to avoid searching them again. Stored per position:
- Hash code
- Score
- Exact / upper bound / lower bound
- Depth of searched tree rooted at the position
- Best move to make at the position
Algorithm: when a position P is arrived at, the hash table is probed. If there is a match, and new_depth(P) ≤ stored_depth(P), and the score in the table is exact, or the bound on the score is sufficient to cause the move leading to P to be inferior to some other choice, then P is assigned the attributes from the table. Otherwise, the computer scores P (by direct evaluation or search, with the old best move searched first) and stores the new attributes in the table.
When the table fills up => replacement strategies:
- Keep positions with greater searched tree depth under them
- Keep positions with more searched nodes under them
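The probe/store logic above can be sketched directly. Real engines key the table with Zobrist hashes; a plain dict keyed by a position string stands in here as an illustrative simplification:

```python
# Transposition-table sketch mirroring the slide's entry fields:
# score, bound flag, depth of the searched subtree, and best move.

EXACT, LOWER, UPPER = 0, 1, 2
table = {}

def probe(key, depth, alpha, beta):
    """Return a usable stored score, or None if the entry doesn't apply."""
    entry = table.get(key)
    if entry is None or entry['depth'] < depth:
        return None                          # stored search was too shallow
    if entry['flag'] == EXACT:
        return entry['score']
    if entry['flag'] == LOWER and entry['score'] >= beta:
        return entry['score']                # lower bound suffices for a cutoff
    if entry['flag'] == UPPER and entry['score'] <= alpha:
        return entry['score']                # upper bound proves inferiority
    return None

def store(key, depth, score, flag, best_move):
    old = table.get(key)
    if old is None or depth >= old['depth']: # replacement: keep the deeper search
        table[key] = {'depth': depth, 'score': score,
                      'flag': flag, 'move': best_move}

store('pos1', 6, 42, EXACT, 'Nf3')
print(probe('pos1', 4, -100, 100))   # depth-6 entry answers a depth-4 query: 42
print(probe('pos1', 8, -100, 100))   # too shallow for a depth-8 query: None
```

Even on a table miss the stored best move is still valuable: searching it first improves α-β move ordering, as the slide notes.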

Search tree illustrating the use of a transposition table

Endgame databases

Generating databases for solvable subgames
- State space = {WTM, BTM} x {all possible configurations of remaining pieces}
- BTM table, WTM table; legal moves connect states between them
- Start at terminal positions: mate, stalemate, immediate capture without compensation (= reduction). Mark White's wins as won-in-0
- Mark unclassified WTM positions that allow a move to a won-in-0 position as won-in-1 (store the associated move)
- Mark unclassified BTM positions as won-in-2 if all moves are forced to won-in-1 positions
- Repeat this until no more labellings occur
- Do the same for Black
- Remaining positions are draws
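The labelling loop above is retrograde analysis. The sketch below runs it on an abstract game graph rather than real chess (the graph encoding and tiny example are illustrative assumptions): states are (side_to_move, id) pairs, WTM states are won if some move reaches a won state, BTM states are won for White only if every move does:

```python
# Retrograde-analysis sketch: seed terminal White wins as won-in-0,
# then propagate won-in-n labels backwards until a fixed point.

def retrograde(states, moves, white_won_terminal):
    label = {s: 0 for s in white_won_terminal}       # won-in-0 seeds
    changed = True
    while changed:
        changed = False
        for s in states:
            if s in label:
                continue
            side, _ = s
            succ = moves.get(s, [])
            if side == 'W':
                # WTM: won if some move reaches an already-won position
                won = [label[t] for t in succ if t in label]
                if won:
                    label[s] = min(won) + 1
                    changed = True
            else:
                # BTM: won for White only if all moves reach won positions
                if succ and all(t in label for t in succ):
                    label[s] = max(label[t] for t in succ) + 1
                    changed = True
    return label   # states left unlabelled by this pass are draws (or losses)

# Tiny graph: White can mate from W1; from B1, Black is forced into W1.
T  = ('B', 'mate')           # terminal: Black to move and mated -> won-in-0
W1 = ('W', '1')
B1 = ('B', '1')
moves = {W1: [T], B1: [W1]}
labels = retrograde([W1, B1, T], moves, [T])
print(labels[W1], labels[B1])   # won-in-1, won-in-2
```

Storing, for each labelled WTM state, the move that achieves min(won) yields the distance-to-win strategy the slide mentions.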

Compact representation methods to help endgame database representation & generation

Endgame databases

Endgame databases

How endgame databases changed chess
- All 5-piece endgames solved (can have > 10^8 states) & many 6-piece ones
- KRBKNN (~10^11 states): longest path-to-reduction is 223 moves
- Rule changes: max number of moves from capture/pawn move to completion
- Chess knowledge
  - Splitting rook from king in KRKQ
  - The KRKN game was thought to be a draw, but White wins in 51% of WTM positions and 87% of BTM positions

Endgame databases

Deep Blue's search
- ~200 million moves / second = 3.6 * 10^10 moves in 3 minutes
- 3 minutes corresponds to ~7 plies of uniform-depth minimax search, or 10-14 plies of uniform-depth alpha-beta search
- 1 second corresponds to 380 years of human thinking time
- Software searches first: selective and singular extensions
- Specialized hardware searches the last 5 ply

Deep Blue's hardware
- 32-node RS/6000 SP multicomputer
- Each node had 1 IBM Power2 Super Chip (P2SC) and 16 chess chips
- Chess chips handle move generation (often takes 40-50% of time), evaluation, and some endgame heuristics & small endgame databases
- 32 GB opening & endgame database

Role of computing power

Kasparov lost to Deep Blue in 1997
Win-loss-draw-draw-draw-loss from Kasparov's perspective (in even-numbered games, Deep Blue played White)

Future directions
Engineering
- Better evaluation functions for chess
- Faster hardware
- Empirically better search algorithms
- Learning from examples and especially from self-play
- There already are grandmaster-level programs that run on a regular PC, e.g., Fritz
Fun
- Harder games, e.g., Go
- Easier games, e.g., checkers (some openings solved [2005])
Science
- Extending game theory with normative models of bounded rationality
- Developing normative (e.g., decision-theoretic) search algorithms: MGSS* [Russell & Wefald 1991] is an example of a first step; conspiracy numbers
- Impacts are beyond just chess: impacts of faster hardware; impacts of game theory with bounded rationality, e.g., auctions, voting, electronic commerce, coalition formation