Multiple Agents. Why can't we all just get along? (Rodney King)


Contents

Introduction
    Assumptions
    Definitions
Games
    Normal Form of a Game
    Simple Example: Rock-Paper-Scissors
    Extensive Form of a Game
    Perfect Information Example
    Imperfect Information Example
    Multiagent Decision Networks
    Example Multiagent Decision Network
Two-Agent Zero-Sum Games
    Two-Agent Zero-Sum Games
    Minimax Example
    Minimax Procedure
    Evaluation Functions
    Example
    Minimax with Eval
    Idea of Alpha-Beta Pruning
    Example
    Alpha-Beta with Eval
    Performance of Minimax and Alpha-Beta
    Other Issues
Partially Observable Multiagent Reasoning
    Example
    Example
    Strategy Profiles
    Nash Equilibria
    Multiple Nash Equilibria
    Prisoner's Dilemma
    Tragedy of the Commons
    Computing Nash Equilibria
    Learning

Introduction

Assumptions

- Each agent can act autonomously.
- Each agent has its own information about the world.
- Each agent can have its own utility function.
- A mechanism specifies how the actions of the agents lead to outcomes, e.g., the rules of chess.
- A rational agent acts strategically; its actions are based on utility.
- Nature can be defined as an agent with no utility and no strategy.

Definitions

- Agents can be fully cooperative: they share the same utility.
- Agents can be fully competitive: they have opposite utilities.
- In a zero-sum game, the sum of the utilities for the agents is zero for every outcome.
- Game theory studies what agents should do in a multi-agent setting.

Games

Normal Form of a Game

The strategic form or normal form of a game contains:
- a finite set I of agents, {1, ..., n};
- a set of strategies for each agent, where a strategy profile s = (s_1, ..., s_n) means that each agent i follows strategy s_i;
- utility functions, where u_i(s) gives the expected utility for agent i when all agents follow strategy profile s.
An outcome is produced when all the agents follow a strategy profile.

Simple Example: Rock-Paper-Scissors

In the game of rock-paper-scissors, there are two agents, each choosing one of three actions {rock, paper, scissors}. For each combination of actions, a payoff matrix specifies the utilities (each entry lists the utility for Agent 1, then for Agent 2):

                            Agent 2
                    rock     paper    scissors
  Agent 1 rock      0, 0    -1, 1     1, -1
          paper     1, -1    0, 0    -1, 1
          scissors -1, 1     1, -1    0, 0
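To make the normal form concrete, here is a minimal Python sketch (an illustration, not part of the slides; the names PAYOFF and expected_utility are assumptions) that encodes the rock-paper-scissors payoff matrix and computes u_i(s) for a mixed strategy profile:

    import itertools

    ACTIONS = ["rock", "paper", "scissors"]
    # (action of agent 1, action of agent 2) -> (utility 1, utility 2)
    PAYOFF = {
        ("rock", "rock"): (0, 0),      ("rock", "paper"): (-1, 1),
        ("rock", "scissors"): (1, -1), ("paper", "rock"): (1, -1),
        ("paper", "paper"): (0, 0),    ("paper", "scissors"): (-1, 1),
        ("scissors", "rock"): (-1, 1), ("scissors", "paper"): (1, -1),
        ("scissors", "scissors"): (0, 0),
    }

    def expected_utility(strategy1, strategy2, agent):
        """Expected utility u_i(s) when the strategy profile s consists of
        two mixed strategies, given as dicts mapping action -> probability."""
        return sum(strategy1[a1] * strategy2[a2] * PAYOFF[(a1, a2)][agent]
                   for a1, a2 in itertools.product(ACTIONS, ACTIONS))

    # The uniform strategy profile gives each agent expected utility 0:
    uniform = {a: 1 / 3 for a in ACTIONS}
    print(expected_utility(uniform, uniform, agent=0))  # 0.0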

Extensive Form of a Game

The extensive form of a game, or game tree, is a finite tree where the nodes are states and the edges are actions.
- Each internal node is controlled by a particular agent.
- Each edge out of a node controlled by agent i corresponds to an action for agent i.
- Each node controlled by nature has a probability distribution over its children.
- The leaves represent final outcomes and are labeled with a utility for each agent.

Perfect Information Example

[Figure: a perfect-information game tree whose leaves are labeled with utility pairs such as 1,-1; -1,1; and 0,0.]

Imperfect Information Example

An agent cannot distinguish the nodes in an information set. [Figure: a game tree in which several nodes controlled by the same agent are grouped into one information set.]

Multiagent Decision Networks

A multiagent decision network is a factored representation of a multiagent decision problem.
- Each decision node is labeled with the agent that makes the decision for the node.
- Each agent has a utility node.
- As with a decision network, the parents of a decision node are the information available for making the decision.
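One possible (assumed) Python representation of an extensive-form node follows; the class name Node and its fields are illustrative choices, not notation from the slides:

    from dataclasses import dataclass, field

    @dataclass
    class Node:
        # Agent in control ("agent1", "agent2", "nature"), or None for a leaf.
        player: object = None
        # For agent nodes: action -> child Node.
        children: dict = field(default_factory=dict)
        # For nature nodes: action -> probability of reaching that child.
        chance: dict = field(default_factory=dict)
        # For leaves: one utility per agent.
        utility: tuple = ()

    # A leaf where agent 1 receives utility 1 and agent 2 receives -1,
    # reached by agent 1 choosing the action "left":
    leaf = Node(utility=(1, -1))
    root = Node(player="agent1", children={"left": leaf})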

Example Multiagent Decision Network

The scenario is that two roommates each might have a test to study for, and each chooses whether to study or to cook some food for both of them to eat. [Figure: a decision network with chance nodes Test1 and Test2, decision nodes Cook1/Study1 and Cook2/Study2, nodes Ready1, Ready2, and Food, and one utility node per agent, Utility1 and Utility2.]

Two-Agent Zero-Sum Games

A two-agent zero-sum game is one in which a positive reward for one agent is an equally negative reward for the other agent. The utility can therefore be characterized by a single number that one agent is trying to maximize and the other agent is trying to minimize. Having a single value for a two-agent zero-sum game leads to a minimax strategy. Each node is either a MAX node, if it is controlled by the maximizing agent, or a MIN node, if it is controlled by the minimizing agent. Treat the agent currently in control (whose turn it is to move) as MAX.

Minimax Example

[Figure: a small game tree with alternating MAX and MIN levels; leaf values such as 1, 0, and -1 are backed up to the root by the minimax rule.]

Minimax Procedure

Procedure Minimax(N)
    Inputs: N, a node in a game tree
    if N is a leaf node, then
        v ← value of N
    else if N is a MAX node, then
        v ← −∞
        for each child C of N: v ← maximum of v and Minimax(C)
    else if N is a MIN node, then
        v ← +∞
        for each child C of N: v ← minimum of v and Minimax(C)
    return v
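The Minimax procedure translates directly into runnable Python. This is a sketch under an assumed representation (not from the slides): a leaf is a number, and an internal node is a pair (player, children) with player "max" or "min".

    import math

    def minimax(node):
        if isinstance(node, (int, float)):
            return node                      # leaf: return its value
        player, children = node
        if player == "max":
            v = -math.inf
            for c in children:
                v = max(v, minimax(c))
        else:                                # MIN node
            v = math.inf
            for c in children:
                v = min(v, minimax(c))
        return v

    # A MAX root choosing between two MIN nodes:
    print(minimax(("max", [("min", [1, 0]), ("min", [0, -1])])))  # 0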

Evaluation Functions

The game tree of many games is too large to search. An alternative is:
- Search as deeply as possible given the time requirement.
- Use an evaluation function to estimate the values of the nodes at the fringe.
- Use minimax to combine those values into an overall evaluation.

Example

evaluation = +10 if MAX has 3 in a line; −10 if MIN has 3 in a line; otherwise +1 for each potential 3-in-a-line for MAX and −1 for each potential 3-in-a-line for MIN (MAX plays X; MIN plays O). [Figure: a depth-2 search tree in which the two MIN nodes evaluate to min(−2, −3, −2, −3) = −3 and min(−4, −3, −4, −3) = −4, so the MAX root takes max(−3, −4) = −3.]

Minimax Procedure with Evaluation Function

Procedure Minimax(N, d, f)
    Inputs: N, a game tree node; d, the search depth; f, the evaluation function
    if N is a leaf node, then return value of N
    else if d = 0, then return f(N)
    else if N is a MAX node, then
        v ← −∞
        for each child C of N: v ← maximum of v and Minimax(C, d−1, f)
    else if N is a MIN node, then
        v ← +∞
        for each child C of N: v ← minimum of v and Minimax(C, d−1, f)
    return v

Idea of Alpha-Beta Pruning

Alpha-beta pruning avoids search that won't change the minimax evaluation. Example: if MAX has a move with value 3, stop searching other moves known to be ≤ 3.

General principle: consider a node N, and let α = the largest v among the MAX ancestors of N and β = the smallest v among the MIN ancestors of N. If α ≥ β, processing N cannot change the evaluation.

Proof: let v = N's minimax value. α ≥ β implies α ≥ v or v ≥ β. α ≥ v implies v cannot change the ancestor with value α; v ≥ β implies v cannot change the ancestor with value β. This implies v cannot propagate to the top.
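The depth-limited procedure is also easy to run. This sketch uses the same assumed representation as before, with the evaluation function passed in as a parameter and applied at the depth cutoff:

    import math

    def minimax_eval(node, depth, evaluate):
        if isinstance(node, (int, float)):
            return node                       # true value at a leaf
        if depth == 0:
            return evaluate(node)             # estimated value at the fringe
        player, children = node
        v = -math.inf if player == "max" else math.inf
        combine = max if player == "max" else min
        for c in children:
            v = combine(v, minimax_eval(c, depth - 1, evaluate))
        return v

    # Cut off at depth 1 with a (hypothetical) evaluation function that
    # scores every unexplored internal node as 0:
    tree = ("max", [("min", [2, 5]), ("min", [-1, 3])])
    print(minimax_eval(tree, 1, evaluate=lambda n: 0))  # 0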

Example

[Figure: an alpha-beta example on a depth-3 tree with MAX, MIN, and MAX levels. The leftmost bottom MAX node evaluates to max(−2, −1, −3, −1, −2, −2) = −1; after pruning, its MIN parent becomes min(−1, ≥ −1, ≥ 0, ≥ −1) = −1; the root becomes max(−1, ≤ −1) = −1.]

Alpha-Beta Procedure with Eval Function

Procedure Alpha-Beta(N, d, f, α, β)
    if N is a leaf node, then return value of N
    else if d = 0, then return f(N)
    else if N is a MAX node, then
        for each child C of N:
            α ← max(α, Alpha-Beta(C, d−1, f, α, β))
            if α ≥ β, then return α
    else if N is a MIN node, then
        for each child C of N:
            β ← min(β, Alpha-Beta(C, d−1, f, α, β))
            if α ≥ β, then return β
    if N is a MAX node, then return α, else return β

Performance of Minimax and Alpha-Beta

Let b be the branching factor and d the depth of search. Minimax visits every state from level 0 to d:

    Σ_{i=0}^{d} b^i = (b^(d+1) − 1)/(b − 1) = Θ(b^d)

Alpha-beta visits as few as Ω(b^(d/2)) states, depending on a good ordering of children; actual programs approach this minimum bound. Alpha-beta pruning thus allows programs to look ahead nearly twice as many moves as minimax.

Other Issues

- Horizon problem.
- Quiescence.
- Databases of openings and end games.
- Games of chance: expectiminimax extends minimax to include chance nodes in the game tree (example: the roll of the dice in backgammon). A chance node is evaluated by summing, over its children, the probability of each child times the child's value.
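Under the same assumed representation, here is a runnable sketch of the alpha-beta procedure, followed by a minimal expectiminimax for chance nodes; the ("chance", ...) encoding is an assumption made for this sketch.

    import math

    def alpha_beta(node, depth, evaluate, alpha=-math.inf, beta=math.inf):
        if isinstance(node, (int, float)):
            return node
        if depth == 0:
            return evaluate(node)
        player, children = node
        if player == "max":
            for c in children:
                alpha = max(alpha, alpha_beta(c, depth - 1, evaluate, alpha, beta))
                if alpha >= beta:
                    return alpha             # prune: a MIN ancestor already has beta <= alpha
            return alpha
        else:
            for c in children:
                beta = min(beta, alpha_beta(c, depth - 1, evaluate, alpha, beta))
                if alpha >= beta:
                    return beta              # prune: a MAX ancestor already has alpha >= beta
            return beta

    # Same value as plain minimax, with fewer nodes visited:
    tree = ("max", [("min", [3, 12, 8]), ("min", [2, 4, 6]), ("min", [14, 5, 2])])
    print(alpha_beta(tree, 10, evaluate=lambda n: 0))  # 3

    def expectiminimax(node):
        """Chance nodes are encoded as ("chance", [(probability, child), ...])."""
        if isinstance(node, (int, float)):
            return node
        player, children = node
        if player == "chance":
            return sum(p * expectiminimax(c) for p, c in children)
        values = [expectiminimax(c) for c in children]
        return max(values) if player == "max" else min(values)

    # A fair coin flip between payoffs 1 and -1 has expected value 0:
    print(expectiminimax(("chance", [(0.5, 1), (0.5, -1)])))  # 0.0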

Partially Observable Multiagent Reasoning

Example

What should the kicker and the goalie do in a penalty kick? The probability of a goal for each combination of actions:

                      goalie
                  left    right
  kicker  left    0.6     0.2
          right   0.3     0.9

Example

Let p_k = the probability that the kicker kicks right and p_j = the probability that the goalie jumps right. Then

    P(goal) = 0.9 p_k p_j + 0.3 p_k (1 − p_j) + 0.2 (1 − p_k) p_j + 0.6 (1 − p_k)(1 − p_j)
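This formula is easy to check numerically; the helper below is hypothetical, not from the slides. At p_k = 0.4 and p_j = 0.3 (the Nash equilibrium quoted later), neither agent's pure deviations change P(goal):

    def p_goal(pk, pj):
        """P(goal) when the kicker kicks right with probability pk
        and the goalie jumps right with probability pj."""
        return (0.9 * pk * pj + 0.3 * pk * (1 - pj)
                + 0.2 * (1 - pk) * pj + 0.6 * (1 - pk) * (1 - pj))

    print(round(p_goal(0.4, 0.3), 3))  # 0.48
    print(round(p_goal(0.0, 0.3), 3))  # 0.48: always kicking left is no better
    print(round(p_goal(1.0, 0.3), 3))  # 0.48: always kicking right is no better
    print(round(p_goal(0.4, 0.0), 3))  # 0.48: the goalie cannot do better either
    print(round(p_goal(0.4, 1.0), 3))  # 0.48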

Strategy Profiles

Assume a general n-player game. A strategy for an agent is a probability distribution over the actions for that agent. A strategy profile is an assignment of a strategy to each agent. A strategy profile s has a utility for each agent; let utility(s, i) be the utility of strategy profile s for agent i. If s is a strategy profile, then s_i is the strategy of agent i in s and s_{-i} is the set of strategies of the other agents; thus s is (s_i, s_{-i}).

Nash Equilibria

s_i is a best response to s_{-i} if, for all other strategies s_i' for agent i,

    utility((s_i, s_{-i}), i) ≥ utility((s_i', s_{-i}), i)

A strategy profile s is a Nash equilibrium if, for each agent i, strategy s_i is a best response to s_{-i}. That is, a Nash equilibrium is a strategy profile such that no agent can do better by unilaterally changing its strategy.

Theorem [Nash, 1950]: every finite game has at least one Nash equilibrium. In the soccer example, p_k = 0.4 and p_j = 0.3 is a Nash equilibrium.

Multiple Nash Equilibria

Hawk-Dove game (D > R):

                     Agent 2
                  dove        hawk
  Agent 1 dove    R/2, R/2    0, R
          hawk    R, 0        -D, -D

Which activity to do together?

                       Agent 2
                  shopping   football
  Agent 1 shopping  2, 1       0, 0
          football  0, 0       1, 2

Each of these games has two pure-strategy Nash equilibria, so knowing that an equilibrium exists does not tell the agents which one to play.

Prisoner's Dilemma

Two prisoners accused of a crime can each stay silent or talk, providing evidence against the other prisoner. The payoff is the negative of the number of years in prison:

                      Prisoner 2
                   silent     talk
  Prisoner 1 silent  -1, -1   -3, 0
             talk     0, -3   -2, -2
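The best-response condition can be checked mechanically for pure strategies. The following checker is an illustration (its names and game encoding are assumptions): it keeps exactly those action profiles where neither agent gains by deviating.

    import itertools

    def pure_nash(payoff, actions1, actions2):
        """Pure-strategy Nash equilibria of a two-agent game, where payoff
        maps (action1, action2) -> (utility1, utility2)."""
        equilibria = []
        for a1, a2 in itertools.product(actions1, actions2):
            u1, u2 = payoff[(a1, a2)]
            best1 = all(payoff[(b1, a2)][0] <= u1 for b1 in actions1)
            best2 = all(payoff[(a1, b2)][1] <= u2 for b2 in actions2)
            if best1 and best2:
                equilibria.append((a1, a2))
        return equilibria

    # Prisoner's dilemma: (talk, talk) is the only pure equilibrium,
    # even though (silent, silent) would be better for both prisoners.
    pd = {("silent", "silent"): (-1, -1), ("silent", "talk"): (-3, 0),
          ("talk", "silent"): (0, -3),    ("talk", "talk"): (-2, -2)}
    print(pure_nash(pd, ["silent", "talk"], ["silent", "talk"]))  # [('talk', 'talk')]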

Tragedy of the Commons

There are 100 agents sharing a common environment. Each agent can choose to do nothing, with a 0 payoff, or to do a selfish action that has a +10 payoff for the agent but adds a −1 payoff for every agent (including itself). If only one agent does the selfish action, that agent has a +9 payoff. If every agent does the selfish action, each agent gets a −90 payoff.

Computing Nash Equilibria

To compute Nash equilibria:
- Eliminate dominated strategies.
- Determine which actions will have non-zero probabilities; this is the support set.
- Determine the probabilities for the actions in the support set.

                  Agent 2
              x      y      z
  Agent 1 a  3,5    5,1    1,2
          b  1,1    2,9    6,4
          c  2,6    4,7    0,8

Learning

Issues: multiple equilibria, and unknown utilities and outcomes for the other agents. Approach: each agent learns what is good for itself.

Repeat:
    Select action a using distribution P.
    Do a and observe the payoff.
    Update Q[a] ← Q[a] + α(payoff − Q[a]).
    Find the action a_best that maximizes Q.
    Increase P[a_best] and normalize P.

If the other agents have fixed, possibly stochastic, strategies, this algorithm converges near a best response (as long as all actions are tried occasionally).
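Here is the learning loop as a runnable sketch; the step size alpha, the probability increment delta, and all helper names are assumptions made for this example. As a test, the kicker from the soccer example learns against a goalie fixed (arbitrarily, for illustration) at p_j = 0.5, where kicking right scores with probability 0.6 and kicking left with probability 0.4:

    import random

    def learn_best_response(actions, play, alpha=0.1, delta=0.01, steps=10000):
        Q = {a: 0.0 for a in actions}
        P = {a: 1.0 / len(actions) for a in actions}
        for _ in range(steps):
            # Select action a using distribution P; do a and observe the payoff.
            a = random.choices(actions, weights=[P[x] for x in actions])[0]
            payoff = play(a)
            # Q[a] <- Q[a] + alpha * (payoff - Q[a])
            Q[a] += alpha * (payoff - Q[a])
            # Increase P[a_best] and normalize P.
            a_best = max(Q, key=Q.get)
            P[a_best] += delta
            total = sum(P.values())
            for x in actions:
                P[x] /= total
        return P, Q

    def kick(action, pj=0.5):
        """One penalty kick against a goalie who jumps right with probability pj."""
        jump_right = random.random() < pj
        if action == "right":
            scored = random.random() < (0.9 if jump_right else 0.3)
        else:
            scored = random.random() < (0.2 if jump_right else 0.6)
        return 1.0 if scored else 0.0

    P, Q = learn_best_response(["left", "right"], kick)
    print(P)  # P["right"] drifts toward 1, the best response to pj = 0.5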