Network-building. Introduction. Page 1 of 6


Page 1 of 6

CS 684: Algorithmic Game Theory
Friday, March 12, 2004
Instructor: Éva Tardos
Guest Lecturer: Tom Wexler (wexler at cs dot cornell dot edu)
Scribe: Richard C. Yeh

Network-building

This lecture describes a game that models the building of a network. There are three main points:

1. For the general case of this game, Nash equilibria do not always exist, and determining whether a Nash equilibrium exists is NP-complete.

2. When Nash equilibria exist in this game, the total cost of building certain Nash-equilibrium networks is between 1 and k (the number of players) times the cost of the optimal network.

3. For a simplified version of this game, a Nash equilibrium can be found whose total cost equals the cost of the optimal network.

For more details, please see E. Anshelevich, A. Dasgupta, É. Tardos, and T. Wexler, "Near-optimal network design with selfish agents," STOC '03, available at http://www.cs.cornell.edu/~wexler/.

Introduction

Previously, Professor Tardos presented Roughgarden games, in which players route traffic in a network; each player's selfish motive is to achieve the shortest routing time. Today's lecture takes a different perspective: building the network itself.

Page 2 of 6

The game

We start with a graph G = (V, E) representing a network: think of the vertices as servers and the edges as possible links. Every edge e has some cost c_e >= 0 needed to install the link connecting the servers at its endpoints.

We have k players. Each player i has a pair of nodes s_i, t_i, and is interested only in building just enough of the network to connect those nodes, not in building the entire network. The players' strategies are given by a payment matrix: player i contributes p_i(e) >= 0 towards the cost of edge e.

Bought network

First, we check whether an edge e is bought: add up all the players' contributions to e and check whether this sum is at least the cost of the edge. Define the bought graph (or bought network) to be the set B of edges e satisfying

    sum_i p_i(e) >= c_e.

To have the players connect their pairs as cheaply as possible, define each player's utility u_i to be:

- minus the total amount paid, u_i = - sum_e p_i(e), if the bought network connects s_i to t_i (player i pays all of his contributions, regardless of whether every edge he contributed to was actually bought);
- minus infinity otherwise (thereby forcing players to connect).

Details and comments

- Today, we just want to connect each pair as cheaply as possible; there is no notion of fairness or capacity.
- The sources and sinks are not necessarily disjoint.
- The graph is undirected (but the directed case is not much different).
- In the examples that follow, every edge cost is set to 1 unless marked otherwise.
- There can be nodes that are not terminals. (We will look at Steiner nodes later in the lecture.)
- In a real situation, we might have other constraints that we ignore today, such as multiple pairs per player, the need for redundant network links, or the desire for shortest routing times.
- This is a full-information game: everyone knows the graph and all contributions.
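To make these definitions concrete, here is a minimal sketch in Python (the dictionary representation and all names are our own, not from the lecture) that computes the bought network B and a player's utility from a payment profile:

```python
from math import inf

def bought_edges(costs, payments):
    """An edge is bought when the players' contributions cover its cost."""
    return {e for e, c in costs.items()
            if sum(p.get(e, 0) for p in payments) >= c}

def connects(edges, s, t):
    """Check s-t connectivity in the bought (undirected) network via DFS."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    stack, seen = [s], {s}
    while stack:
        u = stack.pop()
        if u == t:
            return True
        for w in adj.get(u, []):
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return s == t

def utility(i, costs, payments, pairs):
    """u_i = -(total paid) if the bought network connects (s_i, t_i), else -inf."""
    B = bought_edges(costs, payments)
    s, t = pairs[i]
    return -sum(payments[i].values()) if connects(B, s, t) else -inf
```

For a single edge of cost 1 with two players splitting the payment evenly, each player's utility is -0.5; if the contributions fall short, the edge is not bought and each utility drops to minus infinity.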

Page 3 of 6

Plan: study the Nash equilibria of this game. What are the stable solutions?

For illustration, consider the following network:

[Figure: a network with three source-sink pairs (s_1, t_1), (s_2, t_2), (s_3, t_3) and seven edges. A payment table lists each player's contribution p_i(e) to each edge: players 1 and 2 contribute 0.5 to several shared edges, player 3 pays 2 on one edge, and five of the seven edges are bought.]

What happens? All players have connected their sources and sinks. However, this is not a Nash equilibrium, because player 1 would prefer to switch his strategy:

[Figure: the same network after player 1's deviation; player 1 now pays only for a cheaper alternative edge, and a different set of edges is bought.]

Result: player 1's utility goes from -1.5 to -1; player 2's utility goes from -1.5 to minus infinity (player 2 fails to connect).

Burning questions:
- Do Nash equilibria always exist?
- If so, how expensive are they (compared to an optimum network)?
- Can we find these equilibria?

All of these questions have disappointing answers.
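The deviation in the example can be computed mechanically. Given the other players' contributions, player i's cheapest way to connect is a shortest s_i-t_i path under the residual costs max(0, c_e - sum of the others' payments on e); this fact is implicit in the deviation argument, and the sketch below (representation and names are our own) uses Dijkstra's algorithm to find that best-response cost:

```python
import heapq

def best_response_cost(i, costs, payments, pairs):
    """Cheapest way for player i to connect s_i to t_i, given the others'
    payments: Dijkstra over residual costs max(0, c_e - others' bids on e)."""
    residual = {e: max(0.0, c - sum(p.get(e, 0)
                                    for j, p in enumerate(payments) if j != i))
                for e, c in costs.items()}
    adj = {}
    for (u, v), r in residual.items():
        adj.setdefault(u, []).append((v, r))
        adj.setdefault(v, []).append((u, r))
    s, t = pairs[i]
    dist, frontier = {}, [(0.0, s)]
    while frontier:
        d, u = heapq.heappop(frontier)
        if u in dist:
            continue
        dist[u] = d
        if u == t:
            return d
        for v, r in adj.get(u, []):
            if v not in dist:
                heapq.heappush(frontier, (d + r, v))
    return float("inf")
```

A profile in which every player connects is then a Nash equilibrium exactly when no player's total payment exceeds her best-response cost.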

Page 4 of 6

Basic properties of Nash equilibria in this game

Any Nash equilibrium must:

1. buy an acyclic network (a tree or a forest): if there were a cycle, some player would prefer to stop paying for one of its edges, without affecting connectivity;
2. have players contribute only to edges on their unique path in the bought network;
3. for any edge e, collect a total payment of either c_e or zero.

Example with no Nash equilibrium

Consider the following simple example in which no Nash equilibrium exists: two players, four nodes (two sinks and two sources), and four edges, each with cost 1. (This is also Figure 1 from the paper.)

[Figure: a small four-edge network with each player's source and sink at opposite corners, every edge of cost 1.]

For example, we could begin with both players paying for 1.5 edges each. But then player 1 would prefer to defect and pay less; and then player 2 would prefer to change his payments in response; and so on, with best responses cycling forever. This example shows that Nash equilibria don't necessarily exist.

Example with multiple Nash equilibria

Here's another example: imagine a two-node, two-edge graph, where all k players have the same source node s and sink node t. The two parallel links have cost 1 and cost k.

[Figure: nodes s and t joined by two parallel edges, one of cost 1 and one of cost k.]

There are at least three Nash equilibria:
- Each player could pay 1/k, and the group as a whole buys the cheaper edge.
- Each player could pay 1, and the group as a whole buys the cost-k edge.
- One player could pay 1, buying the cheaper edge, and the other players free-ride.
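The three equilibria above can be checked mechanically. The following sketch is specific to the parallel-link example, and the stability test is our own formulation: a profile is a (weak) Nash equilibrium here when some link is bought and no player can connect strictly more cheaply by topping up a single link on her own.

```python
def is_equilibrium(payments, edge_costs):
    """Two parallel s-t links; a player connects by having any one link bought.
    Stable iff no player can connect for strictly less than she currently pays."""
    bought = [name for name, c in edge_costs.items()
              if sum(p.get(name, 0) for p in payments) >= c]
    if not bought:
        return False          # nobody connects, so every player wants to deviate
    for i, p in enumerate(payments):
        paid = sum(p.values())
        # Cheapest deviation: pay the residual cost of some single link alone.
        cheapest = min(max(0, c - sum(q.get(name, 0)
                                      for j, q in enumerate(payments) if j != i))
                       for name, c in edge_costs.items())
        if cheapest < paid:
            return False
    return True

k = 4
edges = {"cheap": 1, "expensive": k}
split_cheap = [{"cheap": 1 / k} for _ in range(k)]       # everyone pays 1/k
split_expensive = [{"expensive": 1} for _ in range(k)]   # everyone pays 1
free_ride = [{"cheap": 1}] + [{} for _ in range(k - 1)]  # one player pays it all
```

All three profiles pass the test, while the empty profile (nobody pays) does not.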

Page 5 of 6

Calculating the Nash/Opt ratio

Claim: any Nash equilibrium costs at most k times the total cost of the optimal solution, k * cost(OPT).

Proof: Suppose otherwise, that there exists a Nash equilibrium whose total cost exceeds k * cost(OPT). Then at least one player pays more than cost(OPT). This is a contradiction: that player would have preferred to deviate and single-handedly buy the entire optimal network, which connects his pair, for only cost(OPT).

(Recall and compare: in the Roughgarden game, the costs of Nash equilibria were unique, and Nash equilibria always existed.)

Define the optimistic price of anarchy to be the ratio of the total cost of the cheapest Nash equilibrium to the total cost of the optimal solution. This quantity indicates how good uncoordinated solutions can be.

Even the best Nash equilibrium can be terrible. For example, combine the two previous examples by replacing the cheap link with the no-equilibrium network:

[Figure: the two-link graph from before, with the cheap parallel link replaced by the four-edge no-equilibrium gadget; as before, the gadget's players (not counted among the k) have sources and sinks at opposite corners of the little network.]

Since no stable division of payments for the gadget exists, the k players are forced to buy the expensive edge, so even the cheapest equilibrium costs roughly k times the optimum.

Nash proved that mixed equilibria always exist. In our case, we consider only pure strategies, because all players must connect; otherwise the expected utility is not well defined. Any given connection is a manifestation of a pure strategy.
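The bounds in this section are already tight in the parallel-link example: the worst equilibrium costs exactly k times the optimum, while the best costs the same as the optimum. A quick numerical check (names are our own; k = 8 keeps the floating-point sums exact):

```python
k = 8
edge_costs = {"cheap": 1, "expensive": k}

def total_cost(payments):
    """Total amount the players spend in a payment profile."""
    return sum(sum(p.values()) for p in payments)

opt = min(edge_costs.values())                  # optimal network: just the cheap link
worst_ne = total_cost([{"expensive": 1}] * k)   # every player pays 1 on the cost-k link
best_ne = total_cost([{"cheap": 1 / k}] * k)    # every player pays 1/k on the cheap link
```

Here worst_ne / opt equals k (the price of anarchy of the claim) and best_ne / opt equals 1 (the optimistic price of anarchy).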

Page 6 of 6

Single-source connection game

Since finding Nash equilibria is in general NP-complete, we will consider a simpler case. Define a single-source connection game to be one in which all players share the same source node: for all players i, s_i = s. Outside of the game-theoretic context, this is the Steiner tree problem, with the root as the source.

Theorem: in a single-source connection game, Nash equilibria exist, and the optimistic price of anarchy is 1.

Proof sketch:

Simple case: all nodes are player-terminals. Imagine starting with a minimum-cost spanning tree. This is stable if every node pays for the edge immediately toward the root: if any player preferred to buy a different edge (i.e., one not in the minimum-cost spanning tree), then swapping that edge in would yield a cheaper spanning tree, so we must not have started with a minimum-cost spanning tree.

[Figure: a spanning tree rooted at the source s, with each node paying for its parent edge.]

Complication: if we add Steiner (non-terminal) nodes (represented in the figures as filled disks), then we must determine a way to pay for the edges from the Steiner nodes toward the root. We must begin with an optimum Steiner tree.

[Figure: a Steiner tree rooted at s with edge costs 4, 5, and 3; here, players will pay a maximum of 5.]

[Figure: a second Steiner tree rooted at s: four terminals A, B, C, D hang off Steiner nodes by edges of cost 4, with further edges of cost 3 and 5 toward the root.]

The payments don't have to be split evenly: in the second figure, players A and D will pay up to 8, while players B and C will only pay up to 5. The idea is that we can assign the payments from the bottom up (from the sinks to the source), never violating the implicit constraint that a player will pay only as much as his or her cheapest alternative path to the root.

This works; here is an argument by contradiction. Suppose we ask all the players what they're willing to pay, and it's not enough to buy the optimum Steiner tree. Then some player must have a cheaper alternate edge to buy than the one assigned by the optimum Steiner tree. But if we were to allow this player to deviate, the total payment for the bought network would be less than the cost of the optimal Steiner tree, while still connecting every player to the root, contradicting the optimality of the tree.

In both cases, the minimum-cost spanning tree or minimum-cost Steiner tree is bought at optimal cost, and the optimistic price of anarchy is 1.
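The simple case of the proof sketch (all nodes are player-terminals) translates directly into code: grow a minimum-cost spanning tree from the source and have each node pay for the edge immediately toward the root. A sketch under those assumptions (Prim's algorithm; names are our own):

```python
import heapq

def mst_payments(nodes, edges, root):
    """Prim's MST grown from the root; each non-root node pays for the edge
    connecting it toward the root, as in the proof sketch above."""
    adj = {v: [] for v in nodes}
    for u, v, c in edges:
        adj[u].append((c, v))
        adj[v].append((c, u))
    in_tree, pay = {root}, {}
    frontier = [(c, root, v) for c, v in adj[root]]
    heapq.heapify(frontier)
    while frontier:
        c, u, v = heapq.heappop(frontier)
        if v in in_tree:
            continue
        in_tree.add(v)
        pay[v] = (c, u)          # the player at v pays c for its parent edge
        for c2, w in adj[v]:
            if w not in in_tree:
                heapq.heappush(frontier, (c2, v, w))
    return pay
```

By the cut argument above, no player can connect to the root more cheaply than by the tree edge she pays for, so this payment profile is stable and its total cost equals the cost of the tree.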