Economics 109 Practice Problems 2, Vincent Crawford, Spring 2002

In addition to these problems and those in Practice Problems 1 and the midterm, you may find the problems in Dixit and Skeath, Games of Strategy, Chapters 2-9 and (later) Chapter 10 helpful.

P14. For each of the following statements, say whether it is true or false. If it is true, explain why (or give a proof). If it is false, give an example of a game for which it is false.

(a) If a two-person zero-sum game in which each player has two pure strategies can be reduced to a single strategy combination by iterated deletion of strictly dominated strategies, then the players' strategies in that combination are their unique security-level-maximizing (maximin) strategies.
(b) If a strategy is weakly but not strictly dominated by another strategy for a player, it can be one of that player's security-level-maximizing strategies.
(c) If a strategy is weakly but not strictly dominated by another strategy for a player, it can be that player's unique security-level-maximizing strategy.
(d) It is optimal to play your security-level-maximizing (maximin) strategy in any game, zero-sum or not.

P15. In each of the following games, graph the players' best-response curves, letting p be the probability that Row plays Up and q be the probability that Column plays Left. Then use your best-response curves to find all the equilibria in each game, whether in pure or mixed strategies. (Games (a) and (b) are zero-sum and only Row's payoffs are shown; games (c) and (d) are non-zero-sum and both players' payoffs are shown.)

(a)            Left   Right
     Up         -3      1
Row  Down        0     -5

(b)            Left   Right
     Up          5     -4
Row  Down        1      2

(c)                 Bill
                B1        B2
     A1        4, 2     -5, 6
Ann  A2       -1, 5      0, -2

(d)                 Bill
                B1        B2
     A1        3, 2      0, 1
Ann  A2        1, 0      4, 6
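P15 asks you to graph best-response curves by hand. If you want to check such a graph numerically, the following Python sketch tabulates the best-response correspondences of a hypothetical 2x2 zero-sum game; the payoff matrix A below is an illustrative assumption, not one of the games above.

# A minimal sketch of tabulating best responses in a 2x2 game, using a
# hypothetical payoff matrix. Row mixes with probability p on Up; Column
# mixes with probability q on Left.

import numpy as np

# Hypothetical payoffs: A[i][j] is Row's payoff, B[i][j] is Column's payoff,
# with i = 0 for Up, 1 for Down and j = 0 for Left, 1 for Right.
A = np.array([[2.0, -1.0],
              [-1.0, 1.0]])
B = -A  # zero-sum in this hypothetical example

def row_best_response(q):
    """Return the set of optimal p's for Row when Column plays Left with probability q."""
    up = q * A[0, 0] + (1 - q) * A[0, 1]      # expected payoff from Up
    down = q * A[1, 0] + (1 - q) * A[1, 1]    # expected payoff from Down
    if up > down:
        return [1.0]
    if up < down:
        return [0.0]
    return [0.0, 0.5, 1.0]                    # indifferent: any p is a best response

def col_best_response(p):
    """Return the set of optimal q's for Column when Row plays Up with probability p."""
    left = p * B[0, 0] + (1 - p) * B[1, 0]
    right = p * B[0, 1] + (1 - p) * B[1, 1]
    if left > right:
        return [1.0]
    if left < right:
        return [0.0]
    return [0.0, 0.5, 1.0]

# Tabulate the correspondences on a coarse grid; where they cross is an equilibrium.
for q in np.linspace(0, 1, 11):
    print(f"q = {q:.1f}: Row's best p's = {row_best_response(q)}")
for p in np.linspace(0, 1, 11):
    print(f"p = {p:.1f}: Column's best q's = {col_best_response(p)}")

Where the two tabulated correspondences intersect (p a best response to q and q a best response to p) is a Nash equilibrium, which is exactly what the hand-drawn graph displays.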

P20. Find all Nash equilibria, in pure or mixed strategies, in the following game. For each equilibrium, find the expected payoffs of the players.

                Left      Center     Right
      Top       1, -1     8, 6       3, -4
Row   Middle   -7, 100    6, 75      1, 200
      Bottom    5, 2      2, -1      0, 1

P16. Letting p be the Row player's probability of playing his strategy T and letting q be the Column player's probability of playing his strategy L, find the optimal strategies for both players and the Row player's equilibrium expected payoff in the zero-sum two-person game with the payoff matrix below (where only the Row player's payoffs are shown; the Column player's are minus the Row player's):

             L    R
      T      0    k
Row   B      1    0

(a) when k < 0;
(b) when k > 0.
(c) Can increasing one of a player's payoffs ever reduce his equilibrium expected payoff in a zero-sum two-person game? Explain. (Hint: use maximin logic.)
(d) Can increasing one of a player's payoffs ever reduce the probability with which he plays the associated pure strategy? Explain.

Now suppose that Row must choose between T and B before Column chooses between L and R, and Column observes Row's choice before making his own choice.

(e) Clearly identifying each player's strategies, write the payoff matrix for this version of the game.
(f) Identify each player's optimal strategy or strategies when k < 0.
(g) Identify each player's optimal strategy or strategies when k > 0.
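For reference, here is the standard indifference calculation for a 2x2 zero-sum game with no pure-strategy saddle point, written in generic notation (a, b, c, d are hypothetical payoffs, not the entries of the matrix in P16). If Row's payoffs are
\[
\begin{pmatrix} a & b \\ c & d \end{pmatrix}
\]
(rows are Row's strategies, columns are Column's), then Row's maximin mixture p on the first row makes Column indifferent between his two columns:
\[
pa + (1-p)c = pb + (1-p)d \quad\Longrightarrow\quad p^{*} = \frac{d-c}{(a-b)+(d-c)},
\]
Column's optimal mixture q on the first column makes Row indifferent between his two rows:
\[
qa + (1-q)b = qc + (1-q)d \quad\Longrightarrow\quad q^{*} = \frac{d-b}{(a-c)+(d-b)},
\]
and Row's equilibrium expected payoff (the value of the game) is
\[
v = \frac{ad - bc}{(a-b)+(d-c)}.
\]
These formulas apply only when the resulting p* and q* lie strictly between 0 and 1; otherwise the game has a pure-strategy saddle point and maximin logic applies directly.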

P17. In the game morra, each of two players, R and C, simultaneously holds up either one or two fingers and calls out a number, either 2, 3, or 4. (The action of holding up i fingers and calling out the number j will be called "ij".) If the number a player calls out equals the total number of fingers both players hold up, the player wins that amount from his opponent. (If both players call out the correct total, no money changes hands.) Assume that players try to maximize the expectations of their money payoffs.

This question deals with a modified version of morra, in which C has a spy who tells him, before he chooses his action, what number R is going to call out (but not how many fingers he is going to hold up). Assume that the spy can always predict R's number correctly, and that he never lies. Also assume that the players know everything about the game, including the existence of the spy and what information about R's choice the spy will give to C.

(a) Draw the game tree for this game. Indicate clearly each player's choices, the information he has when he makes them, and the payoffs that result from each combination of choices. Show only the choices that have a chance of getting the total number of fingers correct, given the other player's possible choices: "12", "13", "23", and "24" for each player.
(b) Bearing in mind that a strategy must be a complete contingent plan for playing the game, how many different pure (unrandomized) strategies does R have? How many does C have? (You are not asked to list them, just to say how many there are.)
(c) Which, if any, of C's pure strategies are (weakly or strictly) dominated? (You are not asked to list them, just to identify them clearly. You might, for example, say something like "any strategy in which C does not do x after hearing y is dominated".)
(d) Draw the payoff matrix of the game, leaving out any of C's strategies identified as dominated in (c). (Please make R the row player and C the column player, and be sure to identify C's remaining (undominated) strategies clearly enough so that someone could tell from your description exactly what C is supposed to do in every possible situation.)
(e) Use your payoff matrix from (d) to identify which of R's pure strategies become (weakly or strictly) dominated when C's dominated pure strategies are eliminated.
(f) Identify R's maximin (or, equivalently, Nash equilibrium) strategy or strategies and the associated expected payoff. (There is more than one right way to do this. Pick the one that's easiest for you, but explain your argument.)
(g) Identify C's maximin (or, equivalently, Nash equilibrium) strategy or strategies and the associated expected payoff, again explaining your argument.
(h) How much is C's spy worth to him, in terms of expected payoff?

P18. You and your sister (both risk-neutral expected-money maximizers) find two $1 bills on the sidewalk. Mom says that you can keep them if you can agree on how to divide them. There are only three possible ways to divide them: $2 to you, $0 to Sis; $1 to each of you; and $0 to you, $2 to Sis. Mom asks you to propose a division, which Sis can observe before deciding whether to say Yes or No. If Sis says Yes, Mom will enforce your proposal as the division, but if Sis says No, you both get nothing!

(a) Draw the game tree for this game, identifying your pure strategies by the amount you propose for yourself, $2, $1, or $0; and identifying Sis's pure strategies by specifying whether she says Y (for Yes) or N (for No) in each of the possible contingencies. (Remember that she gets to hear your proposal before deciding whether to say Yes or No.) In assigning payoffs, assume that both of you are expected-money-payoff maximizers.
(b) Draw the payoff matrix for this game, identifying pure strategies as in part (a).
(c) Identify all of the pure-strategy Nash equilibria in this game.
(d) Identify all of the subgame-perfect pure-strategy Nash equilibria in this game.
(e) Which strategy would you play? Explain your reasoning.

Now suppose that Mom asks you and Sis to submit simultaneous proposals (identified, for each of you, by the amount you propose to give yourself), with the understanding that if your proposals total $2 or less she will give each of you the amount you proposed, but if they total more than $2, you both get nothing.

(f) Answer part (a) again.
(g) Answer part (b) again.
(h) Answer part (c) again.
(i) Answer part (d) again.
(j) Answer part (e) again.
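Problems P17(a), P18(d), and P19(b)-(d) all use rollback (backward induction). The following is a minimal Python sketch of the procedure on a small hypothetical entry game; the tree and its payoffs are assumptions chosen only for illustration and are not any of the games in this problem set.

# Backward-induction (rollback) sketch on a hypothetical two-player game tree.

def rollback(node, name="root"):
    """Solve the subgame at `node` and return its payoff vector.

    A terminal node is a tuple of payoffs, e.g. (1, 1).
    A decision node is a dict: {"player": 0 or 1, "moves": {label: child}}.
    Ties are broken in favour of the first-listed move.
    """
    if isinstance(node, tuple):
        return node                              # terminal node: payoffs are given
    mover = node["player"]
    best_label, best_payoffs = None, None
    for label, child in node["moves"].items():
        payoffs = rollback(child, name=f"{name}/{label}")   # solve the subgame first
        if best_payoffs is None or payoffs[mover] > best_payoffs[mover]:
            best_label, best_payoffs = label, payoffs
    print(f"At {name}: player {mover} chooses {best_label} -> payoffs {best_payoffs}")
    return best_payoffs

# Hypothetical tree: player 0 chooses Out or In; after In, player 1 chooses
# Fight or Accommodate. Payoffs are illustrative assumptions.
tree = {
    "player": 0,
    "moves": {
        "Out": (0, 2),
        "In": {"player": 1, "moves": {"Fight": (-1, -1), "Accommodate": (1, 1)}},
    },
}

rollback(tree)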

P19. Two firms, 1 and 2, choose quantities to produce and sell in a market for a homogeneous good. Given q_1 and q_2, firm i's profits, i = 1, 2, are p(q_1 + q_2)q_i - cq_i. (p(.) is a differentiable function.) Assume that p'(q) < 0 and p'(q) + p''(q)q < 0 for all q ≥ 0.

Suppose first that firms 1 and 2 choose their quantities simultaneously.

(a) Assuming that q_1 > 0 and q_2 > 0, write the first-order conditions for a Nash equilibrium in this case (which is the Cournot equilibrium).

Now suppose that firm 1 chooses its quantity first, and firm 2 gets to observe firm 1's choice before choosing its own quantity.

(b) Assuming that q_1 > 0 and q_2 > 0, write the first-order conditions for a rollback (subgame-perfect) equilibrium in this case (which is the Stackelberg equilibrium, with firm 1 the leader).

Now suppose that firm 1 chooses its quantity first, but that firm 2 does NOT get to observe firm 1's choice before choosing its own quantity.

(c) Write the game tree for this game, clearly identify each firm's possible strategies, and then find its Nash and rollback (subgame-perfect) equilibrium or equilibria. (Hint: Is it possible, in this case, for firm 2 to base its choice of q_2 on firm 1's choice of q_1?)

Now suppose that firm 1 chooses its quantity first, and that firm 2 observes firm 1's choice before choosing its own quantity, but that firm 1 then gets to revise its choice, costlessly, after observing firm 2's choice.

(d) Write the game tree for this game, clearly identify each firm's possible strategies, and then find its Nash and rollback (subgame-perfect) equilibrium or equilibria. (Hint: Does firm 1's initial choice have any effect on the outcome in this case?)

P20. (a) Find the mixed-strategy Nash equilibrium in the Battle of the Sexes game shown below.
(b) Explain why it is an equilibrium and compute the players' equilibrium expected payoffs.
(c) If the game described here is a complete model of the players' situation, would you expect them to be able to coordinate on one of the (more efficient) pure-strategy equilibria? Why or why not?
(d) What if one player (only) could make a suggestion about strategies before players choose them?
(e) What if both players could make suggestions, simultaneously or sequentially?
(f) What if one player could choose his/her strategy first, and the other could observe it before choosing his/her own strategy?

               Fights   Ballet
     Fights    1, 3     0, 0
Row  Ballet    0, 0     3, 1
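When a payoff matrix is small, the pure-strategy Nash equilibria asked for in problems like P18(c), P20(c), and P21(a) can be checked mechanically by testing mutual best responses. Here is a minimal Python sketch, using a hypothetical coordination game rather than any matrix in this problem set.

# Enumerate pure-strategy Nash equilibria of a bimatrix game by testing
# mutual best responses. The payoff matrices below are hypothetical.

import numpy as np

# A[i, j] is the row player's payoff, B[i, j] the column player's payoff.
A = np.array([[2, 0],
              [0, 1]])
B = np.array([[1, 0],
              [0, 2]])

def pure_nash_equilibria(A, B):
    """Return all (row, col) index pairs that are mutual best responses."""
    equilibria = []
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            row_ok = A[i, j] >= A[:, j].max()   # no profitable row deviation
            col_ok = B[i, j] >= B[i, :].max()   # no profitable column deviation
            if row_ok and col_ok:
                equilibria.append((i, j))
    return equilibria

print(pure_nash_equilibria(A, B))   # -> [(0, 0), (1, 1)] for this hypothetical game

Mixed-strategy equilibria, like the one P20(a) asks for, still have to be found by an indifference argument like the one sketched after P16.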

P21. Suppose that two players, Row and Column, play the following matrix game.

              L       C       R
     T       5, 5    0, 6    0, 0
Row  M       6, 0    2, 2    0, 0
     B       0, 0    0, 0    0, 0

(a) Find all of the game's pure-strategy Nash equilibria.

Now suppose that the players play this game twice in a row. They observe what each other did in the first stage before they decide what to do in the second stage. Each player's payoff is the (undiscounted) sum of his payoffs in the first and second stages.

(b) Draw the game tree for the two-stage game, clearly showing players' decisions and payoffs.
(c) Find a pure-strategy rollback (subgame-perfect) equilibrium in which the players' decisions in the second stage do not depend on their decisions in the first stage. Be sure to specify players' strategies clearly, remembering that a strategy must be a complete contingent plan for playing the game. Are there any such equilibria in which players do anything other than play one of the equilibria you identified in (a) in each stage?
(d) Now find a pure-strategy rollback (subgame-perfect) equilibrium in which the players play T, L in the first stage, and therefore do better than by repeating the best symmetric Nash equilibrium in the one-stage version of the game. (Hint: The players would like to play T, L in the first stage because it has high payoffs for both players, and this allows them to do better than by repeating the best symmetric Nash equilibrium in the one-stage version of the game. However, T, L is not an equilibrium in the first-stage game taken by itself. Find a way to make the players' second-stage decisions depend on their first-stage decisions that always gives an equilibrium in the second stage (as required by rollback) but gives the players an incentive to play T, L in the first stage.)
(e) Would your answers to part (d) change if the players could not observe what each other did in the first stage before they decide what to do in the second stage? Explain why or why not.
(f) Why is it possible to support a desirable but non-equilibrium outcome like T, L in the first stage of this two-stage game as part of a rollback equilibrium, but not the desirable but non-equilibrium outcome Cooperate, Cooperate in the first stage of a two-stage Prisoner's Dilemma?
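A generic way to organize the incentive check behind P21(d), stated with hypothetical symbols rather than the payoffs above: suppose second-stage play switches between two of the stage-game equilibria found in (a), one used as a reward and one as a punishment, depending on what happened in the first stage. Writing a player's second-stage payoffs in those two equilibria as u_reward and u_punish, and writing g for his largest one-shot gain from deviating from the intended first-stage play, the proposed strategies form a rollback equilibrium if, for each player,
\[
g \;\le\; u_{\text{reward}} - u_{\text{punish}} .
\]
Because the second-stage play is a stage-game equilibrium after every first-stage history, no further check is needed in the second stage.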

P22 (Chapter 10). In the following environments, a large number of identical players choose simultaneously between two pure strategies; they cannot randomize. In each case, graph the payoffs of the two strategies against the population frequency of the first strategy in a way that is consistent with the verbal description. Then use your graph to determine what pattern (or patterns) of behavior will emerge in the long run, and whether the pattern(s) that emerge(s) will be Pareto-efficient, in the sense of maximizing all players' average expected payoff. (An illustrative sketch of this kind of graph, with hypothetical payoffs, appears after P23 below.)

(a) Each person can either install a car alarm in his car or not. Car alarms are highly effective when only a few cars have them, but (because people ignore them when they hear them go off too often) they are ineffective when most of the cars have them.
(b) There is a wall running through the center of your city, left over from the Cold War. Each person can either try to tear down the wall or ignore it. Everyone hates the wall, but everyone knows that if only a few people try to tear it down the government will arrest them and send them to jail. However, everyone also knows that if more than a few people try to tear it down, the government is unlikely to punish them.
(c) Each person can either shirk (effort level 1) or work hard (effort level 2). Each wishes to minimize the distance between his own effort level and the average effort level in the population (in other words, his payoff is minus this distance).
(d) Answer part (c) again, but assume that each person wishes to minimize the difference between his own effort level and one-half the average effort level in the population.

P23 (Chapter 10). Suppose that the speed limit is 70 on the freeway, and that a large number N of drivers simultaneously and independently choose speeds from 70 to 100. Everyone prefers to go as fast as possible, other things equal, but the police are sure to ticket any driver whose speed is strictly faster than that of x% of the drivers, where x is a parameter such that 0 < x < 100. (Thus, only by driving exactly 70 can a driver be sure of not being ticketed.) Suppose further that each driver ignores his own influence on the percentage, and that the cost of being ticketed outweighs any benefit of going faster.

(a) Model this situation as a noncooperative game and analyze its set of pure-strategy Nash equilibria as far as possible. (Assume that x is a multiple of 1/N, that is, x = k/N, where k is an integer.)
(b) Does the set of Nash equilibria depend on x when 0 < x < 100?
(c) What is the set of pure-strategy Nash equilibria when the police don't ticket anyone? Explain.
(d) What is the set of pure-strategy Nash equilibria when the police ticket everyone who speeds? Explain.
(e) If the same drivers play this game repeatedly, observing the outcome after each play, how would you expect their speeds to change over time as they learn to predict each other's speeds? Explain intuitively or formally, whichever you prefer.
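Returning to P22's instruction to graph payoffs against population frequency: the Python sketch below shows the plotting mechanics for purely hypothetical payoff functions. They are assumptions chosen for illustration and are not the payoffs of parts (a)-(d), which you should construct from the verbal descriptions.

# Plot the payoff to each of two strategies against the population frequency f
# of the first strategy, for hypothetical payoff functions.

import numpy as np
import matplotlib.pyplot as plt

f = np.linspace(0, 1, 200)          # frequency of strategy 1 in the population
payoff_1 = 1 - f                    # hypothetical: strategy 1 loses value as it spreads
payoff_2 = np.full_like(f, 0.4)     # hypothetical: strategy 2 gives a constant payoff

plt.plot(f, payoff_1, label="strategy 1")
plt.plot(f, payoff_2, label="strategy 2")
plt.xlabel("population frequency of strategy 1")
plt.ylabel("payoff")
plt.legend()
plt.savefig("population_game.png")  # crossing points are candidate long-run rest points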