
Problem Set 3 (Game Theory)

Do five of nine.

1. Games in Strategic Form. Underline all best responses, then perform iterated deletion of strictly dominated strategies. In each case, do you get a unique prediction for the outcome of the game? (Player A chooses the row, player B the column.)

(a)
         L      R
  U   2, 3   4, 2
  D   1, 2   1, 1

U strictly dominates D for player A, and L strictly dominates R for player B. This leaves (U, L) as a Strict Dominant Strategy Equilibrium.

(b)
          L      M      R
  T   0, 10   2, 1   2, 2
  U    1, 1   0, 3   2, 2
  D    1, 1   2, 2   4, 1
  B    7, 4   1, 5   3, 3

First round of deletion: M strictly dominates R for player B, and B strictly dominates U and T for player A. Second round of deletion: M strictly dominates L for player B, and D strictly dominates B for player A. This leaves (D, M) as the Strict Dominant Strategy Equilibrium.

(c)
         L       C      R
  U   3, 5    1, 4   2, 2
  M   2, 4    3, 3   1, 3
  D   1, 1   1, 11   5, 0

First round: C strictly dominates R for player B, and M strictly dominates D for player A. Second round: L strictly dominates C for player B, and U strictly dominates M for player A. This leaves (U, L) as the Strict Dominant Strategy Equilibrium.

(d)
         L      C      O       R
  U   2, 2   1, 3   6, 2    1, 1
  T   4, 3   2, 2   5, 1    2, 3
  M   1, 9   1, 5   3, 2    8, 8
  D   5, 8   1, 7   2, 2   1, 10

C strictly dominates O for the column player. Once O is gone, T strictly dominates U for the row player. Then L strictly dominates C for the column player. Nothing else can be deleted. You can see that no strategy profile is a mutual best response (two underlines), so there is no pure-strategy Nash equilibrium. This means there isn't a weak dominant strategy or strict dominant strategy equilibrium, either.

2. Write the following games in strategic form. Find all pure-strategy Nash equilibria, if they exist.

(a) Two firms that make alcohol spend money trying to outdo one another in advertising. If they both advertise, they share industry profits of 10,000 equally, but if one firm advertises and the other doesn't, the advertising firm gets 7,000 and the non-advertiser gets 3,000. If neither advertises, there are fewer sales but both save on the cost of the ad campaigns and make 6,000 each.

                           Firm B
                     Advertise   Don't
  Firm A  Advertise      5, 5    7, 3
          Don't          3, 7    6, 6

(Payoffs are in thousands.) The Nash equilibrium is (Advertise, Advertise), since it is a mutual best response: if either player changes her behavior, she gets 3 instead of 5, making her worse off. Therefore, no one has an incentive to change.
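Iterated deletion of strictly dominated strategies is mechanical enough to automate, which makes for a useful sanity check on games like 1(a) and 1(d). A minimal Python sketch (the function name and matrix encoding are my own, not part of the problem set):

```python
def iterated_strict_dominance(row_payoffs, col_payoffs):
    """Repeatedly delete strictly dominated pure strategies.

    row_payoffs[r][c] / col_payoffs[r][c]: payoffs when the row player
    uses strategy r and the column player uses strategy c.
    Returns the sets of surviving (row, column) strategy indices.
    """
    rows = set(range(len(row_payoffs)))
    cols = set(range(len(row_payoffs[0])))
    changed = True
    while changed:
        changed = False
        # A row strategy goes if some other surviving row beats it
        # against every surviving column.
        for r in list(rows):
            if any(all(row_payoffs[r2][c] > row_payoffs[r][c] for c in cols)
                   for r2 in rows if r2 != r):
                rows.discard(r)
                changed = True
        # Symmetrically for column strategies.
        for c in list(cols):
            if any(all(col_payoffs[r][c2] > col_payoffs[r][c] for r in rows)
                   for c2 in cols if c2 != c):
                cols.discard(c)
                changed = True
    return rows, cols

# Game 1(a): rows U, D -> 0, 1; columns L, R -> 0, 1.
row_a = [[2, 4], [1, 1]]   # player A's payoffs
col_a = [[3, 2], [2, 1]]   # player B's payoffs
print(iterated_strict_dominance(row_a, col_a))  # ({0}, {0}), i.e. (U, L)
```

Running the same routine on game 1(d) leaves rows {T, M, D} and columns {L, R}, matching the "nothing else can be deleted" conclusion.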

(b) Two firms work closely together and have to decide whether to buy Mac or Windows computers. If they both buy the same platform, they coordinate well together and earn profits of 3 each. If they buy different platforms, they have trouble coordinating and get a payoff of 1 each.

                        Firm B
                   Windows    Mac
  Firm A  Windows     3, 3   1, 1
          Mac         1, 1   3, 3

There are two pure-strategy Nash equilibria: (Windows, Windows) and (Mac, Mac). Check: no player can change her behavior and get a higher payoff.

(c) You and an opponent both have a penny. Secretly, you choose Heads or Tails, then simultaneously reveal the strategies you picked. If the coins match (both heads or both tails), you get both pennies. If they are different (one head and one tail), your opponent gets both. (This game is called matching pennies, and it is a simpler version of the rock-paper-scissors-type games that we'll use frequently.)

               Them
               H        T
  You  H   1, -1    -1, 1
       T   -1, 1    1, -1

There is NO pure-strategy Nash equilibrium. However, you should now see there is a mixed-strategy Nash equilibrium: play H and T each with probability 1/2.

3. Consider the following three-player simultaneous-move game: player A chooses a strategic form from {1, 2}, player B chooses a row, and player C chooses a column. A gets the first number as a payoff, B gets the second, and C gets the third.

  (1)        L         R
     U   2, 2, 3   1, 1, 2
     D   1, 3, 4   1, 4, 3

  (2)        L         R
     U   1, 1, 3   2, 1, 2
     D  -2, 5, 2   2, 2, 1

Just to recap, player A chooses either the left or the right strategic form above, not knowing what players B and C are doing; player B chooses U or D, not knowing what players A and C are doing; and player C chooses L or R, not knowing what players A and B are doing. Find the dominant strategy equilibrium of the three-player game. (Explain all your reasoning.)

It's a simultaneous game, so the row and column players act as usual, but you also have a third player deciding which of (1) and (2) is used. If A picks (1), the row and column players face the left strategic form. For instance, the strategy profile (1, U, L) leads to payoffs (2, 2, 3), but (2, U, L) leads to payoffs (1, 1, 3). Despite this, you can solve by eliminating strictly dominated strategies. D strictly dominates U for the row player (B) in both strategic forms (1) and (2), so you can cross U out in both. Then L strictly dominates R for the column player in both strategic forms. Finally, player A faces (1, 3, 4) from choosing (1) and (-2, 5, 2) from choosing (2), so (1) strictly dominates (2) for player A. So the strict dominant strategy equilibrium of the game is (1, D, L).

4. Two firms can choose any whole-dollar price from 0 to 4 to charge for their product, for which they have no production costs. The overall demand for the good is D(p) = 4 - p; however, since there are two firms, their prices determine what share of the market each firm gets. If the firms choose the same price, they split the market half-and-half. If one firm charges a strictly higher price, it gets nothing, while the firm that charged less serves the whole market. If p_i is firm i's price and p_j is its opponent's price, then profits for firm i are

  π_i(p_i, p_j) = (4 - p_i) p_i       if p_i < p_j
                  (4 - p_i) p_i / 2   if p_i = p_j
                  0                   if p_i > p_j

Write out the strategic form and solve by iterated elimination of weakly dominated strategies. Find all Nash equilibria. Were any Nash equilibria removed by elimination of weakly dominated strategies? Is there an economic argument to focus on any of the eliminated strategy profiles that you can think of?

If we fill out the strategic form, we get

         0          1        2          3      4
  0   0, 0       0, 0     0, 0       0, 0   0, 0
  1   0, 0   3/2, 3/2     3, 0       3, 0   3, 0
  2   0, 0       0, 3     2, 2       4, 0   4, 0
  3   0, 0       0, 3     0, 4   3/2, 3/2   3, 0
  4   0, 0       0, 3     0, 4       0, 3   0, 0

If we do IDWDS, the following happens: 0, 3, and 4 are weakly dominated for both players by 1 and 2. That leaves us with a strategy set of {1, 2} for both players. But then 1 weakly dominates 2 for both players, so the outcome of IDWDS is (1, 1): each player uses a price of 1 in equilibrium. Note there are two Nash equilibria: (0, 0) and (1, 1).
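The strategic form and the equilibria in problem 4 can be checked by brute force; a quick sketch (the function name is mine, assuming the profit function π_i above, with exact fractions to avoid float issues at the ties):

```python
from fractions import Fraction

def profit(p_i, p_j):
    # pi_i(p_i, p_j): demand 4 - p, zero costs, the lower price takes the market.
    if p_i < p_j:
        return Fraction((4 - p_i) * p_i)
    if p_i == p_j:
        return Fraction((4 - p_i) * p_i, 2)
    return Fraction(0)

prices = range(5)

# A price pair is a pure-strategy Nash equilibrium if each price is a
# best response to the other.
nash = [(p1, p2)
        for p1 in prices for p2 in prices
        if profit(p1, p2) == max(profit(q, p2) for q in prices)
        and profit(p2, p1) == max(profit(q, p1) for q in prices)]
print(nash)  # [(0, 0), (1, 1)]
```

The enumeration confirms that (0, 0) and (1, 1) are the only pure-strategy equilibria of the pricing game.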
You might say the (0, 0) equilibrium is silly, but remember that marginal costs here are 0: the model is really saying that charging marginal cost, or marginal cost plus the smallest unit of currency, are the equilibria of the game. This is pretty surprising, since it says that two competing firms will behave very similarly to a perfectly competitive market.

The previous paragraph gives one way of thinking about the game: as long as the players are pricing above marginal cost, the other player has an incentive to undercut. IDWDS leads to the (1, 1) equilibrium, making it a good candidate solution, because it has the appealing feature that the players can reason their way there separately by using strategy-dominance arguments. However, as the previous paragraph said, it's fair to think of the (0, 0) equilibrium as one in which the firms are pricing at marginal cost. (Like most things in game theory, there are different arguments that support different predictions.)

5. There are two buyers, A and B, who both have value 3 for a good. They can bid either 1, 2, or 3.

i. Suppose the seller auctions the good using a first-price auction, where the high bidder wins the good and pays his bid; if there is a tie, the auctioneer flips a coin and the two bidders each get the good with equal likelihood. Then the players' payoffs are given by

  u_i(b_i, b_j) = 3 - b_i       if b_i > b_j
                  (3 - b_i)/2   if b_i = b_j
                  0             if b_i < b_j

Write out the strategic form and solve by iterated deletion of weakly dominated strategies. Find all Nash equilibria.

In the FPA, we get

        1           2      3
  1   1, 1       0, 1   0, 0
  2   1, 0   1/2, 1/2   0, 0
  3   0, 0       0, 0   0, 0

1 weakly dominates 3 for both players, and 2 weakly dominates 1 for both players, so the outcome is both bidding 2 and getting payoffs of 1/2. There are Nash equilibria at (1, 1), (2, 2), and (3, 3).

ii. Suppose the seller uses a second-price auction to sell the good, where the high bidder wins the good but pays the second-highest bid; if there is a tie, the auctioneer flips a coin and the two bidders each get the good with equal likelihood. Then the players' payoffs are given by

  u_i(b_i, b_j) = 3 - b_j       if b_i > b_j
                  (3 - b_j)/2   if b_i = b_j
                  0             if b_i < b_j

Write out the strategic form and solve by iterated deletion of weakly dominated strategies. Find all Nash equilibria.
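Both auction tables in problem 5 can be generated from the payoff formulas, and the weak-dominance claims checked mechanically. A small sketch (the helper names are mine, assuming value 3 and bids {1, 2, 3}):

```python
from fractions import Fraction

V = 3            # common value for the good
BIDS = (1, 2, 3)

def fpa(b_i, b_j):
    # First-price auction payoff to bidder i (ties split by coin flip).
    if b_i > b_j:
        return Fraction(V - b_i)
    if b_i == b_j:
        return Fraction(V - b_i, 2)
    return Fraction(0)

def spa(b_i, b_j):
    # Second-price auction: the winner pays the loser's bid.
    if b_i > b_j:
        return Fraction(V - b_j)
    if b_i == b_j:
        return Fraction(V - b_j, 2)
    return Fraction(0)

def weakly_dominates(u, b, b2):
    """Does bid b weakly dominate bid b2 under payoff function u?"""
    at_least = all(u(b, bj) >= u(b2, bj) for bj in BIDS)
    strictly = any(u(b, bj) > u(b2, bj) for bj in BIDS)
    return at_least and strictly

# SPA: bidding your value (3) weakly dominates the lower bids.
print([weakly_dominates(spa, 3, b) for b in (1, 2)])  # [True, True]
# FPA: bidding 2 weakly dominates both 1 and 3.
print([weakly_dominates(fpa, 2, b) for b in (1, 3)])  # [True, True]
```

This is the discrete version of the standard result that bidding one's value is weakly dominant in a second-price auction, while first-price bidders shade their bids below value.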

In the SPA, we get

        1           2      3
  1   1, 1       0, 2   0, 2
  2   2, 0   1/2, 1/2   0, 1
  3   2, 0       1, 0   0, 0

3 weakly dominates 2 and 1 for both players, so each bidder has a weakly dominant strategy: bid 3, his value. The weakly dominant strategy equilibrium is (3, 3). (There are also asymmetric Nash equilibria, such as (3, 1), where the low bidder cannot gain by raising his bid.)

iii. Which auction format would you recommend to the seller? Does one raise more profits than the other? Explain your answer.

I would recommend the SPA. There, each bidder has a weakly dominant strategy to bid 3, yielding revenue of 3. In the FPA, it depends on which equilibrium the players adopt: the seller could get revenue of 1, 2, or 3, and the outcome of iterated deletion of weakly dominated strategies is the (2, 2) profile, yielding revenue of just 2. So the SPA seems like the better auction format, even though there is a chance the FPA yields the same amount, if both bidders bid 3.

6. Our assumptions of intelligent players who have perfect information are pretty strong. It's easy to imagine players failing to complete a long string of iterated deletions of dominated strategies, or not being savvy enough to complete this process in the first place. One suggestion for modeling learning in games is called best-response dynamics, and it works like this: the game is repeated a large number of rounds, and in each new round players choose a best response to their opponent's last move. So consider the game below. If the row player used b this round, the column player's best response would have been z, for a payoff of 3. If the column player's strategy is decided by best-response dynamics, then, we'd predict he should play z in the next round.

         x         y      z
  a   3, 3      1, 2   1, 0
  b   0, 1      2, 2   3, 3
  c   1, 2    -1, -1   4, 4

i. Find all Nash equilibria.
ii. Show that if you start at (b, y), play eventually reaches (c, z) and stays there permanently.
iii. Show that if you start at (a, z), the game never reaches a Nash equilibrium.
iv. Is there any starting point that converges to the Nash equilibrium at (a, x)? Why is this a fragile Nash equilibrium under best-response dynamics?
v. Show that if either player knew the other was using best-response dynamics, that player could guide the other player to the Nash equilibrium at (c, z).
vi. Is this a good model of learning or not? Explain your answer (particularly taking your answer to part (v) into account).
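The dynamics just described are easy to simulate, which is a handy way to check parts ii and iii; a minimal sketch (the encoding and function names are mine):

```python
# Best-response dynamics for the 3x3 game in problem 6.
# Strategies: rows a, b, c -> 0, 1, 2; columns x, y, z -> 0, 1, 2.
# U[r][c] = (row payoff, column payoff).
U = [[(3, 3), (1, 2), (1, 0)],
     [(0, 1), (2, 2), (3, 3)],
     [(1, 2), (-1, -1), (4, 4)]]

def step(r, c):
    # Each player best-responds to the opponent's last strategy.
    new_r = max(range(3), key=lambda i: U[i][c][0])
    new_c = max(range(3), key=lambda j: U[r][j][1])
    return new_r, new_c

def run(r, c, rounds):
    path = [(r, c)]
    for _ in range(rounds):
        r, c = step(r, c)
        path.append((r, c))
    return path

print(run(1, 1, 5))  # from (b, y): reaches (2, 2) = (c, z) and stays
print(run(0, 2, 5))  # from (a, z): alternates (c, x) <-> (a, z) forever
```

Tracing a few starting points this way reproduces the transition map used in the solution below.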

i, ii. Find all Nash equilibria; show that if you start at (b, y), play converges to (c, z). Is this a Nash equilibrium?

The Nash equilibria are (a, x) and (c, z): in each, neither player can gain by switching. Here is a map of how the game moves under best-response dynamics:

  (a, x) -> (a, x)    (b, x) -> (a, z)    (c, x) -> (a, z)
  (a, y) -> (b, x)    (b, y) -> (b, z)    (c, y) -> (b, z)
  (a, z) -> (c, x)    (b, z) -> (c, z)    (c, z) -> (c, z)

The best thing to do is draw arrows on the strategic form showing the above information; this lets you visualize the dynamics. Starting from (b, y), the map gives (b, y) -> (b, z) -> (c, z), and (c, z) maps to itself, so play reaches (c, z) and stays there permanently. Yes, (c, z) is a Nash equilibrium.

iii. Show that if you start at (a, z), the game never converges to an equilibrium.

From the map above, (a, z) -> (c, x) and (c, x) -> (a, z), so the players bounce back and forth between those two profiles forever.

iv. Is there any starting point that converges to the Nash equilibrium at (a, x)? Why is this a fragile Nash equilibrium under best-response dynamics?

Only (a, x) itself. If you look at the map, no profile leads to (a, x) except (a, x). This means that unless the game starts there, the players will never find it, and if either player makes a mistake and does something else, the game will never return to this outcome.

Imagine that the dynamics are like a ball rolling on a bowl. If the bowl is right-side up and you drop the ball in, it rolls to the bottom and stays there; if you nudge the ball, it returns to the bottom. If you flip the bowl upside down, you can balance the ball exactly on top, but if you nudge it, it rolls away. That's the difference between the Nash equilibrium at (c, z), which attracts nearby play, and the one at (a, x), which does not.

v. Show that if either player knew the other was using best-response dynamics, that player could guide the other player to the Nash equilibrium at (c, z).

If the column player uses z twice in a row, the row player will use c on the second round, and they can stay at the good Nash equilibrium forever. Likewise, if the row player uses c twice in a row, the column player will use z on the second round, with the same result. This shows that if your opponent is using best-response dynamics, you probably don't have an incentive to use best-response dynamics yourself.

vi. Is this a good model of learning or not? Explain your answer (particularly taking your answer to part (v) into account).

Yes: it incorporates the feedback that players receive through their payoffs into decisions about future play. Standard game theory assumes that players know which equilibrium they are using, or can reason their way to a unique prediction, which isn't always a good assumption. Here the players update over time, and as long as we don't end up in the cycle, they converge fairly quickly to a Nash equilibrium anyway.

No: in parts iii, iv, and v, we see that players can end up in unrealistic situations as a result of best-response dynamics, where real players would probably abandon that approach and try something else. Instead of a forward-looking, deductive approach, it relies on a backward-looking, inductive approach that is not always going to predict future play accurately. Ideally, we'd like something in the middle, but this seems too simplistic.

7. A server (the row player) and a customer (the column player) are at a restaurant:

                        Tip   Don't
  Good Service         2, 2   -1, 3
  Bad Service         3, -1    0, 0

Suppose both parties discount future payoffs at the same rate, δ.

i. If the game is played once, what is the outcome?

Bad service and no tip is the unique Nash equilibrium.

ii. If the game is played twice, what is the outcome?

Bad service and no tip is the unique Nash equilibrium in the second period. Given that play in the second period is already decided, the first period is effectively a one-shot game, so they should play the Nash equilibrium there too.

iii. If the game is played a large but finite number of times, T, what is the outcome?

In period T, they should play bad service/no tip, since there is no future and that's the unique Nash equilibrium. In period T - 1, their behavior doesn't affect the future, so it's basically a one-shot version of the game, and they should again play the unique Nash equilibrium. This argument works for every earlier period, so the players adopt bad service/no tip in every period.

iv. If the game is played an infinite number of times, use the Nash Threats Folk Theorem to show that if δ is sufficiently close to one, the players use (Good Service, Tip) in every period.

If the customer finds it profitable to tip today, this will be true tomorrow and every day after, yielding a stream of payoffs of

  2 + 2δ + 2δ^2 + ... = 2/(1 - δ),

while cheating today and reverting to the Nash equilibrium forever yields

  3 + 0δ + 0δ^2 + ... = 3.

So cooperating is the more profitable strategy if

  2/(1 - δ) ≥ 3,  i.e.  δ ≥ 1/3.

For the waiter, the two calculations are identical, 2/(1 - δ) versus 3, yielding the same threshold δ ≥ 1/3.

8. There are two profit-maximizing firms who compete by selecting quantities q_1 and q_2 in a Cournot market. The price is 1 - q_1 - q_2, and they have no costs. Both firms discount future payoffs at rate δ.

i. What are the equilibrium quantities and profits of the one-shot game?

The equilibrium quantities are 1/3 for each firm, and profits are 1/9.

ii. If the game is played a large but finite number of times, what is the outcome?

In the final period, it is just a one-shot Cournot game, so the firms play 1/3 and get payoffs of 1/9 ≈ .111. In period T - 1, their actions do not affect the future (which is already decided), so they play the one-shot solution. This argument applies in every earlier period, so they play the one-shot solution in every period.

iii. If the game is played an infinite number of times, use the Nash Threats Folk Theorem to show that if δ is sufficiently close to one, the firms can each play half the monopoly quantity in every period.

A monopolist would select Q = 1/2, so each of the colluding Cournot duopolists should select q_m = 1/4, yielding each of them profits of (1 - 1/4 - 1/4)(1/4) = 1/8 = .125. The optimal deviation solves

  max_{q_d} (1 - 1/4 - q_d) q_d  =>  3/4 - 2 q_d = 0  =>  q_d = 3/8,

yielding profits of 9/64 ≈ .141. Then cooperating yields

  1/8 + δ(1/8) + δ^2(1/8) + ... = (1/8)/(1 - δ),

while deviating yields

  9/64 + δ(1/9) + δ^2(1/9) + ... = 9/64 + δ/(9(1 - δ)).

So cooperating is better than deviating if

  (1/8)/(1 - δ) ≥ 9/64 + δ/(9(1 - δ)),

or, multiplying through by (1 - δ) and rearranging,

  δ ≥ 9/17 ≈ .5294.

So if δ is a little bigger than 1/2, the two firms can collude and behave jointly like a monopolist.

9. Suppose there is a couple. They are forgetful, and typically forget their plans and their phones on their way to a regular movie night. They have to decide which ticket to buy, not knowing which ticket the other person is buying:

             Action   Romance
  Action       2, 1      0, 0
  Romance      0, 0      1, 2

i. If the game is played once, what are the pure-strategy Nash equilibria of the game?

(Action, Action) and (Romance, Romance).

ii. Show there is a mixed-strategy Nash equilibrium of the game, where the row player goes to an action movie with probability 2/3 and a romance with probability 1/3, while the column player goes to an action movie with probability 1/3 and a romance with probability 2/3. What are the players' payoffs? What is the probability they go to the same movie?

For the row player to randomize, row must be indifferent between Action and Romance. To make row indifferent, column must choose the probability of Action, σ_C, so that the expected payoff to row of Action equals the expected payoff of Romance:

  2σ_C + 0(1 - σ_C) = 0σ_C + 1(1 - σ_C),

or σ_C = 1/3. For the column player to randomize, column must be indifferent between Action and Romance. To make column indifferent, row must choose the probability of Action, σ_R, so that the expected payoff to column of Action equals the expected payoff of Romance:

  1σ_R + 0(1 - σ_R) = 0σ_R + 2(1 - σ_R),

or σ_R = 2/3. The players' expected payoffs are each

  (2/3)(1/3)(2) + (1/3)(2/3)(1) + (1/3)(1/3)(0) + (2/3)(2/3)(0) = 2/3,

and the probability they go to the same movie is

  (2/3)(1/3) + (1/3)(2/3) = 4/9,

which is strictly less than 1/2, actually. Sad.

iii. Now suppose that they alternate: in odd periods, they go to a romance movie, and in even periods, an action movie. Use the mixed-strategy Nash equilibrium from ii as the threat in the Nash Threats Folk Theorem, and solve for the lowest δ for which the couple can cooperate.

If they alternate, the two players get streams of payoffs of

  2 + δ + 2δ^2 + δ^3 + ... = (2 + δ)(1 + δ^2 + δ^4 + ...) = (2 + δ)/(1 - δ^2)

and

  1 + 2δ + δ^2 + 2δ^3 + ... = (1 + 2δ)(1 + δ^2 + δ^4 + ...) = (1 + 2δ)/(1 - δ^2),

while deviating yields a payoff of 0 today followed by the mixed-equilibrium payoff forever:

  0 + δ(2/3) + δ^2(2/3) + ... = (2/3) δ/(1 - δ).

Since 2 > 1 > 2/3, every payoff along the alternating path beats the corresponding payoff from deviating, so it is never profitable to deviate.

iv. True or false, and explain: obviously mutually beneficial cooperation is easy to sustain.

True: here, if you deviate from the alternating pattern, you get a much worse payoff even in the periods when you attend the other party's preferred movie. That's why the minimum δ is actually negative: even squirrels and potatoes would want to cooperate here.
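The indifference conditions in 9.ii can be verified numerically; a quick sketch (exact arithmetic via fractions; the variable and function names are mine):

```python
from fractions import Fraction

# Battle-of-the-sexes payoffs: (row, column) for each (row, column) choice,
# where 'A' is Action and 'R' is Romance.
U = {('A', 'A'): (2, 1), ('A', 'R'): (0, 0),
     ('R', 'A'): (0, 0), ('R', 'R'): (1, 2)}

# Candidate mixed equilibrium: column plays Action with probability 1/3,
# row with probability 2/3 (from the indifference conditions above).
s_C = Fraction(1, 3)
s_R = Fraction(2, 3)

def row_payoff(row_action_prob, col_action_prob):
    # Expected payoff to the row player given both Action probabilities.
    pA, pR = row_action_prob, 1 - row_action_prob
    qA, qR = col_action_prob, 1 - col_action_prob
    return (pA * qA * U[('A', 'A')][0] + pA * qR * U[('A', 'R')][0]
            + pR * qA * U[('R', 'A')][0] + pR * qR * U[('R', 'R')][0])

# Indifference check: Action and Romance give row the same expected payoff.
assert row_payoff(1, s_C) == row_payoff(0, s_C)

same_movie = s_R * s_C + (1 - s_R) * (1 - s_C)
print(row_payoff(s_R, s_C), same_movie)  # 2/3 4/9
```

The check confirms the equilibrium payoff of 2/3 per player and the 4/9 coordination probability.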