
MA 327/ECO 327 Introduction to Game Theory
Fall 2017 Notes

Contents

1  Wednesday, August 23
2  Friday, August 25
3  Monday, August 28
4  Wednesday, August 30
5  Friday, September 1
6  Wednesday, September 6
7  Friday, September 8
8  Monday, September 11
9  Wednesday, September 13
10 Friday, September 15
11 Monday, September 18
12 Wednesday, September 20
13 Friday, September 22
14 Monday, September 25
15 Wednesday, September 27
16 Friday, September 29
17 Monday, October 2
18 Wednesday, October 4
19 Friday, October 6
20 Monday, October 9
21 Wednesday, October 11
22 Friday, October 13
23 Monday, October 16
24 Wednesday, October 18
25 Friday, October 20
26 Monday, October 23
27 Wednesday, October 25
28 Friday, October 27
29 Monday, October 30
30 Wednesday, November 1
31 Friday, November 3
32 Monday, November 6
33 Wednesday, November 8
34 Friday, November 10

1 Wednesday, August 23

1. Played the game of Fifteen and eventually figured out that it is isomorphic to Tic-Tac-Toe: arrange the numbers 1 through 9 as in a magic square (see the sketch below). Discussed some of the characteristics of this game.
2. Played the game of Hex (also called Nash). Stated that this game cannot end in a tie, that the first player is known to have a winning strategy, but that no one knows what this strategy is for large boards.
3. Reviewed the syllabus and the scope of the course.
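
A small illustration of the Fifteen/Tic-Tac-Toe isomorphism from item 1. This is my own Python sketch (not part of the lecture): place 1 through 9 in a magic square and check that the eight Tic-Tac-Toe lines are exactly the eight triples of numbers summing to 15.

```python
from itertools import combinations

# One 3x3 magic square on 1..9 (every row, column, and diagonal sums to 15).
MAGIC = [[2, 7, 6],
         [9, 5, 1],
         [4, 3, 8]]

def lines(board):
    """All rows, columns, and diagonals of a 3x3 board."""
    rows = [board[i] for i in range(3)]
    cols = [[board[i][j] for i in range(3)] for j in range(3)]
    diags = [[board[i][i] for i in range(3)],
             [board[i][2 - i] for i in range(3)]]
    return rows + cols + diags

# Every Tic-Tac-Toe line sums to 15 ...
assert all(sum(line) == 15 for line in lines(MAGIC))
# ... and the 3-subsets of 1..9 summing to 15 are exactly these eight lines,
# so claiming a line in Tic-Tac-Toe is the same as collecting three numbers
# that sum to 15 in the game of Fifteen.
triples = {frozenset(t) for t in combinations(range(1, 10), 3) if sum(t) == 15}
assert triples == {frozenset(line) for line in lines(MAGIC)}
print("8 winning lines = 8 triples summing to 15")
```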

2 Friday, August 25

1. Section 1.1.
2. Examples of combinatorial games: Pick-Up-Bricks, Chop, Chomp, Tic.
3. Definition of a combinatorial game: 2 players, called L and R; a set of positions; a move rule indicating what positions each player can move to from each position; and a win rule specifying the terminal positions and an outcome (+, -, or 0) for each terminal position. To play, specify a starting position and a starting player and alternate moves.
4. Normal play: the player who cannot move loses. Many combinatorial games are of this type.
5. No random elements. Full information: no hidden information.
6. Game of Tic.
7. Game tree: a representation of the game. Procedure: Build a Game Tree.
8. W-L-D game tree: the outcomes of terminal positions are given, and each branch node is labeled R or L. Other game specifics are superfluous; one could just play the game by moving directly on the W-L-D tree. A tree could have only one node (a trivial tree).
9. Require that the game trees are finite.
10. Strategy: specify a move choice at each node. The strategy might not be good, though. A winning strategy for R guarantees that R will win when following it. A drawing strategy for R guarantees that R will win or draw when following it (i.e., not lose). Note that if a strategy is followed, play may never reach certain nodes.
11. Procedure: Working Backwards. Label each node with +, -, or 0. Such a labeling determines who has a winning strategy, or whether both players have drawing strategies (see the sketch below).
12. Comments on my program for Tic-Tac-Toe; anyone want to code this?
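
A small sketch of the Working Backwards procedure from item 11 (my own Python, not the course program). Labels follow the convention above: "+" means L wins, "-" means R wins, "0" means a draw; the tree encoding is made up for this example.

```python
def label(node):
    """Backed-up label of a W-L-D game-tree node.

    A node is either a terminal outcome ("+", "-", "0") or a pair
    (player, children), where player is "L" or "R" and children is a
    nonempty list of nodes.
    """
    if node in ("+", "-", "0"):
        return node
    player, children = node
    child_labels = [label(child) for child in children]
    best, worst = ("+", "-") if player == "L" else ("-", "+")
    if best in child_labels:      # the mover can force a win
        return best
    if "0" in child_labels:       # otherwise settle for a draw if possible
        return "0"
    return worst                  # every move loses for the mover

# Example: R moves first; both of R's options end up as L-wins, so the root is "+".
tree = ("R", [("L", ["+", "-"]), "+"])
print(label(tree))  # "+"
```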

3 Monday, August 28

1. Played the game of Gale. We will see why this game has a winning strategy for the first player.
2. Section 1.2.
3. Zermelo's Theorem. Every W-L-D game tree is one of three types.
   +: L has a winning strategy; i.e., by following this strategy, L will win no matter what R does.
   -: R has a winning strategy; i.e., by following this strategy, R will win no matter what L does.
   0: Both players have drawing strategies; i.e., by following such a strategy, L will not lose no matter what R does, and R will not lose no matter what L does.
4. Proofs by induction. Induction : Proof :: Recursion : Programming. You must prove the simplest case (base case), and you must show you can prove any complicated case from knowing the truth for less complicated cases (inductive step).
5. Proof of Zermelo's Theorem. We will label each node with a symbol and a choice of move, starting at the terminal nodes. Base case: one node. Inductive step: R is faced with possible moves (the case for L is similar).
   If there is at least one move to a - subtree, the label is -, and the move is to this subtree.
   If all moves are to + subtrees, the label is +, and the move is to any subtree.
   If there are no moves to a - subtree, but at least one move to a 0 subtree, the label is 0, and the move is to this subtree.
6. Corollary. Working backwards produces the correct label on the root node.
7. Section 1.3.
8. Can often discuss strategies (choices of moves) without drawing the entire game tree.
9. Symmetry. Proposition 1.12. Consider an m × n position in Chop. If n = m, the second player has a winning strategy. If n ≠ m, the first player has a winning strategy. The winning strategy is to always move to a square position.
10. Simultaneous chess. By mirroring moves, you can play two games of chess with two different people (one as White, one as Black) and win at least one of them or else draw in both of them.
11. Proposition 1.13. Consider a Pick-up-Bricks position of n bricks. If three divides n, the second player has a winning strategy. Otherwise the first player has a winning strategy. The winning strategy is to always move to a position that is a multiple of three (see the sketch below). This can be viewed as a pairing strategy. If the allowable move is to take one, two, or three bricks, then the winning strategy is to always move to a position that is a multiple of four.
12. Strategy stealing. Proposition 1.15. For every rectangular position in Chomp except 1 × 1, the first player has a winning strategy.
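
A small computational check of Proposition 1.13 (my own sketch, assuming the standard Pick-up-Bricks rule from the notes that a move removes one or two bricks and the last player to move wins).

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def is_P(n):
    """True if the position with n bricks is a P-position (the player to move
    loses), i.e. every available move leads to an N-position."""
    return all(not is_P(n - take) for take in (1, 2) if take <= n)

# The P-positions are exactly the multiples of three, as claimed.
assert [n for n in range(20) if is_P(n)] == [0, 3, 6, 9, 12, 15, 18]

def winning_move(n):
    """Number of bricks to take so as to leave a P-position, or None if n is
    already a multiple of three (in which case we are losing)."""
    for take in (1, 2):
        if take <= n and is_P(n - take):
            return take
    return None

print(winning_move(10))  # take 1 brick, leaving 9, a multiple of three
```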

4 Wednesday, August 30

1. Played 4D-Tic-Tac-Toe. It turns out that the first player has a winning strategy for 3D 4 × 4 × 4 Tic-Tac-Toe.
2. Strategy stealing. Proposition 1.15. For every rectangular position in Chomp except 1 × 1, the first player has a winning strategy.
3. Proposition 1.16. The first player has a winning strategy in Hex. However, no one has a description of this strategy for all sizes of gameboards.
4. It turns out that you cannot tie in Gale, so a similar argument shows the first player has a winning strategy in Gale. An explicit pairing strategy exists.
5. The second player has no winning strategy in Four-Dimensional Tic-Tac-Toe.
6. Discussed some of Homework #1. I decided to collect this on Friday.

5 Friday, September 1

1. Section 2.1.
2. See handout LRNP.
3. Normal-play combinatorial games. The winner is the last player to move. There are no ties. Examples: Pick-up-Bricks, Chop, Chomp.
4. Cut-Cake. Rectangular pieces of cake with horizontal and vertical lines marked. Louise can make vertical cuts; Richard can make horizontal cuts. This is an example of a partizan game. Games in which both players have the same options are called impartial games.
5. Representations of normal-play games. For each position, indicate which positions L can move to and which positions R can move to. Each position can thus be equated to an ordered pair of sets of positions. Examples with Cut-Cake.
6. Types of positions. Depending upon who starts, either L or R will have a winning strategy, by Zermelo's Theorem. This leads to the following classification.
   Type L: Louise has a winning strategy whoever goes first.
   Type R: Richard has a winning strategy whoever goes first.
   Type N: the next player to move has a winning strategy.
   Type P: the second (or previous) player has a winning strategy.
   Examples with Pick-Up-Bricks and Cut-Cake. We were unable to obtain any Cut-Cake positions of type N; are there any?
7. Determining type. Proposition 2.3. If γ = {α_1, ..., α_m | β_1, ..., β_n}, the type of γ is given by the following table.

                              some β_j is type R or P    all β_j are type L or N
   some α_i is type L or P               N                          L
   all α_i are type R or N               R                          P

8. Thus one can determine the types of positions by working up from simpler positions (see the sketch below). Example: 2 × 3 Cut-Cake.
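
A small recursive implementation of Proposition 2.3 (my own sketch, not code from the book). A position is represented as a pair (left_options, right_options); a position with no options for either player has type P under normal play, since the player to move loses.

```python
def game_type(position):
    """Type (L, R, N, or P) of a normal-play position, computed recursively
    from the types of its options via Proposition 2.3."""
    left_options, right_options = position
    left_good = any(game_type(a) in ("L", "P") for a in left_options)
    right_good = any(game_type(b) in ("R", "P") for b in right_options)
    if left_good and right_good:
        return "N"
    if left_good:
        return "L"
    if right_good:
        return "R"
    return "P"

terminal = ((), ())                    # no moves at all: type P
gamma = ((terminal,), (terminal,))     # each player can move to the terminal position
# Whoever moves first leaves the opponent with no move, so gamma is type N.
print(game_type(gamma))  # "N"
```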

6 Wednesday, September 6

1. Played Three-Pile Pick-up-Bricks and determined the types of some positions.
2. Posed an extra credit problem: explain why Cut-Cake has no N positions.
3. Reviewed Proposition 2.3 and why it is true.
4. Worked on the types of some particular Cut-Cake positions, working up from simpler positions.

7 Friday, September 8

1. Played Sprouts. See https://en.wikipedia.org/wiki/sprouts_(game).
2. Comments on Homework #1 solutions. Solutions are posted in Files in Canvas.
3. Section 2.2.
4. Sums of games. If α and β are positions in normal-play games, define α + β to be a new position consisting of the components α and β. To move in α + β, a player chooses one of the components and makes a valid move in that component (see the sketch at the end of this section).
5. Examples.
6. The type of a sum.
7. Proposition 2.6. If β is type P, then α and α + β are the same type. This is an example of a determinate sum. Proof:
   Case 1: L + P. L has the following winning strategy. If L goes first, make a winning-strategy move in α; thereafter, respond with the appropriate winning-strategy move in whichever component R chooses to move in. If R goes first, L should then and thereafter respond with the appropriate winning-strategy move in whichever component R chooses to move in.
   Case 2: R + P. R has the following winning strategy: essentially the same argument, but with the roles of R and L reversed.
   Case 3: N + P. The first player has the following winning strategy: make a winning-strategy move in α. Thereafter, respond with the appropriate winning-strategy move in whichever component the opponent chooses to move in.
   Case 4: P + P. The second player has the following winning strategy: after the first player moves, then and thereafter respond with the appropriate winning-strategy move in whichever component the opponent chooses to move in.
8. Proposition 2.7. If α and β are both type L, then α + β is type L. Similarly, if α and β are both type R, then α + β is type R. These are also examples of determinate sums.
9. The types of other sums are ambiguous; i.e., they are indeterminate sums.
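
A small sketch (my own, reusing the position-as-pair encoding from the previous section's sketch) of sums of positions, used to spot-check Proposition 2.6 on a few tiny positions.

```python
def add(a, b):
    """Sum of two positions: a move is a move in exactly one component."""
    aL, aR = a
    bL, bR = b
    left = tuple(add(x, b) for x in aL) + tuple(add(a, y) for y in bL)
    right = tuple(add(x, b) for x in aR) + tuple(add(a, y) for y in bR)
    return (left, right)

def game_type(position):
    L, R = position
    left_good = any(game_type(x) in ("L", "P") for x in L)
    right_good = any(game_type(y) in ("R", "P") for y in R)
    return {(True, True): "N", (True, False): "L",
            (False, True): "R", (False, False): "P"}[(left_good, right_good)]

zero = ((), ())              # no moves: type P
star = ((zero,), (zero,))    # each player can move to zero: type N
left_only = ((zero,), ())    # only L has a move: type L

# Proposition 2.6: adding a type-P position does not change the type.
for alpha in (zero, star, left_only):
    assert game_type(add(alpha, zero)) == game_type(alpha)
print("adding a type-P position preserved the type in these examples")
```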

8 Monday, September 11

1. Domineering. G = 2 × 2 has type N. H_1 = 1 × 2 has type R. H_2 = 1 × 4 has type R. G + H_1 has type N. G + H_2 has type R.
2. Section 2.3. See the handout on Equivalence.
3. Equivalent games. Not the same as isomorphic: two games may be equivalent but not isomorphic.
4. Two positions α and α′ in normal-play games are equivalent if for every position β in any normal-play game, the two positions α + β and α′ + β have the same type. Write α ≈ α′.
5. Worked on homework problems.

9 Wednesday, September 13

1. Game of Nim: piles of stones. Played using piles of sizes 2, 3, 4.
2. See the example in the book: Domineering on an L-shaped region of 3 squares, and Pick-up-Bricks with 1 brick.
3. Proposition 2.10. If α, β, and γ are positions in normal-play games, then
   (a) α ≈ α (Reflexive Property);
   (b) α ≈ β implies β ≈ α (Symmetric Property);
   (c) α ≈ β and β ≈ γ implies α ≈ γ (Transitive Property).
   Thus we have an equivalence relation.
4. Proposition 2.11. If α is equivalent to α′, then α and α′ have the same type. (Add the 0-brick Pick-up-Bricks game to each.)
5. The converse statement is not a proposition, so equivalence is a finer distinction than type. Domineering: (1 × 2) + (2 × 2) is type N but (1 × 4) + (2 × 2) is type R.
6. Below: positions behave like numbers in some ways.
7. Proposition 2.12. If α, β, γ are positions in normal-play games, then
   (a) α + β ≈ β + α (Commutative Property);
   (b) (α + β) + γ ≈ α + (β + γ) (Associative Property).
8. Lemma 2.13. For positions of normal-play games,
   (a) If α ≈ α′, then α + β ≈ α′ + β. (Add any γ to both sides and use associativity.)
   (b) If α_i ≈ α′_i for 1 ≤ i ≤ n, then α_1 + ... + α_n ≈ α′_1 + ... + α′_n.
   (c) If α_i ≈ α′_i for 1 ≤ i ≤ m and β_i ≈ β′_i for 1 ≤ i ≤ n, then {α_1, ..., α_m | β_1, ..., β_n} ≈ {α′_1, ..., α′_m | β′_1, ..., β′_n}.
9. Lemma 2.14. If β is of type P, then α + β ≈ α. (Add any γ to both sides and use 2.6.) Type P behaves like zero.
10. Proposition 2.15. If α and α′ are type P, then α ≈ α′. (Add any γ to both sides and use 2.6.) Uniqueness of zero.
11. Lemma 2.16. If α + β and α′ + β are both type P, then α ≈ α′. (α ≈ α + (α′ + β) ≈ α′ + (α + β) ≈ α′.) Uniqueness of additive inverse.
12. If you get β by interchanging all the move options in α, then α + β is of type P.
13. Chapter 3. Restrict attention to impartial normal-play games: Pick-up-Bricks, Chop, Chomp, Sprouts, Game of 100, etc.
14. All positions are of type P or N, and the winning strategy is to always move to a P position.

10 Friday, September 15

1. Chapter 3.
2. How to use a binary balancing strategy to identify P and N positions in Nim and to move to balanced positions.
3. Definition of the nim sum.
4. ∗a is the Nim game with a single pile of a stones. This game is called a nimber. ∗0 is of type P; the rest are of type N.
5. Main points of this chapter.
   (a) Every impartial game is equivalent to some ∗k, where k is a nonnegative integer called the nim (or Grundy) value of the game.
   (b) The MEX rule gives the nim value, but in practice these values can be hard to find.
   (c) The P positions of Nim and a winning strategy can be determined using a certain balancing strategy.
   (d) The nim sum explains how to get the nim value of the sum of two games.
6. MEX values of positions. Example of two-pile Nim, noticing that the MEX values in this case are the nim sums of the pile sizes (see the sketch below).
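
A small sketch (my own) of the MEX computation in item 6: the Grundy (nim) value of a two-pile Nim position (a, b) is computed recursively by the MEX rule and, as observed in class, equals the nim sum a XOR b.

```python
from functools import lru_cache

def mex(values):
    """Minimum excludant: the smallest nonnegative integer not in the set."""
    k = 0
    while k in values:
        k += 1
    return k

@lru_cache(maxsize=None)
def grundy(piles):
    """Grundy (nim) value of a Nim position given as a sorted tuple of piles."""
    options = set()
    for i, pile in enumerate(piles):
        for remaining in range(pile):        # remove any positive number of stones
            new = tuple(sorted(piles[:i] + (remaining,) + piles[i + 1:]))
            options.add(grundy(new))
    return mex(options)

for a in range(8):
    for b in range(8):
        assert grundy(tuple(sorted((a, b)))) == a ^ b
print("Grundy value of two-pile Nim (a, b) equals a XOR b for all a, b < 8")
```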

11 Monday, September 18

1. P and N positions and the powers-of-two balancing strategy for general sums of games. Procedure 3.7 and Proposition 3.8 (see the sketch below).
2. Definition of the nim sum. Theorem 3.10. ∗a_1 + ... + ∗a_l ≈ ∗b, where a_1 ⊕ ... ⊕ a_l = b.
3. Section 3.2. Sprague-Grundy theory.
4. Theorems 3.12 and 3.13. If α has MEX value b, then α ≈ ∗b. Proof: show that α + ∗b is of type P via a pairing strategy; thus α + ∗b ≈ ∗0. Add ∗b to both sides, noting ∗b + ∗b ≈ ∗0. So every position is equivalent to a nimber, and the winning strategy is to always move to a position with MEX value 0. The MEX value of a sum is the nim sum of the MEX values.
5. Section 3.3. Analysis of Chop, Pick-up-Bricks, and some Chomp positions.
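
A small sketch (my own) of the balancing strategy in item 1, specialized to Nim: a position is balanced (type P) exactly when the nim sum of the pile sizes is 0, and a winning move shrinks one pile so that the nim sum becomes 0.

```python
from functools import reduce
from operator import xor

def nim_sum(piles):
    return reduce(xor, piles, 0)

def winning_move(piles):
    """Return (pile_index, new_size) moving to a balanced position, or None if
    the position is already balanced (in which case we are losing)."""
    s = nim_sum(piles)
    if s == 0:
        return None
    for i, pile in enumerate(piles):
        target = pile ^ s          # rebalances the nim sum to 0
        if target < pile:          # legal only if it shrinks the pile
            return i, target
    return None                    # unreachable when s != 0

piles = [2, 3, 4]                  # the 2-3-4 game played earlier in class
print(nim_sum(piles))              # 5, so the player to move can win
print(winning_move(piles))         # (2, 1): shrink the pile of 4 down to 1
```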

12 Wednesday, September 20

Worked on homework problems for Chapter 3.

13 Friday, September 22

Chapter 4. Introduced the game of Checker Stacks, for which each position can be associated with a number or value. The negative of a position is obtained by interchanging the labels of all the checkers. The value of the sum of two games is the sum of their values. Based on these principles we determined the value of several positions.

14 Monday, September 25

See handout Games That are Numbers.

15 Wednesday, September 27

Worked on homework problems for Chapter 4.

16 Friday, September 29

1. Chapter 5.
2. Introduced the notion of a two-person zero-sum matrix game. Example: Rock-Scissors-Paper.
3. Gave an example to illustrate the notion of expected value (see the sketch below).
4. Defined pure strategy and mixed strategy.
5. Discussed why a particular mixed strategy might still be poor if your opponent determines what probabilities you are using for your choices.
6. Defined what it means for one pure strategy to dominate another pure strategy, and used this to determine the optimal pure strategies for a particular small game.
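
A small sketch (my own, with made-up mixing probabilities) of items 3-5: the expected payoff to the row player when both players use mixed strategies, here in Rock-Scissors-Paper, and how a non-uniform mix can be exploited once the opponent knows it.

```python
import numpy as np

# Row player's payoffs; rows and columns are ordered Rock, Paper, Scissors.
A = np.array([[0, -1, 1],
              [1, 0, -1],
              [-1, 1, 0]])

def expected_value(p, q, A):
    """Expected payoff to the row player when R mixes with p and C mixes with q."""
    return float(np.asarray(p) @ A @ np.asarray(q))

uniform = [1/3, 1/3, 1/3]
biased = [1/2, 1/4, 1/4]           # leans on Rock

print(expected_value(uniform, biased, A))    # 0.0: the uniform mix is safe
print(expected_value(biased, [0, 1, 0], A))  # -0.25: C exploits the bias by always playing Paper
```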

17 Monday, October 2

1. Defined the saddle point of a matrix game. Not every game has a saddle point, but if one exists, then there is an optimal pure strategy for both R and C (see the sketch below).
2. If dominated rows and columns are sequentially eliminated, resulting in a 1 × 1 matrix, then the remaining entry is a saddle point of the original matrix. However, there are games with saddle points that cannot be discovered in this way.
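
A small sketch (my own, with a made-up matrix) of item 1: a saddle point is an entry that is simultaneously the minimum of its row and the maximum of its column.

```python
def saddle_points(A):
    """Return all (row, column) positions of saddle points of the matrix A."""
    points = []
    for i, row in enumerate(A):
        for j, entry in enumerate(row):
            column = [A[k][j] for k in range(len(A))]
            if entry == min(row) and entry == max(column):
                points.append((i, j))
    return points

A = [[3, 1, 4],
     [2, 0, 1],
     [5, 1, 2]]
print(saddle_points(A))  # [(0, 1), (2, 1)]: both 1's in the middle column are saddle points
```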

18 Wednesday, October 4

1. Played Morra.
2. Explained again why, if dominated rows and columns are sequentially eliminated, resulting in a 1 × 1 matrix, the remaining entry is a saddle point of the original matrix.
3. R mixes her strategies by using probabilities p_i to take a weighted sum of the rows, obtaining a row of expected values. R's goal is to choose the probabilities so as to maximize the minimum entry in that row; call this maximum value M. C mixes his strategies by using probabilities q_j to take a weighted sum of the columns, obtaining a column of expected values. C's goal is to choose the probabilities so as to minimize the maximum entry in that column; call this minimum value m. von Neumann's Theorem states that M = m. This common value is called the value of the game.
4. We illustrated this theorem with an example.

19 Friday, October 6

Exam review.

20 Monday, October 9

Exam #1 on Chapters 1–4.

21 Wednesday, October 11

1. See the file matrix.pdf on the course website.
2. von Neumann Minimax Theorem: M = max_p min(pA) = min_q max(Aq) = m. This common optimal value is called the value, v, of the game.
3. Theorem. min(pA) ≤ max(Aq) for any p and q. This is an easy consequence of the von Neumann Theorem, but it can also easily be proved directly. Corollary. If for some p and q you have min(pA) = max(Aq), then p is optimal for R and q is optimal for C. This equality is a Certificate of Optimality.
4. Some methods of solving matrix games.
   (a) See if there is a saddle point.
   (b) Guess solutions p and q and confirm optimality using the above Certificate of Optimality.
   (c) Graphical method for 2 × n or m × 2 games.
   (d) If you know p is optimal for R with value v = M, compute pA. Then C can only call on (have nonzero q_j for) those j for which entry j of pA equals v. Then set up and solve equations for q. A similar statement holds given an optimal q.
   (e) First sequentially eliminate dominated rows and columns, and then try one of the above methods.
   (f) Iterative method to try to converge to solutions, to be discussed later.
   (g) Formulate and solve a linear program, to be discussed later (see the sketch below).
   (h) Use a matrix game solver app; see the course website.
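
A small sketch (my own, using SciPy; not a tool from the course website) of method (g): the row player's problem as a linear program with variables (p_1, ..., p_m, v), maximizing v subject to the expected payoff against every column being at least v.

```python
import numpy as np
from scipy.optimize import linprog

def solve_row_player(A):
    """Optimal mixed strategy p and game value v for the row player of the
    zero-sum matrix game with payoff matrix A, via linear programming."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    c = np.zeros(m + 1)
    c[-1] = -1.0                                      # maximize v == minimize -v
    A_ub = np.hstack([-A.T, np.ones((n, 1))])         # column j: v - sum_i p_i a_ij <= 0
    b_ub = np.zeros(n)
    A_eq = np.append(np.ones(m), 0.0).reshape(1, -1)  # probabilities sum to 1
    b_eq = [1.0]
    bounds = [(0, 1)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]

# Rock-Scissors-Paper: optimal mix (1/3, 1/3, 1/3), value 0.
A = [[0, -1, 1],
     [1, 0, -1],
     [-1, 1, 0]]
p, v = solve_row_player(A)
print(p, v)
```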

22 Friday, October 13

No class.

23 Monday, October 16

1. More discussion of the graphical method. Note: when solving a 2 × 3 matrix game, for example, each line segment corresponds to a column of the game. Once you solve the game for R and find the point that determines the max, note which line segments contain that point. These particular line segments are then used to find p and 1 - p and the value of the game. They also tell you which strategies C should use; the other columns of the matrix can be discarded. This may reduce the number of choices for C down to one or two, making it easier to solve for C's optimal strategies graphically. Given any 2 × 2 game, either it can be reduced to a 1 × 1 game via domination, or else it can be solved by the graphical method. (A numerical version of the method is sketched below.)
2. Worked on some homework problems.
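
A small numerical version of the graphical method (my own sketch, with a made-up 2 × 3 matrix): for each p, R's guaranteed payoff is the minimum over the column lines p·a_1j + (1 - p)·a_2j, and we scan p to approximate the maximizer and the active columns.

```python
import numpy as np

A = np.array([[1, 3, 0],      # hypothetical 2 x 3 payoff matrix for the row player
              [4, 1, 2]])

ps = np.linspace(0, 1, 10001)
# Lower envelope of the column lines: R's guaranteed payoff at each p.
guarantee = np.min(ps[:, None] * A[0] + (1 - ps[:, None]) * A[1], axis=1)
best = np.argmax(guarantee)
print(f"p ~ {ps[best]:.3f}, value ~ {guarantee[best]:.3f}")   # p ~ 0.250, value ~ 1.500

# Columns whose lines pass through the maximizing point are the ones C uses;
# the rest can be discarded, as described in the notes.
payoffs_at_best = ps[best] * A[0] + (1 - ps[best]) * A[1]
active = np.where(np.isclose(payoffs_at_best, guarantee[best], atol=1e-3))[0]
print("active columns:", active)                              # columns 1 and 2
```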

24 Wednesday, October 18

1. Iterative method to converge to solutions.
2. Formulating matrix games as linear programs.
3. Non-simultaneous Morra. Game theory shows the value of knowledge.
4. Kuhn's Poker. Game theory confirms the value of bluffing.
5. See the file morrapoker.pdf in Canvas Files.

25 Friday, October 20

Introduction to general matrix games.

26 Monday, October 23

1. Utility and the von Neumann-Morgenstern lottery.
2. The Prisoner's Dilemma.
3. See the link to the Cooperation Game on the course website.
4. Pure strategies.
5. Eliminating rows and columns via domination.
6. Identifying a pure Nash equilibrium (see the sketch below).
7. Movement diagrams.
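
A small sketch (my own, using standard Prisoner's Dilemma payoffs rather than any particular numbers from class) of item 6: a pure Nash equilibrium of a general (bimatrix) game is a cell in which each player's choice is a best response to the other's.

```python
# Entries are (row payoff, column payoff); strategies are 0 = Cooperate, 1 = Defect.
payoffs = [[(3, 3), (0, 5)],
           [(5, 0), (1, 1)]]

def pure_nash_equilibria(payoffs):
    rows, cols = len(payoffs), len(payoffs[0])
    equilibria = []
    for i in range(rows):
        for j in range(cols):
            row_best = all(payoffs[i][j][0] >= payoffs[r][j][0] for r in range(rows))
            col_best = all(payoffs[i][j][1] >= payoffs[i][c][1] for c in range(cols))
            if row_best and col_best:
                equilibria.append((i, j))
    return equilibria

print(pure_nash_equilibria(payoffs))  # [(1, 1)]: Defect/Defect is the unique pure equilibrium
```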

27 Wednesday, October 25

1. Converting a game represented in tree form, with probabilistic elements and information sets, into a game represented in matrix form.
2. Converting a game represented in matrix form into a game represented in tree form.

28 Friday, October 27

1. Discussed the notion of a best response. Note that there may be more than one best response to p or to q.
2. A Nash equilibrium is a choice of p and q such that p is a best response to q and q is a best response to p. A game may have more than one Nash equilibrium.
3. Finding Nash equilibria for 2 × 2 matrix games. Use a movement diagram and/or elimination of dominated rows and columns to try to find a pure-strategy Nash equilibrium. Otherwise, select p to equalize the expected values for C, and select q to equalize the expected values for R (see the sketch below).
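
A small sketch (my own, with made-up payoffs) of the equalizing method in item 3 for a 2 × 2 game with no pure Nash equilibrium: R chooses p so that C's two columns give C the same expected payoff, and C chooses q so that R's two rows give R the same expected payoff.

```python
import math

A = [[1, 4],   # row player's payoffs
     [3, 2]]
B = [[3, 1],   # column player's payoffs
     [2, 5]]

def equalizing_equilibrium(A, B):
    # p = probability R plays row 0, chosen to make C indifferent between columns
    p = (B[1][1] - B[1][0]) / (B[0][0] - B[0][1] - B[1][0] + B[1][1])
    # q = probability C plays column 0, chosen to make R indifferent between rows
    q = (A[1][1] - A[0][1]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])
    return p, q

p, q = equalizing_equilibrium(A, B)
print(p, q)  # 0.6 0.5

# Sanity check: R is indifferent between the rows against q, and C is
# indifferent between the columns against p.
assert math.isclose(q * A[0][0] + (1 - q) * A[0][1], q * A[1][0] + (1 - q) * A[1][1])
assert math.isclose(p * B[0][0] + (1 - p) * B[1][0], p * B[0][1] + (1 - p) * B[1][1])
```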

29 Monday, October 30

Worked on homework.

30 Wednesday, November 1

Discussed evolutionary biology (Section 8.2).

31 Friday, November 3

Discussed the Cournot duopoly (Section 8.3).

32 Monday, November 6

Discussed Nash flow and the application of the Brouwer Fixed Point Theorem to confirm the existence of a Nash equilibrium; see Section 9.4 and pages 328–330.

33 Wednesday, November 8

Review.

34 Friday, November 10

Exam #2.