Equilibria of Non-Cooperative Games

Today's lecture covers four somewhat related equilibrium concepts and results used in economic applications of game theory. They include: (1) the folk theorem, (2) the Nash equilibrium, (3) mixed strategies, and (4) subgame perfection.

More Game Theoretic Terminology

Def: A normal or strategic form game is given by: (i) a list of players i = 1, ..., I; (ii) a list of strategies Si that player i might employ; and (iii) a payoff function that defines the payoffs realized by each player under every possible combination of strategies. (See Kreps, p. 379.)

Strategies may be complex in the sense that they involve a sequence of moves, conditional play, or even random play.

Games in normal form are often represented with matrices (as we did above for the PD game) or represented mathematically with payoff functions for each player in each possible strategic circumstance (as we did above for the Cournot duopoly problem). A sequence of play can be represented as a vector of moves.

Every game in extensive form can be represented in normal form.

Def: A game in extensive form is represented as a game tree with (i) a list of players, (ii) an assignment of decision nodes to players or to nature, (iii) a list of the actions available at each decision node, (iv) a correspondence between the immediate successors of each decision node and the available actions, (v) information sets, (vi) an assignment of payoffs for each player at each terminal node (each possible ending of the game), and (vii) probability assessments over the initial nodes and over the actions at any node assigned to nature. (Kreps, p. 363)

A game in extensive form represents game play as a sequence of choices made by game players.

"Simultaneous" choices are represented with restrictions in the information set. If one player does not know what the other chose at the previous "node(s)," then it is as if the whole sequence of decisions are made simultaneously by all players.

Games in extensive form are convenient ways to represent games where a sequence of moves is made (as in chess or checkers), as well as games where information changes in systematic ways as the game unfolds.

Any game in normal form can be represented as a game in extensive form.

Def: A mixed strategy is a random strategy. Suppose that the set of possible actions (or sequences of actions) is S. Then a probability distribution defined over S is a mixed strategy.

Note that a mixed strategy does not have to assign non-zero probabilities to all of the possible actions that may be taken.

One can consider a pure strategy to be a special case of a mixed strategy in which all the probabilities of actions (or sequences of actions) are zero except for one (the pure strategy), which is played with probability 1.

In games with no equilibrium in pure strategies, like the "even or odd finger game," most people intuitively use something like a mixed strategy. That is, they vary their actions without obvious pattern. To be a mixed strategy, however, the pattern must be determined by a particular probability distribution (chosen by the individual player).
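To make the "even or odd finger" example concrete, here is a minimal sketch in Python. It assumes the standard matching-pennies payoff convention of +1 for a win and -1 for a loss (an illustrative assumption, not numbers given above) and shows that when each player mixes 50/50, both players are indifferent between their pure strategies, so neither can gain by deviating:

    import numpy as np

    # Matching pennies / even-or-odd fingers: the row player wins on a
    # match, the column player wins on a mismatch.
    A = np.array([[ 1, -1],
                  [-1,  1]])          # row player's payoffs (assumed +1/-1)
    B = -A                            # column player's payoffs (zero sum)

    p = np.array([0.5, 0.5])          # candidate mix for the row player
    q = np.array([0.5, 0.5])          # candidate mix for the column player

    print(A @ q)                      # expected payoff of each row pure strategy: [0. 0.]
    print(p @ B)                      # expected payoff of each column pure strategy: [0. 0.]
    # Both players are indifferent across their pure strategies, so no
    # deviation (pure or mixed) is profitable: (p, q) is a Nash equilibrium.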

Sufficient Conditions for the Existence of Nash Equilibria in Non-cooperative Games

Proposition. Every finite-player, finite-strategy game has at least one Nash equilibrium if we admit mixed strategy equilibria as well as pure ones. (Kreps, p. 409 and/or Binmore, p. 320.)

The proof relies upon Kakutani's fixed point theorem, which is a generalization of Brouwer's fixed point theorem used in the general equilibrium existence proof derived above. Here is a condensed version (presented in Kreps, p. 409).

Let i = 1, ..., I index the players, let Si be the (pure) strategy space for player i, and let Σi be the space of probability distributions on Si.

The space of mixed strategy profiles is Σ = Σ1 × Σ2 × ... × ΣI, that is, the cross product of the individual mixed strategy spaces.

For each profile of mixed strategies σ = (σ1, σ2, ..., σI), find the best reply correspondence for player i given the other players' strategies, fi(σ-i).

Define f = (f1, f2, ..., fI), the vector of best reply correspondences. Note that f is a mapping from the space of mixed strategy profiles into itself.

This correspondence is upper semicontinuous with nonempty, convex values, and its domain Σ is nonempty, compact, and convex; hence, by Kakutani's fixed point theorem, a fixed point exists. This fixed point is a mixed strategy I-tuple that simultaneously satisfies each player's best reply correspondence; that is, it is a Nash equilibrium. Q.E.D.

A somewhat less general, but more intuitive proof is provided by Binmore in section 7.7.

In relatively simple games, it is often possible to compute the Nash equilibrium in mixed strategies. In more difficult games, like chess with a time constraint, one can prove that an equilibrium strategy (possibly mixed) exists but cannot calculate it. (Why is the time constraint necessary?) Fortunately, a good many economic games have computable equilibria in pure strategies (e.g., monopolistic competition).
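For finite games written in matrix form, pure-strategy equilibria can be found by simply checking every cell for profitable unilateral deviations. The sketch below is a hypothetical Python illustration using prisoner's-dilemma-style numbers chosen for the example (not payoffs taken from the notes):

    import numpy as np

    # Illustrative payoff matrices: row/column 0 = "cooperate", 1 = "defect".
    A = np.array([[3, 0],
                  [5, 1]])            # row player's payoffs
    B = np.array([[3, 5],
                  [0, 1]])            # column player's payoffs

    equilibria = []
    for r in range(A.shape[0]):
        for c in range(A.shape[1]):
            row_best = A[r, c] >= A[:, c].max()   # no profitable row deviation
            col_best = B[r, c] >= B[r, :].max()   # no profitable column deviation
            if row_best and col_best:
                equilibria.append((r, c))

    print(equilibria)                 # [(1, 1)]: mutual defection is the only pure NE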

Subgame Perfection

Definition: A subgame perfect Nash equilibrium for an extensive game is a Nash equilibrium of the game that, moreover, gives a Nash equilibrium in every proper subgame of the game.

Def: A proper subgame of an extensive game is a node t and all its successors.

A subgame perfect equilibrium is also an equilibrium for any subgame of the original game. That is to say, the choice called for at node t by the strategy adopted at the beginning of the game is the same choice that would be made had the game started at node t.

Subgame perfection thus implies that a strategy is "self-enforcing" in the sense that, if such a strategy exists for each player, each will play out the whole series of (possibly conditional) moves called for in the strategy (plan).

Subgame perfection is an important property of self-enforcing contracts and credible commitments. Under a self-enforcing contract, each participant has an incentive to abide by contractual obligations that take place through time.

The theory has been applied to insurance problems, labor markets, political constitutions, and international law (where there is no supranational law-enforcement agency).

It is clear that a strategy which is subgame perfect is a more credible threat than one that is not, since the latter requires some non-rational behavior off the equilibrium path. Consequently, equilibrium strategy pairs under the folk theorem should be subgame perfect if they are to be plausible solutions.

[One method of making credible commitments is to post mutual bonds (or hostages) which are valued by each hostage giver more than by the hostage taker.

In the last round of the game, the parties trade hostages, each of which is considered less valuable to the giver than to the receiver (for example, their spouses?).

The latter solves the last-move problem and induces cooperation in the second-to-last move even if that move involves (small) losses for each party.

Another method is to assure that each round of the game involves dominant strategies (with mutual gains to trade). In this case there is no "hold out" or end-game problem, because it is in each person's interest to perform his or her "duty" in each period.]

Illustrations of Subgame Perfection

Mutual cooperation in a repeated PD game with a known end point is never subgame perfect.

Proof using backward induction: in the last round, the players are in an unrepeated PD game. Consequently, the Nash equilibrium is mutual defection.

If there is no cooperation in the last round, then one cannot lose any future cooperative advantage by failing to cooperate in the second-to-last round.

Similarly, if both players will not cooperate in the second-to-last round, there is no risk (of retaliation) from defecting in the third-to-last round, and so on.

"Rational" players will never cooperate in a repeated PD game of finite length with a known end point. [This may be good news for markets but not for collective action.]

[Note that this may not be true of finite games with an unknown end point. Why?]

Consider the Centipede Game: a finite-length game in which each player may choose to stop the game on his turn, but knows that payoffs will increase if play continues (long enough). Clearly the game ends at time T, but if any player would have done better by stopping earlier, he will stop at that earlier time; that outcome, in turn, causes another player to end the game still earlier, and so on. Stopping on the first turn winds up being the only subgame perfect equilibrium in the standard form of the Centipede Game.
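The same unraveling can be computed directly. Below is a sketch of backward induction in a small centipede game; the particular payoff scheme (stopping at node t gives the mover t + 2 and the other player t, and both players get N if nobody ever stops) is an illustrative convention, not the lecture's own numbers:

    N = 6                             # number of decision nodes (assumed)

    def solve(t):
        """Backward induction from node t: returns (payoff to player 1,
        payoff to player 2, node at which play stops or None)."""
        if t == N:                    # nobody ever stopped
            return (N, N, None)
        mover = t % 2                 # players alternate; 0 = player 1
        stop = [t, t]
        stop[mover] = t + 2           # the mover takes the larger share by stopping
        cont = solve(t + 1)           # value of passing: the opponent then plays optimally
        if stop[mover] >= cont[mover]:
            return (stop[0], stop[1], t)
        return cont

    print(solve(0))                   # (2, 0, 0): the first mover stops immediately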

The Folk Theorem

The folk theorem applies to repeated games, especially games that continue forever or have an unknown (random) end point. Essentially, the folk theorem says that patterns of conditional strategies exist which can make nearly any feasible (and individually rational) distribution of payoffs an equilibrium of the game.

For example, one possible equilibrium is cooperation in a repeated PD game. The intuition is the following: suppose that in a repeated PD game both players announce that if the other ever testifies (defects), he will testify against the other in every subsequent round. This announced strategy, in effect, eliminates the "temptation" payoff in the off-diagonal cell. The cumulative payoff from repeated cooperation now exceeds the one-time off-diagonal payoff followed by a long (possibly infinite) series of non-cooperative payoffs. So, under these announced strategies, mutual cooperation is the best strategy.
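This comparison can be checked with a little arithmetic. The sketch below assumes illustrative PD payoffs (temptation 5, reward 3, punishment 1) and a per-period discount factor, neither of which is specified in the lecture, and compares cooperating forever against defecting once and being punished thereafter:

    T, R, P = 5, 3, 1                 # temptation, reward, punishment (assumed)
    delta = 0.8                       # per-period discount factor (assumed)

    cooperate_forever = R / (1 - delta)               # R + delta*R + delta^2*R + ...
    defect_once = T + delta * P / (1 - delta)         # T today, punished forever after

    print(cooperate_forever, defect_once)             # 15.0 vs. 9.0: cooperation pays here
    print((T - R) / (T - P))          # critical discount factor: cooperation is
                                      # sustainable whenever delta >= 0.5 with these payoffs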

With a suitable adjustment in probabilities, various equilibria in conditional mixed strategies can be found which yield average payoffs ranging from a bit above the mutual defection payoff up to the full cooperation payoff.

The troubling thing about the folk theorem is that it allows too many outcomes to be equilibria, so it has little predictive value. It essentially demonstrates that the equilibria of a sequential game depend on the announced (and believed) conditional strategies of the players in the game.

Review Problems

Stackelberg Game. Suppose that Acme and Apex are duopolists who have identical total cost functions, C = 100 + 10q (where q is the firm's own output), and "share" the same market (inverse) demand curve, P = 1000 - 20Q (where Q is total industry output).

Acme makes its output decision first. What output should it choose if it knows that Apex will simply maximize its own profits given Acme's output?

Is the resulting market equilibrium subgame perfect? Explain.

Does the Stackelberg equilibrium differ from the Cournot equilibrium for this pair of firms? Demonstrate and explain.
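One way to check your answers is sketched below. It assumes the sympy library is available and treats the cost function as depending on each firm's own output, as stated above; the code only sets up and solves the two first-order-condition problems:

    import sympy as sp

    q1, q2 = sp.symbols('q1 q2', nonnegative=True)    # q1 = Acme, q2 = Apex
    P = 1000 - 20 * (q1 + q2)                         # inverse market demand
    profit1 = P * q1 - (100 + 10 * q1)                # Acme's profit
    profit2 = P * q2 - (100 + 10 * q2)                # Apex's profit

    # Cournot: solve both firms' first-order conditions simultaneously.
    cournot = sp.solve([sp.diff(profit1, q1), sp.diff(profit2, q2)], [q1, q2])

    # Stackelberg: substitute Apex's best reply into Acme's profit, then maximize.
    reply2 = sp.solve(sp.diff(profit2, q2), q2)[0]
    q1_star = sp.solve(sp.diff(profit1.subs(q2, reply2), q1), q1)[0]
    q2_star = reply2.subs(q1, q1_star)

    print("Cournot outputs:    ", cournot)
    print("Stackelberg outputs:", q1_star, q2_star)   # the leader produces more than in Cournot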

Continuous Dealings. Suppose that Al uses George's garage for all car repairs. He tells George that if he ever finds out that he has been cheated by George, he will never return. Suppose further that, ex post, cheating can always be detected by Al. George gains $25.00 each time he honestly services Al's car and $50.00 if he cheats. If Al leaves before service is obtained, his payoff is 0. Al receives $15.00 of consumer surplus if he uses George and $5.00 if he uses another garage (known to be honest, but a bit farther away and more expensive). However, if George cheats, Al loses $15.00 (of surplus).

Analyze this as a one-shot game. Should George cheat Al and/or should Al use George? Explain.

Now consider the setting in which the garage game is repeated ad infinitum. Is the strategy pair in which George never cheats and Al continues to use George's garage now a subgame perfect equilibrium? Demonstrate and explain.

How does the nature of the "garage game" change if the relationship will last at most 3 visits?
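A sketch of George's payoff comparison in the infinitely repeated case, using the dollar figures from the problem statement; the per-visit discount factor is an assumption, since the problem does not give one:

    honest_per_visit = 25             # George's gain from an honest visit
    cheat_once = 50                   # one-time gain from cheating; Al then never returns
    delta = 0.9                       # assumed per-visit discount factor

    value_honest = honest_per_visit / (1 - delta)     # 25 + 25*delta + ... = 250 here
    value_cheat = cheat_once                          # 50 now, nothing from Al afterward

    print(value_honest, value_cheat)                  # honesty pays at this discount factor
    print(1 - honest_per_visit / cheat_once)          # honesty is self-enforcing for delta >= 0.5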