Hi again folks. So, let's talk a little bit about a folk theorem now for this kind of repeated game. So we're in the case where there is a discount factor, and people care more about today than about tomorrow and so forth. And we want to take the logic that we just went through in some examples and see whether it holds in a general setting of repeated games.

So what's the folk theorem, what's the extension to repeated games? Take a normal form game. There are actually very many versions of folk theorems, and we'll do a very particular one which has, I think, the basic intuition behind it, and a fairly simple proof. So the idea is, we're looking at some Nash equilibrium of the stage game. Take a stage game, find a profile a which is a Nash equilibrium of the stage game, and then also look for some alternative profile, and here we have a couple of typos on the slide, that should be an a prime. So, look for some alternative profile, a prime, such that everybody gets a strictly higher payoff from a prime than from a, where a is a Nash equilibrium. Then there exists some discount factor below 1 such that, if everybody has a discount factor above that level, there exists a subgame perfect equilibrium of the infinite repetition of

the game that has a prime played in every period on the equilibrium path. So what this is telling us is that the logic we went through in those two examples of prisoner's dilemmas, where we found a discount factor of at least 1/2, or at least 7/9, etcetera, generalizes: take any game, find a Nash equilibrium of it, and find something better than that which you'd like to sustain in an infinitely repeated game.
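Those two thresholds can be recovered numerically. A minimal sketch, assuming stage payoffs in the spirit of the earlier examples (mutual cooperation worth 3 each, mutual defection worth 1 each, and a one-shot deviation payoff of 5 in the first game and 10 in the second):

```python
from fractions import Fraction

def grim_trigger_threshold(gain, loss):
    # A deviation pays `gain` extra today but costs `loss` in every
    # future period once play reverts to the stage Nash equilibrium.
    # Deviating is unprofitable when gain <= (beta/(1-beta)) * loss,
    # i.e. when beta >= gain / (gain + loss).
    return Fraction(gain, gain + loss)

# First example: deviation payoff 5, cooperation 3, punishment 1
print(grim_trigger_threshold(5 - 3, 3 - 1))   # 1/2
# Second example: deviation payoff 10
print(grim_trigger_threshold(10 - 3, 3 - 1))  # 7/9
```

Exactly the 1/2 and 7/9 cutoffs from the examples.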

You can do the same logic that we did in those examples in the general case, where there'll be a high enough discount factor that'll make that sustainable. Okay?

And basically, the proof of this theorem is very similar to what we went through in those examples. So the idea is, we'll play a prime as long as everybody plays it; if anybody deviates from that, then we're going to go to grim trigger. We're just going to threaten to play the Nash equilibrium a forever after, which gives everyone a lower payoff than a prime. And we just need to make sure that people care enough about the loss in the future to offset the gain from today. So in terms of the proof, checking that this is a subgame perfect equilibrium for high enough discount factors, what do we have to do? Well, playing a forever after anyone has ever deviated is part of a subgame perfect continuation, if we ever have a deviation, because it's Nash in every subgame. So we need to check: will anybody want to deviate from a prime

if nobody has in the past? And we can bound the gain. An upper bound on the gain is the maximum, over all players and all possible deviations they could make, of the gain in payoff they would get from deviating. So that gives us a maximum possible gain, M. Next, the minimum per-period loss, m: M is the most they can gain today, and we'll compare it to the minimum they could lose from tomorrow on. The minimum they could lose is, instead of getting a prime, they're going to go to a; so take that loss, the minimum, across

different players for this. And one question here is: why is this the minimum loss relative to the Nash equilibrium, or could they gain by moving away from it? So think about this a little bit. Why wouldn't they want to change from the Nash equilibrium in the future? Right, so the idea there is they're not going to be able to help themselves by trying to change away from the punishment, because that is a Nash equilibrium: they're already getting the best possible payoff, given that the other people follow through with the punishment. So

we've got the maximum possible gain and the minimum possible loss. So if I deviate, given what other players are doing, the maximum possible net gain overall is: I'll gain at most M today, but I'll lose at least m in every future period, and this should be an i here on the slide; then we've got beta i over 1 minus beta i multiplying m. And so, if you go ahead and set this net gain to be non-positive, which is what we need in order for players not to want to deviate, what do we need? We need M less than or equal to this. So M over m is less than or equal to beta i over 1 minus beta i, and that gives us a lower bound on beta i: it has to be at least M over M plus m. Now, that's not a tight lower bound, in the sense that we've gone with fairly loose bounds here. But if everybody has a high enough discount factor, then you can sustain cooperation.
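That bound can be computed mechanically for a small stage game. A minimal sketch, assuming a hypothetical two-player payoff table (a prisoner's dilemma with a temptation payoff of 10, so the threshold comes out to the 7/9 from the earlier example):

```python
def folk_threshold(payoffs, actions, a_nash, a_prime):
    # payoffs[(a1, a2)] = (u1, u2) for a two-player stage game.
    # a_nash is a stage-game Nash profile; a_prime is the profile to sustain.
    n = len(a_prime)
    # M: largest one-shot gain any player gets by deviating from a_prime
    M = max(
        payoffs[a_prime[:i] + (d,) + a_prime[i + 1:]][i] - payoffs[a_prime][i]
        for i in range(n) for d in actions[i]
    )
    # m: smallest per-period loss from reverting to the Nash profile
    m = min(payoffs[a_prime][i] - payoffs[a_nash][i] for i in range(n))
    # Grim trigger sustains a_prime for all beta >= M / (M + m)
    return M / (M + m)

# Hypothetical PD with a temptation payoff of 10
actions = [("C", "D"), ("C", "D")]
payoffs = {("C", "C"): (3, 3), ("C", "D"): (0, 10),
           ("D", "C"): (10, 0), ("D", "D"): (1, 1)}
print(folk_threshold(payoffs, actions, ("D", "D"), ("C", "C")))  # 7/9 ~= 0.778
```

Here M = 10 - 3 = 7 and m = 3 - 1 = 2, so the threshold is 7/9, matching the hand calculation.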

So, this is just a straightforward generalization of the examples we looked through before. And it's showing us that we can sustain cooperation in an infinitely repeated setting, provided people have enough patience for

the future. Now, there are many bells and whistles on this. One thing to think about: you can sustain fairly complicated play if you'd like. So

let's take a look at the game we looked at before: the prisoner's dilemma, but now with this very high payoff from deviating. One thing you can notice is the total of the payoffs here: in the asymmetric profile the players together get 10, while if they both cooperate they're only getting 6 in total. So actually playing the asymmetric profile makes one of the players really well off. If they played cooperate, cooperate in perpetuity, they'd get 3, 3. Suppose they

try and do the following. They say: in odd periods we'll play C, D. Right? So in periods 1, 3, 5, and so forth, we'll play cooperate for the first player, defect for the other. So the second player's going to get 10s in those periods, but then we'll reverse it in the even periods. Right? So now, on average, players are getting 5 each instead of 3 each. So what we'll do is alternate, and as long as everybody continues to abide by these rules, where we nicely do this, we'll continue to do it in the future. If anybody deviates from this, then we'll just go to defect, defect. All right, and you can check and see what kinds of discount

factors you need. And are there different discount factors needed for the player that's playing C in the first period versus the second player? And so forth. So you can go through that. And actually, this

kind of thing is something that people worry about in regulatory settings. So, for instance, imagine that you've got companies bidding for government contracts, and they're doing this repeatedly over time. One way they could do it is to say: okay, look, we could compete against each other, bidding every day to give the government a low cost whenever there's a procurement contract. But what they could

do alternatively is say: okay, look, I'll let you win the contract today, you let me win it tomorrow, and we'll just alternate. And as long as we keep cooperating we won't compete with each other, and we'll enjoy high payoffs; but if that ever breaks down, then we're going to go back to competition. So there are situations where regulators worry, and in fact there are various cases with some evidence that companies will tend to do this, to try and game the system and increase payoffs. So you can see the kind of logic in what has to be true in order for that to happen. Okay. So, repeated games: we've had a fairly

detailed look at these things. Players can condition their future play on past actions. That allows them to react to things in ways that they can't in a static game, and it produces new equilibria in the game. Folk theorems, partly referring to the fact that these were known for a long time in the folklore of game theory before they were actually written down: there are many equilibria in these things, and they're based on key ingredients: having observation of what other players do and being able to react to that, and having sufficient value in the future, either via a limit-of-means evaluation, which is an extreme version, or a high enough discount factor, so that players really care about the future. Now, repeated games have

actually been a fairly active area of research recently. There are a lot of other interesting questions about these. What happens if you don't always see what other players do, if you only see them sometimes, or there's some noise in these things? What happens if there are uncertain payoffs over time, or payoffs are varying? So there's

a whole series of issues there. There are also issues about things like renegotiation. So, you know, the logic here has been: okay, if anybody ever deviates, then we go to a bad equilibrium forever after. So suppose that happens. Somebody deviates, and then, after a few periods, we say: well, this is kind of silly, why are we hurting ourselves? Let's go back to the original agreement. Let's forget about things, let bygones be bygones.

So we can do better by just starting all over, right? Okay, well, that's wonderful; the difficulty is that if we now believe that if we deviate we're eventually going to be forgiven, then that changes the whole nature of the game, and changes the incentives at the beginning. And so incorporating that kind of logic is quite complicated, and another area of research in these. So repeated games are

very interesting. They have lots of applications. There's some interesting logic which comes out of them: sometimes you can sustain cooperation, or better payoffs than you can in a static setting, and sometimes you can't. And we've seen some of the features that affect that.
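As a closing exercise, the alternating scheme from the middle of the lecture can be checked numerically. A minimal sketch, assuming stage payoffs of 3 each for mutual cooperation, 1 each for mutual defection, and (0, 10) when one player cooperates against a defector; the threshold it finds applies to these assumed numbers only:

```python
def alt_value(beta, now, later, horizon=4000):
    # Discounted value of the alternating stream now, later, now, later, ...
    # (truncated geometric sum; fine for beta well below 1)
    return sum((now if t % 2 == 0 else later) * beta ** t for t in range(horizon))

def scheme_holds(beta):
    # "Sacrifice" period: following the scheme gives 0 now, then 10 every
    # other period; the best deviation grabs (D, D) = 1 now, then 1 forever
    # under grim trigger.
    follow_sacrifice = alt_value(beta, 0, 10)
    deviate_sacrifice = 1 / (1 - beta)
    # "Lucky" period: following gives 10 now; deviating to C yields 3 now,
    # then 1 forever, so it is never tempting here.
    follow_lucky = alt_value(beta, 10, 0)
    deviate_lucky = 3 + beta / (1 - beta)
    return follow_sacrifice >= deviate_sacrifice and follow_lucky >= deviate_lucky

# With these assumed payoffs the binding constraint is the sacrifice period,
# and the scheme holds for discount factors of roughly 1/9 and above.
print(scheme_holds(0.05), scheme_holds(0.2))  # False True
```

By symmetry the same constraint binds for each player in their own sacrifice period, so both players face the same threshold here.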