[MUSIC]

When the facts change, I change my mind.

What do you do, sir?

Well, this is a famous quote, not by me, but by John Maynard Keynes,

one of the leading economists of the 20th century.

So in this section,

we're going to consider something called Bayesian updating, and

this is really at the heart of what we're all doing throughout our lives.

We may not realize it, but we are updating our beliefs in light of new information.

Now of course, we're going to look at things from sort of a pure statistical

perspective, perhaps.

But nonetheless, we always learn from our mistakes, hopefully.

And we hopefully make better decisions as we go through our lives,

as we learn more about the ways of the world.

So, let's consider a very simple example to begin with.

I'll return to the rolling of a fair die.

So remember the sample space has six equally likely outcomes, those scores one,

two, three, four, five and six.

We derived its probability distribution previously and

let's suppose an event of interest is rolling a six.

Suppose we win a prize if that particular outcome occurs.

So immediately, from that probability distribution, we can identify

the probability of the event A, where here we're going to define A as rolling a six.

That probability of A is simply 1 over 6.

Now that hopefully is fairly straightforward from the work

we've covered previously.

But let's suppose I give you a little bit of information.

Suppose I told you that when I rolled this die in secret, so

you didn't see what happened, suppose I told you that an even score occurred.

So armed with that information, are you able to update your beliefs, i.e.,

what is the revised or updated probability that a six occurs?

Well, let's break it down and think about it in terms of sample spaces.

So to begin with, before I gave you this extra piece of information,

you knew that there were six equally likely scores for this die.

And you came up with that 1 over 6 probability of rolling a six.

Now let's imagine you are now told indeed that an even score has occurred.

This allows you to revise or update what the sample space could be.

Because if I tell you that an even score occurred,

we can eliminate the possibility that the score was a one, a three, or a five,

i.e., the three odd possible scores.

Which means whatever the outcome was, it must have been either a two, a four,

or a six.

So there is still uncertainty.

We don't know for sure which outcome has occurred.

But we have been able to eliminate three of those possibilities.

Namely, one, three and five.

Well, given those six original outcomes were equally likely,

if we've now reduced this to just three possible outcomes, two, four and six,

they themselves are still also equally likely.

But now we only have three rather than six equally likely outcomes.

So, continuing with our classical probability theme, i.e.,

considering the size of your sample space,

and then simply counting how many of those elementary outcomes agree

with your event of interest, where a six represents just one of the three

possible outcomes in this instance,

the revised, or updated, or conditional probability of

obtaining a six, given that an even score has occurred, would now be 1 over 3.

So, here we see our first real example of Bayesian updating.

Namely, we have an initial belief.

We receive some new information and we revise our beliefs.

In this case, in a probabilistic sense in light of this new information.

So now, I'd like to introduce the notation to represent a conditional probability.

So in the unconditional setting, we had the probability of A equal to 1 over 6.

Now, I'd like to introduce the conditioning,

this receipt of new information.

So if we let B denote the event that we get an even score,

then we can write the probability of A given B.

So this sort of vertical line, this bar represents a conditioning situation.

So now the conditional probability of A given B,

we've derived to be 1 over 3, a third.
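As a quick sketch (not part of the lecture itself), this counting argument can be checked in a few lines of Python, treating events as sets of equally likely outcomes:

```python
# Events as sets of equally likely outcomes of a fair die.
sample_space = {1, 2, 3, 4, 5, 6}
A = {6}          # event A: rolling a six
B = {2, 4, 6}    # event B: an even score

# Unconditional probability of A: |A| / |sample space|.
p_A = len(A) / len(sample_space)     # 1/6

# Conditioning on B shrinks the sample space to B itself,
# so P(A | B) = |A intersect B| / |B|.
p_A_given_B = len(A & B) / len(B)    # 1/3

print(p_A, p_A_given_B)
```

The conditioning step is literally just swapping the denominator: the reduced sample space B replaces the original one.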

So in this instance, the revision to the probability of A has been upwards because

knowing that an even score has occurred means it's more likely that we got a six,

than if we didn't know that information.

But don't think necessarily that revisions to probabilities will always be to

increase them.

Of course, they could go in the opposite direction.

Suppose I told you, instead of an even score, that an odd score had come up.

Now in fact, here, the fact that we get an odd score,

we can think of as the complement to event B.

If you like, the opposite event occurring,

because every score must be either an odd score or an even score.

So if it's not even, of course by default it must be odd.

But if you knew an odd score had occurred,

that must mean that it was either a one, a three, or a five.

And hence, there is no chance that you could have rolled a six in this instance.

So, armed with this complementary event,

we can now write the probability of A given the complement of B.

And of course,

this probability is now revised down to zero because it's an impossible event.

If it's an odd score, there was no way that you rolled a six.

So this idea of Bayesian updating is to collect new information,

hopefully relevant and useful information about the world, and

then update our beliefs accordingly.
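Both updates above, upwards to a third and downwards to zero, can be sketched with the standard conditional probability formula P(A | B) = P(A and B) / P(B). This formula is assumed here rather than derived in the lecture, and the function name is purely illustrative:

```python
# Conditional probability via P(A | B) = P(A and B) / P(B),
# with events as sets of equally likely scores of a fair die.
def cond_prob(A, B, omega):
    """P(A | B) when all outcomes in the sample space omega are equally likely."""
    p_A_and_B = len(A & B) / len(omega)   # P(A and B)
    p_B = len(B) / len(omega)             # P(B)
    return p_A_and_B / p_B

omega = {1, 2, 3, 4, 5, 6}
A = {6}                # rolling a six
B = {2, 4, 6}          # even score
B_comp = omega - B     # odd score: the complement of B

print(cond_prob(A, B, omega))       # revised up to 1/3
print(cond_prob(A, B_comp, omega))  # revised down to 0
```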

So perhaps we should round off with a revisit of our now famous Monty Hall

problem.

Because I gave this to you as sort of your first example

of decision making under uncertainty.

Well, this at its heart involves Bayesian updating.

Now in the accompanying resources you'll find online,

I'll explain the theory in a little more detail.

But at the big picture level, let's just consider these revised probabilities.

So, remember those three original doors, A, B and C, to which you assigned prior

probabilities of a third for the sports car being behind each of those three doors.

So those are what we call the prior probabilities.

The initial state of the world.

You then chose, in our iteration of the game, door A, and I opened door B.

So my action of opening door B gave you some new information.

You learned something which you did not know previously.

So armed with this new information, and of course, here, useful information,

we're now in a position to revise and

update these probabilities that the sports car is behind either door A, B, or C.

Immediately, I think we can assess that the probability it's behind door B,

once a goat, say, has been revealed.

Of course,

it must be a revision from a third down to zero because it is impossible,

given you observed the goat behind door B, that a sports car lies behind it.

We also said that your initial choice, door A, remained closed

because those were the rules of the game.

I said I would not open whichever door that you chose.

So if you chose door A, I was prevented from opening it.

So there was no new information about door A.

It stayed closed because you chose it, and hence

that precluded me from being able to open it.

So if there's no new information, then of course the probability associated with

that particular outcome cannot change, and so it remains a third.

As for the unopened door C, there was some information revealed to you there.

And of course, many people don't see this new information attached to door C because

they don't see any physical change to door C.

It was originally closed and remained closed, unlike door B,

which you actually saw being physically opened,

revealing what was behind it.

And as we said, there were two possible explanations for

why I opened door B in the first place.

If the sports car did lie behind door C,

there's no way I'm going to reveal that to you.

And hence, with probability one, with certainty, I would have to open door B.

So that is one possible explanation.

Of course, the other possible explanation is that the sports car was behind door

A all along, and hence we have two goats.

One behind door B and one behind door C.

And hence, I would be indifferent as to which door I subsequently opened.

And I would sort of randomize between those two doors.

Effectively, I would toss a fair coin: if it came up heads, a 50% chance of that,

I would open door B.

And if it came up tails, a 50% chance of that outcome, I would open door C.

So that action of door B being opened meant new information

was provided to you and it would allow you to update your beliefs probabilistically

here in the Bayesian setting.

So in the accompanying online material,

I give explicit details about how these probability calculations are derived.
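In the meantime, the argument just described can be sketched as a Monte Carlo simulation; this is an illustration under the stated rules (you always pick door A, the host never opens your door or the car's door, and tosses a fair coin when indifferent), not the formal derivation from the online material:

```python
import random

# Monte Carlo sketch of the Monty Hall updating argument.
def simulate(trials=100_000, seed=42):
    rng = random.Random(seed)
    opened_B = 0   # games in which the host happened to open door B
    car_A = 0      # ... and the car was behind door A (your door)
    car_C = 0      # ... and the car was behind door C
    for _ in range(trials):
        car = rng.choice(['A', 'B', 'C'])
        if car == 'A':
            host_opens = rng.choice(['B', 'C'])  # indifferent: fair coin
        elif car == 'B':
            host_opens = 'C'                     # forced: cannot reveal the car
        else:
            host_opens = 'B'                     # forced
        if host_opens == 'B':
            opened_B += 1
            car_A += (car == 'A')
            car_C += (car == 'C')
    # Conditional (posterior) probabilities, given that door B was opened:
    return car_A / opened_B, car_C / opened_B

p_A, p_C = simulate()
print(p_A, p_C)  # close to 1/3 and 2/3
```

The simulated frequencies sit near a third for door A (no new information, as discussed) and two thirds for door C, which is exactly why switching is the better strategy.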

But to begin with, at the big picture level, just appreciate that in life,

as we learn new things and receive new information,

the rational thing to do is to take that on board and

revise our beliefs about the world accordingly.

Perhaps we can just conclude with another example.

When you're deciding what to wear on a given day, no doubt,

you've perhaps listened to the weather forecast.

And the weather forecaster cannot know for

sure whether it's going to be raining or sunny tomorrow.

So there's some uncertainty in their decision making,

some uncertainty in the forecast they provide.

Of course though, even if you have a weather forecaster who has a good reputation,

they won't predict the weather correctly every single time.

No one is perfect; there are no certainties in life, as we said, just death and

taxes, of course.

So there's no certainty in weather forecasting, but if the weather forecaster

gets it right far more often than he or she gets it wrong,

then you tend to trust the forecast they give you.

So before you listen to the weather forecast,

you may have some initial thinking about what you're going to wear or

what activity you may do tomorrow.

And then you listen to the weather forecast and,

if that is giving you some new information,

perhaps a forecast of, say, a lot of rain tomorrow

which you weren't anticipating, then

you would take on board this new information, albeit imperfect information,

as it's not necessarily a perfect forecast.

But then you may take that on board and adjust your behavior accordingly.

So if you were deciding, let's say, to go to the park,

thinking it was going to be a nice day tomorrow,

but then you tune into the weather forecaster who predicts torrential rain,

then no doubt you'll sort of update your beliefs accordingly, and

revise your behavior.

So we'll say a bit more later on in the course about this receipt of new

information.

There are many applications of this we can provide.

We will see an interesting one with regard to share prices and

how they would react to new information being received by the market.

So, conditional probability has been introduced here.

It's a major topic, but it's great, I think, for

you to start to appreciate how we can change and

revise these probabilities as we learn new information about the world around us.

[MUSIC]