0:12

So hello.

We continue with our class on simulation modeling of natural processes.

And now I'd like to discuss our second module on the Monte Carlo method, and

this is what some people call Markov-chain Monte-Carlo, MCMC.

So the main goal again of the Monte-Carlo method is to sample processes.

And here we will just study a process,

a stochastic process, which is actually sampling a state space.

And we first wanna understand the properties of this process and see if we can tune it,

so that it actually samples something we want.

Okay so we consider a stochastic process

1:02

which will move across a state

space, which I'm not specifying precisely here, but I will give you an example.

So what do I mean by exploring?

It means that if I have some position x in this state space, I will jump

to a position x prime at the next time step, and

the probability to pick the x prime to jump to is given by a transition probability, which later on I will call W.

1:37

And each time I've done an attempt to jump somewhere,

I advance the system time by one unit.

So that's the way time evolves in this process, and for

those of you who remember your math classes, this is called a Markov chain.

So, just to give a little example, you may have here

a physical system of particles; for instance, they are somewhere in space.

And basically, you may want to sample the space of configurations

of these particles according to the physics they obey, and

the idea is that from this black configuration I can make a modification:

I can decide to move, for instance, two particles to the white spots, okay?

And so that's typically a jump in my state space.

And the question is: with what probability should I accept this move, okay?

And actually, I have, of course,

a constraint on accepting or rejecting this change: I would like to

accept changes which correspond to a natural movement of these particles.

So you know that a gas is of course made of particles.

But it's certainly not as static as my picture suggests.

The particles keep moving, but they don't explore all configurations,

because they are typically kept at a given temperature.

3:18

They follow a given distribution, which involves the energy of the configuration x, depending on

all the positions of the particles, and the temperature at which the system is kept.
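Written out, the distribution in question (named explicitly later in this lecture as the Maxwell-Boltzmann distribution) is, up to normalization:

```latex
\rho(x) \;\propto\; e^{-E(x)/k_B T}
```

where E(x) is the energy of configuration x and T the temperature at which the system is kept.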

So the question is, is there a way to choose this transition probability?

So that I can move my system from one configuration to another

by still respecting the probability distribution imposed by physics,

for instance or another one that I know.

3:47

Okay, so let's do a little bit of math: the probability that our

exploration process has position x in the state

space at time t + 1 is basically the probability that it was somewhere else,

at some x prime, at time t, times the probability to jump from x prime to x; so

that's very simple probability theory for Markov processes.
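In symbols, writing W(x' → x) for the probability to jump from x prime to x (the slide may use a slightly different notation), this Markov-chain evolution reads:

```latex
p(x, t+1) \;=\; \sum_{x'} W(x' \to x)\, p(x', t)
```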

4:13

So now I will give you a simple example of that.

So we'll consider a 1D space, which is just a discretization of

the horizontal line, meaning that I may have a particle that can jump from

one cell to the next, left or right, or stay at rest.

Or we can say that the discrete space is the set of integers, Z,

and then I want to sample this state space.

4:50

At each step, the particle can move in the positive direction, with a probability I call W+.

Or it can move to the left, in the negative direction, with probability W-, or

it can stay at rest with probability W0, and of course the sum of these should be 1.

So now, with this reduced set of jumps, my equation becomes

simply this one: there are only three possibilities for evolving.

If I am at position x at time t + 1, either I was at position x - 1

and then jumped right, or I was at position x and didn't move,

or I was at position x + 1 and I moved left, okay?
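For the three allowed moves, the equation just described is:

```latex
p(x, t+1) \;=\; W_+\, p(x-1, t) \;+\; W_0\, p(x, t) \;+\; W_-\, p(x+1, t)
```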

So, now suppose I wanna use this

stochastic process to sample the diffusion equation.

So you will learn more about diffusion equation in the coming weeks.

5:46

But, just for now, let me tell you that mathematically it is a partial differential

equation which says that the time derivative of the density rho is the second spatial

derivative of this density times some diffusion coefficient.
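In symbols, with D the diffusion coefficient:

```latex
\frac{\partial \rho}{\partial t} \;=\; D\, \frac{\partial^2 \rho}{\partial x^2}
```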

6:01

And you will learn during week

three that you can discretize such an equation with finite differences.

And it says simply that the value at the next time step

is the previous value plus some combination of your left and right neighbors.

So this is just math, translating this equation into a discrete form.

Okay?
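A standard finite-difference discretization of this diffusion equation, consistent with the description above (writing Δx and Δt for the lattice spacing and time step; the slide may use slightly different notation), is:

```latex
\rho(x, t+\Delta t) \;=\; \rho(x, t)
\;+\; \frac{D\,\Delta t}{\Delta x^2}\,\bigl[\rho(x-\Delta x, t) - 2\rho(x, t) + \rho(x+\Delta x, t)\bigr]
```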

But now, we know that our stochastic process has the following property,

which tells us how the probability density evolves over time,

and you see it's very similar to this equation.

So basically, you see that in this

p(x, t), if I want p to be equal to rho,

I have one contribution here and one contribution here.

So it means that 1 minus 2 times this coefficient should be equal to W0.

And in the same way, I can match this term with this one, and this with this.

So this gives you exactly this condition: provided that you

choose W+ and W- as this quantity, and

W0 as basically the complement, so that everything sums up to one,

you get exactly a process which reproduces the diffusion equation.
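Matching the stochastic evolution term by term with the discretized diffusion equation gives the choice of weights (a sketch of the identification, in the Δx, Δt notation of the finite-difference scheme):

```latex
W_+ \;=\; W_- \;=\; \frac{D\,\Delta t}{\Delta x^2},
\qquad
W_0 \;=\; 1 \;-\; 2\,\frac{D\,\Delta t}{\Delta x^2}
```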

So it's a stochastic process; you have to run it many times,

you have to do statistics, but if you do so, you will see that your particle, which

jumps randomly, will be distributed exactly as the diffusion equation predicts.

What's also interesting in this approach is this condition:

since the Ws are probabilities,

you have the condition that this quantity should be smaller than one half, otherwise

you may create negative probabilities, which of course is not very good.

It's interesting to know that this condition

is also the stability condition of this numerical scheme.

So you see there's a nice consistency between the two approaches.
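To see this concretely, here is a minimal sketch of the random walk in Python (the function and parameter names are mine, not from the lecture): a population of independent walkers, each jumping right with probability W+ = w, left with probability W- = w, and staying put with probability W0 = 1 - 2w. In lattice units (Δx = Δt = 1), the matching above gives D = w, so the variance of the positions should grow like 2Dt.

```python
import random

def random_walk(n_walkers=5000, n_steps=200, w=0.25, seed=1):
    """Evolve independent walkers with W+ = W- = w and W0 = 1 - 2w."""
    assert w <= 0.5, "w > 1/2 would make W0 negative (a negative probability)"
    rng = random.Random(seed)
    positions = [0] * n_walkers
    for _ in range(n_steps):
        for i in range(n_walkers):
            r = rng.random()
            if r < w:            # jump right with probability W+
                positions[i] += 1
            elif r < 2 * w:      # jump left with probability W-
                positions[i] -= 1
            # otherwise stay at rest (probability W0)
    return positions

positions = random_walk()
mean = sum(positions) / len(positions)
var = sum((x - mean) ** 2 for x in positions) / len(positions)
# diffusion predicts var ~ 2 * D * t = 2 * 0.25 * 200 = 100 (lattice units)
```

Running this and doing the statistics, the empirical variance comes out close to 100, the spread that the diffusion equation predicts for D = 0.25.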

8:07

Okay, so we can actually replace the solution

of the diffusion equation by a stochastic process of random walk.

Okay, maybe it looks too simple, solving the diffusion equation with a random walk.

But you should realize that in your random walk you can easily

add obstacles, or aggregation mechanisms, or all kinds of, you know,

other features that might be difficult to insert in the differential equation.

So when the process is naturally represented at the level of the particles,

it's easier to do it as a Monte Carlo simulation rather than

trying to solve a complicated diffusion equation.

8:52

So now, let me go a little bit more into the general case, and

I wanna take again the equation telling me how the probability of

my explorer evolves in time; we already saw this equation.

And now, I wanna transform it in a very simple way.

So, basically, I will single out the situation where x prime is equal to x,

which gives this term, and of course the rest is the sum over

all x prime not equal to x, okay?

And then I take this term and replace it, using the fact that

the sum of all the Ws is 1, so I can rewrite it this way.

9:36

Okay, and now I group these two sums into one, and I pull this term out of it.

So I have this equation, which tells me how the probability

of my random explorer evolves in time, and

of course, again, the goal here is to try to find the Ws so

that this p will actually be the density that you know and want to sample.

10:04

And if you want to impose that p

is equal to some rho that is known, in a steady state, okay,

it's obvious that you need this whole term to be equal to zero; so

basically, to have p equal to rho, you need to find Ws that satisfy this equation.
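Spelled out, the transformation goes like this: singling out the x' = x term and using the normalization of the Ws, one gets

```latex
p(x, t+1)
\;=\; p(x, t)
\;+\; \sum_{x' \neq x} \bigl[\, W(x' \to x)\, p(x', t) \;-\; W(x \to x')\, p(x, t) \,\bigr]
```

so a steady state with p = ρ requires the sum on the right-hand side to vanish.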

10:35

One way to do that is to make each element of the sum zero, which is exactly what we see here.

And that condition is called, in physics,

detailed balance, because it means that the probability of being at x prime

10:48

and jumping to x is the same as the probability of being at x and

jumping to x prime; so that's why it is called detailed balance.

And detailed balance will certainly satisfy this condition.

It's a sufficient condition,

maybe not a necessary one, but we'll go with it for now.
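Making each term of the sum vanish individually gives the detailed balance condition:

```latex
W(x \to x')\,\rho(x) \;=\; W(x' \to x)\,\rho(x')
```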

11:08

So, the famous Metropolis rule: Metropolis is the person who used

the Monte Carlo approach in the Manhattan Project.

He was interested in sampling the Maxwell-Boltzmann distribution which,

again, I am repeating, involves the energy of the system divided by its temperature.

11:29

And he showed that this transition rule is a good one.

It obeys detailed balance, and

it's well known as the Metropolis rule.

So, you say that you go from x to x prime with probability one

if the new energy you have is smaller than the previous one, so

the system is happy to go to a lower energy.

Now, if the energy of your new configuration is

higher than the previous one,

you can still go, but with some probability which decreases, because of this minus sign,

as the temperature gets lower and as the jump in energy gets bigger, okay?

But you still have a probability to go to a state of higher energy, and

that's just the right way to sample this distribution, from the theory we just saw.
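In formulas, the Metropolis rule is:

```latex
W(x \to x') \;=\; \min\!\left(1,\; e^{-(E(x') - E(x))/k_B T}\right)
```

that is, always accept a move to lower energy, and accept an uphill move with probability e^{-ΔE/k_B T}.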

So in practice, how would you do that in our system of particles?

12:22

You'd select a particle at random,

let's say this black one here, and

you decide to move it by a random amount to this position here.

Now, since you've changed the position of the particle,

you've changed the energy of the gas, and you have a new energy E prime.

I will accept this change of the particle if

12:44

it corresponds to the Metropolis rule, which in practice you can write this way.

You can just take a random number between 0 and 1, and

if it is smaller than the minimum between 1 and

this quantity, you accept the change; otherwise you reject it.

So this expression is just a way to express the equation I had before here.
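To make the acceptance test concrete, here is a minimal Metropolis sketch in Python (a toy example of my own, not the lecture's gas of particles): it samples exp(-E(x)/T) for a single one-dimensional degree of freedom with E(x) = x²/2 at T = 1, whose stationary distribution is a standard Gaussian.

```python
import math
import random

def metropolis_chain(energy, x0=0.0, n_steps=100000, step_size=1.0,
                     temperature=1.0, seed=2):
    """Sample exp(-E(x)/T) with the Metropolis rule (toy 1D example)."""
    rng = random.Random(seed)
    x = x0
    e = energy(x)
    samples = []
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step_size, step_size)  # propose a random move
        e_new = energy(x_new)
        # Metropolis rule: always accept a move to lower energy; otherwise
        # accept with probability exp(-(E' - E)/T), i.e. min(1, exp(-dE/T))
        if e_new <= e or rng.random() < math.exp(-(e_new - e) / temperature):
            x, e = x_new, e_new
        samples.append(x)
    return samples

# E(x) = x^2 / 2 at T = 1: the stationary distribution is a standard Gaussian
samples = metropolis_chain(lambda x: 0.5 * x * x)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

Doing the statistics over the chain, the sample mean and variance come out close to 0 and 1, as the Maxwell-Boltzmann distribution for this toy energy demands.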

Okay, and so what's the benefit of doing this? If I

can sample my distribution of the gas,

then I can compute some properties, like the pressure or

whatever physical property of the gas, and of course,

that's what was interesting for the Manhattan Project.

13:40

So, if I wanna check detailed balance, I compute this quantity:

my distribution at equilibrium here, okay,

times this transition rate, which is given by the Metropolis rule.

Okay, now I can of course combine the two exponentials into one, and

I get just this, okay.

But this is nothing but what we call the density of

the configuration x prime, times one if you want.

And one is exactly the probability of jumping from x prime back to x,

because E prime is bigger than E, so that jump happens with probability one.

So you see that, of course, you obey detailed balance.
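Written out, the check goes like this (assuming E' = E(x') > E = E(x), and dropping the normalization constant of ρ):

```latex
\rho(x)\, W(x \to x')
\;=\; e^{-E/k_B T}\, e^{-(E'-E)/k_B T}
\;=\; e^{-E'/k_B T}
\;=\; \rho(x') \cdot 1
\;=\; \rho(x')\, W(x' \to x)
```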

14:23

And for those of you who are a bit more curious, there is another way to satisfy

detailed balance, which is called the Glauber rule.

It simply takes this expression as the transition probability.

And in the case of an equilibrium system, you get this as the transition rule.

And of course, it also obeys detailed balance.
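For reference, the Glauber rule is usually written as follows (this explicit form comes from standard statistical-physics references, since the slide's expression is not reproduced in the transcript):

```latex
W(x \to x') \;=\; \frac{e^{-\Delta E / k_B T}}{1 + e^{-\Delta E / k_B T}}
\;=\; \frac{1}{1 + e^{\Delta E / k_B T}},
\qquad \Delta E = E(x') - E(x)
```

One can check that the ratio W(x → x') / W(x' → x) equals e^{-ΔE/k_B T}, which is exactly what detailed balance with the Maxwell-Boltzmann distribution demands.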

14:43

So, with this, I would like to close my discussion of

the Markov-chain Monte-Carlo approach.

And in the next module, we will discuss what's called the kinetic, or

dynamic, Monte Carlo method.

Thank you for your attention.