Let me give you a discretized version of Brownian motion; it's easier to think about, and the continuous-time version has the same type of properties. I'm going to introduce a process which starts at zero, that's just a normalization. The value of that process at the end of period t_k plus 1 is equal to the value at the beginning of the period plus square root of Delta t times z of t_k, where the z of t_k are independent standard normal random variables. Square root of Delta t happens to be the correct scaling to get a continuous process in the limit when Delta t goes to zero; Delta t is the difference between t_k plus 1 and t_k. We can write this differently. If you look at the difference between W at some future time t_l and W at today's time t_k, you can write this as square root of Delta t times the summation of all these contributions, the standard normal random variables z of t_i. As such, it's a linear combination of normal random variables, so it's also normally distributed. All these terms have zero expected value, so this increment in the value of the Brownian motion is also going to have mean 0. The variance: the scaling factor gets squared when you compute the variance, so each term contributes Delta t. These terms are independent, so the variance of the sum is the sum of the variances. Each z has variance 1, so the variance is just the number of terms in the sum, l minus k, times Delta t. You can also write this as t_l minus t_k. The variance of the increment of Brownian motion over a time period is exactly equal to the length of that time period, t_l minus t_k. This is a particular case of a random walk. A random walk, by definition, is a process like this where you always add to the previous value some independent, identically distributed component and keep adding. Here it's the special case where the component you add is normally distributed, so it's a Gaussian random walk.
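As a sketch, the discretized recursion above can be simulated in a few lines of Python; the function name, seed, and parameter values here are my own illustration, not from the lecture.

```python
import random

def simulate_brownian_path(n_steps, dt, seed=None):
    """Discretized Brownian motion: W(0) = 0 and
    W(t_{k+1}) = W(t_k) + sqrt(dt) * z_k, with z_k i.i.d. standard normal."""
    rng = random.Random(seed)
    w = [0.0]
    for _ in range(n_steps):
        w.append(w[-1] + (dt ** 0.5) * rng.gauss(0.0, 1.0))
    return w

# Monte Carlo check of the variance property: with n_steps * dt = 1,
# the increment W(1) - W(0) should have mean 0 and variance 1 - 0 = 1.
dt, n_steps, n_paths = 0.01, 100, 5000
endpoints = [simulate_brownian_path(n_steps, dt, seed=i)[-1] for i in range(n_paths)]
mean = sum(endpoints) / n_paths
var = sum((x - mean) ** 2 for x in endpoints) / n_paths
print(mean, var)  # should be close to 0 and 1
```

Each call with a different seed gives one sample path; the check at the end illustrates that the variance of the increment equals the length of the time period.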
Now, it turns out that if you let Delta t go to zero, this converges; if you do it in a particular way, it converges to a process which we will call the Brownian motion process. Why is this a benchmark process? Well, if you remember your probability theory, by the central limit theorem many things in nature are approximated by the normal distribution: things which are effectively a sum of a lot of small independent, identically distributed components. The central limit theorem tells you that such a sum converges to the normal distribution. So it is natural to start the benchmark model with a normal distribution; it's the most prevalent continuous distribution in applications. Motivated by this, we have the following definition of the Brownian motion process. It's a process in continuous time, and W is the notation that I'm going to use. W stands for Wiener, one of the mathematicians who developed the theory. We are using B for the bank account, and sometimes people use Z; I'm going to use W for the Wiener process, which is my Brownian motion. The first property required is that W of t minus W of s, the increment, the change in the value of W over an interval from s to t, has normal distribution with zero mean and variance t minus s. That's property 1. We are also going to require that the process W has independent increments. What does that mean? If I look at an increasing sequence of times, and I look at the changes in the value of the Brownian motion across those periods, W of t_2 minus W of t_1, W of t_3 minus W of t_2, and so on up to W of t_n minus W of t_n minus 1, then those, as random variables, are independent. That's a required property. What does it mean in terms of a financial interpretation? It means that how much the stock price changes from today to tomorrow is independent of how much it changed from yesterday to today. This is not always close to what is true in practice; it is sometimes more true, sometimes less true.
Usually, there is at least some short memory in stock prices; in financial asset prices, there is some short-term dependence. However, in our perfect benchmark model with the Brownian motion, these increments, these changes in stock prices across different time periods, are independent. That's this property. The next property is just a normalization: W of 0 is 0. Then the final property is that the so-called sample paths W of t are continuous functions of time. What does this mean in terms of probability theory? Basically, you have random draws from the set of continuous functions: you randomly choose a continuous function of time, and each outcome is one continuous function of time. You can think of this stochastic process as a random variable which takes values in the space of functions; each outcome is one continuous function of time from 0 to infinity. There are no jumps in Brownian motion models: if you just use Brownian motion as the random factor, there are no jumps and the prices are continuous. There are models, which we will mention later on, with occasional jumps. There are Lévy process models that people use for option pricing, which are in continuous time but are not continuous processes: the time moves continuously, but the process jumps all the time, a little bit. That is what happens in practice, you have small jumps, but in the Brownian motion Black-Scholes-Merton model there will be no jumps and the prices will be continuous. These are the four defining properties of Brownian motion. Of course, you can define anything; the mathematical question, 100 years ago, was whether such a thing exists, and this is where it becomes difficult mathematically. There is no easy proof, but you can prove that it exists: you can construct a process with these properties. One proof would basically use the previous slide, the discretized version, then take limits and prove that the limit exists and satisfies these properties.
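The four defining properties just described can be collected compactly in the lecture's notation; the following LaTeX summary is my own restatement of them, not a slide from the course.

```latex
\textbf{Definition (Brownian motion).} A process $(W_t)_{t \ge 0}$ is a
Brownian motion (Wiener process) if:
\begin{enumerate}
  \item $W_t - W_s \sim \mathcal{N}(0,\, t - s)$ for all $0 \le s < t$;
  \item for any increasing times $t_1 < t_2 < \dots < t_n$, the increments
        $W_{t_2} - W_{t_1},\; W_{t_3} - W_{t_2},\; \dots,\; W_{t_n} - W_{t_{n-1}}$
        are independent random variables;
  \item $W_0 = 0$ (a normalization);
  \item the sample paths $t \mapsto W_t$ are continuous functions of time.
\end{enumerate}
```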
There are different ways to prove it, but that has been done by mathematicians, and we're just going to take it for granted: the Brownian motion process exists and satisfies these properties. Here is an Excel simulation. For every simulation you will get a different path; this is one particular path. Using the discretized version of Brownian motion from two slides ago, you simulate independent standard normal random variables, keep adding the scaled term to the previous value, and you get something like this. You can see that it oscillates around 0. It has mean 0, but it moves irregularly; it's a normal distribution at every point in time, so in principle it can take any value, but it's also continuous in time, there are no jumps. This is one sample path, one possible history of Brownian motion. Now, some basic properties. One mathematically curious property, which will also create difficulties when we try to define integrals with respect to Brownian motion, is that these paths, as functions of time, are nowhere differentiable. In this picture the path is discretized, so that's not quite true here, but for actual Brownian motion you cannot define a tangent anywhere. It's so irregular that there is no derivative at any point in time. You have infinitely many possible continuous functions, every outcome is a continuous function like this, and none of them has a mathematical derivative anywhere; they are not differentiable. I'm not going to prove that, but one indication that things go wrong if you try to take a derivative is described here: you look at W of t minus W of s squared, over t minus s squared; it's easier to work with squares. In the limit, if this were a deterministic function which is differentiable, then as s goes to t this would go to the square of the derivative.
The change in the function over the change in the argument converges to the derivative, if the derivative exists, and then you square it. Let's take an expectation, because this is easy to compute. Why? Because I know the expectation of the numerator: the mean is 0, so the second moment is the variance, and I know what the variance is; it's t minus s by definition. I have t minus s over t minus s squared, which cancels, and I'm left with 1 over t minus s. When s goes to t, this converges to infinity. So something is going wrong if you try to take a derivative: something which should converge to the derivative, you square it and take the average, and in fact it explodes, it goes to infinity. The Brownian motion is that irregular. It's also a Markov process. A Markov process, or Markovian process, by definition is a process for which the distribution of the future value, conditional on the information up to today, time s, depends only on today's value, not on the values before s. The Markov property says that, in terms of knowing the distribution of the future of the stochastic process, you can forget the past. You only have to know where you are today; you only have to know W of s, not the history before s. It can be shown that Brownian motion is a Markov process. The next property, one especially important for us, is that Brownian motion is a martingale: the conditional expectation, given the information of the history of Brownian motion up to time s, of the future value W of t, is equal to W of s. The best predictor of the future of Brownian motion is today's value, for any s less than t. This we can actually prove from the definition of Brownian motion, so let's do it. There are a couple of tricks here which are convenient in many situations. Let's compute the left-hand side, the conditional expectation of W of t.
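The divergence argument above can also be checked numerically. A minimal sketch, assuming the discretized increment sqrt(dt) * z from the earlier slide; the helper name, seed, and sample count are my own choices.

```python
import random

rng = random.Random(42)

def mean_squared_quotient(dt, n_samples=100000):
    """Monte Carlo estimate of E[((W(t) - W(s)) / (t - s))^2] for t - s = dt.
    The increment is sqrt(dt) * z with z standard normal, so the exact
    value is Var[increment] / dt^2 = dt / dt^2 = 1 / dt."""
    total = 0.0
    for _ in range(n_samples):
        increment = (dt ** 0.5) * rng.gauss(0.0, 1.0)
        total += (increment / dt) ** 2
    return total / n_samples

for dt in (0.1, 0.01, 0.001):
    print(dt, mean_squared_quotient(dt))  # grows like 1/dt as dt shrinks
```

Shrinking dt makes the estimate blow up like 1 over dt, which is the explosion described in the lecture: the squared difference quotient has no finite limit.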
But actually, I can also write this another way: conditioning on the history up to time s is, by the Markov property, in fact the same as conditioning on just W of s, not the whole history, but just today's value. This notation is conditioning, and the vertical line means given this information, which in this case is today's value W of s. What do we know about Brownian motion? We know something about increments, so I'm going to introduce an increment here: I'm going to subtract W of s and add W of s. I'm adding 0, because I subtract W of s and add W of s, and then I split the expectation into two parts. The reason for doing this is that I know something about increments of Brownian motion. I have that E of W of t given W of s is equal to E of W of t minus W of s given W of s, plus E of W of s given W of s. Let's look at the first term. What I know about it is that this increment is independent of previous increments; you can think of W of s as W of s minus W of 0, because W of 0 is 0. So the conditioning information is independent of this term. Computing an expected value conditional on something which is independent of it, which doesn't affect it, is the same as computing the expected value without conditioning. So in the next line I forget about the conditioning: because the conditioning information is independent of this term, it makes no difference whether I condition or not. And this I know: it's 0, because increments of Brownian motion have mean 0, so this expectation is 0. The second term is the expectation of W of s given W of s. Conditional on W of s, I know W of s; it's a constant given this information, so I can forget about averaging. The conditional expectation of W of s given W of s is W of s, because I know W of s. Therefore, I can replace all of this with W of s. The first term was 0, so I'm left with W of s, and that's my martingale property.
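The two tricks just described can be written out as a short chain of equalities; this LaTeX rendering is my own transcription of the argument, with F_s denoting the information up to time s.

```latex
\begin{align*}
E[W_t \mid \mathcal{F}_s]
  &= E[W_t \mid W_s]                          && \text{(Markov property)} \\
  &= E[(W_t - W_s) + W_s \mid W_s]            && \text{(subtract and add $W_s$)} \\
  &= E[W_t - W_s \mid W_s] + E[W_s \mid W_s]  && \text{(split the expectation)} \\
  &= E[W_t - W_s] + W_s                       && \text{(independence; $W_s$ is known)} \\
  &= 0 + W_s = W_s.                           && \text{(increments have mean 0)}
\end{align*}
```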
The expectation of the future given today's information is today's value. I'm showing you this not just to prove this property, but because these are two standard tricks when you try to compute conditional expectations: you try to split into a term which is independent of the conditioning information, and a term which is completely determined by the conditioning information. Here, the first term is independent of the information and the second term is completely determined by it. If you can do that, then it's easy to compute the conditional expectation. You cannot always do that, but when you can, it works nicely. These are the two or three properties which are going to be convenient and useful. In this set of slides, we introduced Brownian motion, which is going to be the main building block for the classical benchmark continuous-time Black-Scholes-Merton model. That's it for this set of slides.