Hi, welcome back. In this set of lectures, we're talking about randomness and random walks. What I want in these very short lectures is just to unpack a little bit of what I mean by randomness. Because randomness means different things to different people, and oftentimes when you see randomness in a model it has different conceptual underpinnings. So let's talk a bit about what's meant by the concept of randomness. Now, we've got to start by talking about a probability distribution. Remember we talked about a probability distribution saying: here's our set of possible outcomes along here, and here's the probability they occur. If they are equally likely, like they just were in the example we saw of the path-dependent process, then you're gonna get a distribution that looks like this. But

more commonly, we see distributions that are normally distributed, and we get this nice bell curve. Right, remember the bell curve from when we talked about the central limit theorem. So when we talk about randomness, we're gonna care about what the distribution of that randomness is and also where it comes from. And that randomness can have lots of shapes, alright. It can be uniform, it can be a bell curve, and it can even be long-tailed, as we saw in that distribution in the networks graph. So, lots of different distributions out there, lots of different forms of randomness. Now, when we write randomness into models, what we often do is the following conceit. We say, instead of just having some value X, there's also a little error term, epsilon. So there's an X term, which is the value we're concerned with, and there's some randomness or error that gets in our way. We want to talk about where that error can come from, where the randomness can come from. In models, and so many models, whether you look at economics, sociology, physics, biology, or engineering, you're always going to see these epsilon terms. They're everywhere.
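The plus-epsilon idea can be sketched in a few lines of Python. This is purely an illustration, not anything from the lecture itself: the value of X and the parameters of each distribution are made up, and the three shapes just echo the uniform, bell-curve, and long-tail distributions mentioned above.

```python
import random

# X is the value we care about; epsilon is the error term that gets in our way.
X = 10.0

def draw(shape):
    """Draw one epsilon from a chosen distribution shape (parameters are illustrative)."""
    if shape == "uniform":
        return random.uniform(-1, 1)        # flat: every deviation equally likely
    if shape == "normal":
        return random.gauss(0, 0.5)         # bell curve, centered on zero
    if shape == "long_tail":
        return random.paretovariate(3) - 1.5  # shifted Pareto: occasional big deviations
    raise ValueError(shape)

for shape in ("uniform", "normal", "long_tail"):
    outcomes = [X + draw(shape) for _ in range(10_000)]
    avg = sum(outcomes) / len(outcomes)
    print(f"{shape:>9}: mean outcome is about {avg:.2f}")
```

Whatever shape the error takes, the outcomes scatter around X, which is exactly the "X plus epsilon" picture: a value we're concerned with, plus some randomness that gets in our way.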

Where do they come from? What's the reason for them? Well, one reason is just noise, or measurement error. So think about measuring the luminosity of a star, let's say. You look through the telescope and you get some measurement. Well, there are different levels of ambient light, there's different humidity, there's maybe a little bit of dust on the glass. All those things are gonna contribute to creating small deviations when you actually take the measurement. So if the true luminosity is L, you might get L+epsilon, or L-epsilon.
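To make the luminosity example concrete, here's a minimal sketch. The true value L and the noise scale are both made up for illustration; the point is only that each reading comes back as L plus or minus a small epsilon.

```python
import random

L = 3.8e26  # hypothetical "true" luminosity in watts (value is illustrative)

def measure(true_value, noise_scale=0.01):
    """One noisy measurement: the true value plus a small random epsilon."""
    epsilon = random.gauss(0, noise_scale * true_value)  # noise proportional to signal
    return true_value + epsilon

for reading in (measure(L) for _ in range(5)):
    print(f"measured: {reading:.4e}  (off by {100 * (reading - L) / L:+.2f}%)")
```

Each run gives slightly different readings, all clustered near L, which is the sense in which noise makes you off by a tiny bit.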

You're gonna be off by a tiny bit. So noise is one reason: just variation out there in the environment. Second is error. So suppose you run an ice cream store and you want people to, you know, dip ice cream scoops so that they weigh exactly four ounces, so each ice cream cone has exactly four ounces of ice cream on it. Well, different people are gonna be dipping, the ice cream's hard or soft.

People are gonna make mistakes. So instead of selling exactly four ounces of ice

cream in each cone, some people are gonna get big cones, some people are gonna get

small cones, and that's just how it's gonna be, 'cause people make mistakes. So

error is another reason you see epsilon. Remember we talked about six sigma early on. Well, six sigma is about reducing the size of that error. But even if you reduce it as much as you can, there's still going to be some, and that's another cause of randomness. Third cause of randomness is uncertainty. So suppose

you undertake a huge project, like building Big Ben, for instance. Well, what you have is an estimate of how much it's gonna cost to build that project. But that's just an estimate. What's actually gonna play out in reality is gonna be different, because some materials may cost more than you expected, some may cost less, there may be problems along the way as the project unfolds. And so as a result, instead of it costing what you expected, there's gonna be some uncertainty, because the world may change as you're moving along in the process. So, therefore, uncertainty is another reason why we see randomness in models. Fourth

reason is complexity. Remember how we talked in our models about how systems can go to equilibria, they can be periodic, they can be random, or they can be complex. The thing is, remember we talked about these processes in which lots of little things interact. Well, those can produce all sorts of interesting patterns that are complex, that you can't really predict, that we can't know. So if the world is really

complex, we may not know what's gonna happen. So we say, okay, well x is our

best bet, and because I don't really know what's happening, I'm going to throw in an error term, just because that'll correct for what I might get wrong. I know I'm not going to be exactly right, so there's going to be some error. So complexity and uncertainty can actually get modeled in the same way: it's just a plus-epsilon term. Now, there are going to be problems with doing that. We'll see those a little bit later in the course, but for the moment, if you're not sure what's going to happen, whether you're uncertain or the process is complex, it's not a bad idea to throw in some sort of error term. And finally, randomness can occur just through capriciousness. People, you know, people are hard to predict. So if I'm writing a model of people, I don't wanna say I know what these people are gonna do. Instead, I might say, well, you know, they're probably gonna do this, but who knows. You know, they're people. They're crazy. They might do anything. So we put in a little bit of an error term.

All sorts of reasons why things may not go as we expect: there can be noise, there can be error, there can be capriciousness, there can be uncertainty, there can be complexity in the underlying process. So when we think about these models, these random models that we're gonna study, there's all sorts of things that can come into play to make the outcome not be what we expect, so we include a little error term. And that error is gonna introduce things like luck, which makes it really interesting when you think about why someone is successful and why someone is not successful. Alright, thank you.