Okay, let's go through the same exercise but, now instead of a coin, let's talk

about a die. Here, the possible values that the die can

take are one, two, three, four, five, and six.

We're going to assume that the die is fair so all of the numbers have probability

one-sixth of occurring. What's the expected value of that die

roll? So, we denote the expected value again by

E[X]. Here we have one times the probability of getting a one, which is one-sixth, plus two times the probability of getting a two, again one-sixth,

and so on. And if you add them all up, you get 3.5.
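As a quick check, the sum above can be sketched in a few lines of Python; this is just my own illustration of the calculation, using exact rational arithmetic so no rounding creeps in:

```python
from fractions import Fraction

# Expected value of a fair six-sided die: sum of each face value
# times its probability (one-sixth per face). Fraction keeps the
# arithmetic exact: 21/6 reduces to 7/2.
values = [1, 2, 3, 4, 5, 6]
expected = sum(Fraction(v, 6) for v in values)
print(float(expected))  # 3.5
```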

Again, this makes perfect sense. You have six bars of equal height.

One at position one, one at position two, one at position three, one at position

four, one at position five, and one at position six.

And, of course, if you were to balance that out with your finger, you'd put it

right in the middle of those numbers, at 3.5.

That covers discrete random variables, at least two examples of discrete random

variables. Let's talk about how you handle continuous

random variables. Well, again, the definition follows

exactly from the physical definition of center of mass and in this case, the

expected value of a random variable X is the integral from minus infinity to

positive infinity of t f(t) dt. And here t is just the dummy variable of

integration. Pretty much the same formula.

Here, if you omit the t, the integral works out to be

one again, because f is a probability density function.

So, it's putting the t there that turns it into expected value.

If you wanted the expected value of X^2, then it would be the integral from minus infinity

to plus infinity of t^2 f(t) dt. And this just borrows from the definition

of center of mass for continuous bodies, rather than the center of mass of a group

of discrete bodies, exactly from physics. And again, it's

a useful summary of the density function f, which is a complicated functional

construct. It reduces f down to one number,

a property that captures what we think of as, at least by one definition,

the center of the density f. Let's go through an example of calculating

the expected value of a particular kind of random variable, one that we have not

encountered before. So, let's think of a very simple density.

The density is such that it's zero below zero, constant at the value of one between

zero and one, and it's zero above the value one.

So, let's mentally just real quick verify that this is in fact a valid density.

Well, this density is exactly a brick. Starting at zero, ending at one, with

height one. You don't even need calculus to verify

that this integrates to one, because it's just a square.

Its area is the length of the base, which is one, times the height,

which is one. One times one is one.

So, it's a proper density. It's also positive everywhere because it's

zero below zero, zero above one, and the value one between zero and one.

So, it's positive everywhere. So, it's a valid density.
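Just as a sketch, the same check can be done numerically: a midpoint Riemann sum of this density over [0, 1] (it's zero everywhere else) should come out to one. This is only a numerical illustration of the area argument above, not anything from the lecture itself:

```python
# Midpoint Riemann sum of the standard uniform density over [0, 1].
# The density is 1 on [0, 1] and 0 elsewhere, so its integral is its area.
def density(t):
    return 1.0 if 0.0 <= t <= 1.0 else 0.0

n = 100_000
h = 1.0 / n
total = sum(density((i + 0.5) * h) * h for i in range(n))
print(total)  # approximately 1
```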

This is, in fact, an extremely important density.

It's a pretty simple one, but it's an extremely important density, and it's

called the standard uniform density. It sort of represents the idea that any

value between zero and one is equally likely.

So, imagine you were to drop a pencil on a line between zero and one,

in such a way that it was equally probable to land

anywhere between zero and one. This density would represent that process.
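That pencil-drop picture can be mimicked with a quick simulation; this is a hypothetical sketch using Python's standard `random` module, whose `random()` function draws from exactly this standard uniform density. Binning many draws into ten equal-width bins should give roughly equal counts:

```python
import random

# Simulate many "pencil drops": draws equally likely anywhere in [0, 1).
random.seed(0)  # fixed seed so the run is reproducible
draws = [random.random() for _ in range(100_000)]

# Count how many draws land in each of ten equal-width bins.
counts = [0] * 10
for x in draws:
    counts[min(int(x * 10), 9)] += 1
print(counts)  # each bin holds roughly 10,000 draws
```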

Well, let's go ahead and calculate its expected value.

Now again, we already know what the answer is from the geometric argument.

It's a brick that starts at zero and ends at one.

The answer has to be 0.5 because if you wanted to balance that brick out, you

would have to put your finger right in the middle.

So, let's just verify that calculation. Here, our expected value of our uniform

random variable X is the integral from zero to one.

And again we have x; in the previous slide I had t, but let's just use

x in this slide, times the density, which in this case is just the constant one, so

I just didn't write it down, then dx, the variable of integration.

And so that integral is x^2 / 2, evaluated at the bounds zero to one, which

works out to be one half, exactly what we would have thought.
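The same midpoint-sum idea verifies this numerically. This is just a sketch re-deriving the one-half computed above, under the assumption that the density is the constant one on [0, 1]:

```python
# Approximate E[X] = integral from 0 to 1 of x * 1 dx with a midpoint sum.
# The density of the standard uniform is the constant 1 on [0, 1].
n = 100_000
h = 1.0 / n
expected = sum(((i + 0.5) * h) * 1.0 * h for i in range(n))  # x * f(x) * dx
print(expected)  # approximately 0.5
```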

Incidentally, I just want to remind everyone of this notation.

The capital X in the expected value represents a conceptual value of the

random variable. Whereas the little x in this equation

represents the dummy variable of integration,

a value that's actually getting numbers plugged into it.

So, it wouldn't have mattered for this little x if we had used x, or t, or z, or

w, or whatever. However, it's important that we kept it at

capital X. That's the name we've assigned this random

variable, capital X. So, it's a small point, but keep it in

mind. So, I wanted to talk about something that

always comes up in my in-person classes when I talk about this.

So, here we calculated the expected value of X.

And I told you that for the expected value of X squared, you would just calculate

the integral from zero to one of x^2 dx. And a question that always comes up in

class is, well, wait a minute. I could calculate the expected value of X

squared that way, or I could figure out what the distribution of the square of a

uniform actually is, right? A uniform random variable is a random

variable representing some process. So, the square of that has to be a random

variable too, and it must itself have a density.

So, I should be able to calculate the expected value from that density.

If y is x^2, say, do I get a different

answer if I figure out the density associated with the square of a uniform

random variable and calculate

the expected value that way? Or if I calculate it this

way, by just putting a square into the expected value calculation with the

original uniform density? At any rate, you'll be happy to know that

you get exactly the same answer. And in fact, I would also add that it's a

lot easier to just put the square in this integral equation because we automatically

know, off the top of our head, what the density of a uniform random variable is.

We may or may not know what the density of the square of a uniform random variable is.
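For what it's worth, the fact that these two routes agree is known as the law of the unconscious statistician. Here's a sketch, my own illustration rather than anything from the lecture, checking both routes numerically. For the second route, one can show by the CDF method that if Y = X^2 with X standard uniform, then P(Y <= y) = P(X <= sqrt(y)) = sqrt(y) for y in (0, 1), so Y has density 1 / (2 sqrt(y)):

```python
import math

n = 100_000
h = 1.0 / n

# Route 1: E[X^2] = integral from 0 to 1 of x^2 * 1 dx,
# using the uniform density directly (the easy way).
route1 = sum(((i + 0.5) * h) ** 2 * h for i in range(n))

# Route 2: use the density of Y = X^2, which is 1 / (2 sqrt(y)) on (0, 1),
# and compute E[Y] = integral from 0 to 1 of y * 1/(2 sqrt(y)) dy.
route2 = 0.0
for i in range(n):
    y = (i + 0.5) * h  # midpoint of the i-th subinterval
    route2 += y * (1.0 / (2.0 * math.sqrt(y))) * h

print(route1, route2)  # both approximately 1/3
```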

So, this should give you some basic examples of calculating discrete and

continuous expected values. In the problems, you'll have both harder

and easier examples to work out, and in the assessment we'll have harder ones.

In the next section, we'll talk about rules that expected values follow.