I take the I minus H of x matrix over sigma squared, and I multiply it.

So that's my A matrix from my notation before.

And I multiply it times

the variance matrix from before, which I labeled sigma.

So that's sigma squared I, okay?

Sigma squared I, so

that is equal to I minus H of x over sigma squared times sigma squared I.

That's equal to I minus H of x.

We've seen on many occasions that that's idempotent.

And let's go through an argument about the rank.

So the rank of this matrix,

the rank of I minus H of x.

So the rank of a symmetric idempotent matrix is the trace, okay?

So here the rank equals the trace.

So that's the trace of I minus the trace of H of x,

which is x, x transpose x inverse x transpose.

Okay, so first take the trace of this I.

Remember, this is an n by n matrix.

So that's n, the trace of I is n.

And then in the trace of this, I can cycle the factors, because trace of AB is trace of BA.

I can do trace of x transpose x inverse x transpose x.

So this is equal to n minus the trace of a p by p now,

identity matrix which is equal to n minus p.

Now that's the rank of I minus H of x.
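This trace argument is easy to check numerically. The sketch below, with an assumed design matrix (the sizes n = 10 and p = 3 are illustrative, not from the lecture), builds the hat matrix, confirms that I minus H is idempotent, and confirms that its trace and rank both equal n minus p:

```python
import numpy as np

# Hypothetical design matrix: n = 10 observations, p = 3 columns
# (an intercept plus two predictors); sizes are illustrative only.
rng = np.random.default_rng(0)
n, p = 10, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])

# Hat matrix H = X (X'X)^{-1} X'
H = X @ np.linalg.inv(X.T @ X) @ X.T
M = np.eye(n) - H  # I - H

# I - H is idempotent: M @ M equals M (up to floating point)
assert np.allclose(M @ M, M)

# For a symmetric idempotent matrix, rank = trace:
# trace(I) - trace(H) = n - trace((X'X)^{-1} X'X) = n - p
print(round(np.trace(M)))        # prints 7, i.e. n - p
print(np.linalg.matrix_rank(M))  # prints 7 as well
```

The trace is exactly n minus p no matter what full-rank X you use, which is the point of the cyclic-trace step above.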

So the rank of I minus H of x over sigma squared is the same thing, because I just

multiplied it by a scalar.

And so what we get according to this result is that our residuals,

e transpose e divided by sigma squared is exactly chi squared n minus p.

And so another way to write this out is n minus p times S squared, our

variance estimate, divided by sigma squared is chi squared n minus p.
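A quick Monte Carlo sketch can illustrate this distributional claim. The setup below is assumed for illustration (n = 20, p = 4, sigma = 2): simulate y = X beta plus normal noise many times, compute e transpose e over sigma squared each time, and check that its mean and variance match a chi squared with n minus p degrees of freedom (mean n minus p, variance 2 times n minus p):

```python
import numpy as np

# Assumed simulation setup, for illustration only.
rng = np.random.default_rng(1)
n, p, sigma = 20, 4, 2.0
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta = rng.normal(size=p)
H = X @ np.linalg.inv(X.T @ X) @ X.T

stats = []
for _ in range(20000):
    y = X @ beta + sigma * rng.normal(size=n)
    e = y - H @ y                   # residuals e = (I - H) y
    stats.append(e @ e / sigma**2)  # should be chi squared, n - p df

print(np.mean(stats))  # close to n - p = 16
print(np.var(stats))   # close to 2(n - p) = 32
```

The sample mean and variance land on 16 and 32 up to simulation noise, consistent with the chi squared n minus p result.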

And notice, as a special case of this, we get

the ordinary chi squared result for normal data with just a mean,

that's the case where we just have an intercept in our linear regression model.

And this just simply proves that that is chi squared, and we get n minus 1 degrees

of freedom exactly like we show in an introductory statistics class, okay?
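The intercept-only special case is also easy to verify: with X just a column of ones, the residuals are deviations from the sample mean, so e transpose e equals n minus 1 times S squared. A minimal sketch, with assumed values of n and sigma:

```python
import numpy as np

# Intercept-only model, assumed setup: y_i = mu + noise.
rng = np.random.default_rng(2)
n, sigma = 12, 1.5
y = 3.0 + sigma * rng.normal(size=n)

e = y - y.mean()        # residuals from the intercept-only fit
s2 = y.var(ddof=1)      # sample variance S^2

# e'e equals (n - 1) S^2 exactly, up to floating point
print(np.isclose(e @ e, (n - 1) * s2))  # prints True
```

So the n minus 1 degrees of freedom from introductory statistics is just n minus p with p equal to 1.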

This is a very handy result for proving very general

chi squared results for general quadratic forms.

And we'll find it very useful throughout the class.