We'll use the Swiss data set, so here are the first couple of variables. I want y to be the fertility variable, and I'm going to let x be everything except the fertility variable. I also want an intercept in there, so I'll bind a column of ones onto x, and I want to make sure x is a matrix. Then let me let x1 = x[,1:3] and x2 = x[,4:6], the final three columns. So first of all, let's numerically solve the normal equations, solve(t(x) %*% x, t(x) %*% y); those are the estimates from the full fit. And now let's show that these are equivalent to taking the residual, having removed x1 from everything, and then regressing on x2. If I do solve(t(x1) %*% x1, t(x1) %*% y), that gives me my regression estimates with just x1 as the predictor and y as the outcome. And I want the residual, so I take y - x1 times that and assign it to a variable. Let's call that ey; the residual of y having removed x1 is called ey. Now ex2 I want to be the residual of x2 having regressed out x1. Let me just double check: it's x2 - x1 times x1 transpose x1 inverse, x1 transpose, times what I'm treating as the outcome in this case, which is x2. Now if I take dim(ex2), that's the right dimension. And so now, my claim is that if I run the regression with ey as the outcome and ex2 as the predictor, I should get the same regression estimates as if I did the full multivariable regression with x1 and x2 both included in the model, okay? So I do solve(t(ex2) %*% ex2, t(ex2) %*% ey), remembering my transposes, and I get these three coefficients. Now let me compare that back to the full fit. And notice they're identical: -0.87, 0.10, 1.07.
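The transcript doesn't show the typed code verbatim, so here is a runnable reconstruction of the steps described, assuming R's built-in swiss data set (variable names as in datasets::swiss):

```r
# Reconstruction of the demo: residualizing x1 out of y and out of x2
# reproduces the x2 coefficients from the full fit.
data(swiss)
y  <- swiss$Fertility
x  <- cbind(1, as.matrix(swiss[, -1]))  # intercept plus the five predictors
x1 <- x[, 1:3]                          # intercept, Agriculture, Examination
x2 <- x[, 4:6]                          # Education, Catholic, Infant.Mortality

# Full multivariable fit via the normal equations
beta_full <- solve(t(x) %*% x, t(x) %*% y)

# Residual of y having removed x1, and residuals of x2 having regressed out x1
ey  <- y  - x1 %*% solve(t(x1) %*% x1, t(x1) %*% y)
ex2 <- x2 - x1 %*% solve(t(x1) %*% x1, t(x1) %*% x2)

# Regressing residuals on residuals recovers the x2 block of coefficients
beta_resid <- solve(t(ex2) %*% ex2, t(ex2) %*% ey)
cbind(full = beta_full[4:6], residualized = beta_resid)
```

Printing the final comparison shows the two columns agree to numerical precision, which is the equivalence claimed above.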
And what this is highlighting is that if I take my x2 matrix and regress x1 out of every column, and if I take my y vector and regress x1 out of it, and then I do regression treating those sets of residuals as the outcome and predictor respectively, then I get the same answer as if I just fit the whole thing. That really highlights how it is that multivariable regression is doing adjustment. The coefficients for x2 are adjusted in the sense that the linear association with x1 has been removed from everything before considering x2. And conversely, the same thing holds true for x1: the coefficients for x1 are adjusted for x2 in the sense that x2 has been removed from everything before you consider the relationship between x1 and y. So, to me, this really highlights in what sense regression is doing adjustment.
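The same adjustment idea works one coefficient at a time: if you residualize all of the other predictors out of both the outcome and a single predictor, a simple regression of those residuals recovers that predictor's full-model coefficient. A sketch using lm() and Education as the example predictor (my choice of variable, not one singled out in the lecture):

```r
# Single-coefficient version of the adjustment fact, using lm().
data(swiss)
fit_full <- lm(Fertility ~ ., data = swiss)

# Residualize every other predictor out of the outcome and out of Education
ey  <- resid(lm(Fertility ~ . - Education, data = swiss))
eEd <- resid(lm(Education ~ . - Fertility, data = swiss))

# Regression through the origin of residuals on residuals
fit_resid <- lm(ey ~ eEd - 1)
c(full = unname(coef(fit_full)["Education"]),
  residualized = unname(coef(fit_resid)))
```

Both approaches give the same Education coefficient, which is why people describe a multivariable coefficient as the effect of a predictor "with the linear effect of the others removed."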