And next we have to construct our gradients. For the gradients of our trial solution and weighting function we have, in general,

$$u^h_{,i} = \sum_{A=1}^{n_{\text{nodes}}^e} d_A^e\, N_{A,i},$$

so we need to write out the gradient of $N_A$. The way we do it here is with the chain rule:

$$N_{A,i} = N_{A,\xi_1}\,\frac{\partial \xi_1}{\partial x_i} + N_{A,\xi_2}\,\frac{\partial \xi_2}{\partial x_i},$$

again remembering that here $i = 1, 2$. Now, something to observe: I claim that this way of writing it works for bilinear quadrilaterals as well as for linear triangles, even though we said that linear triangles can be written as depending upon an additional coordinate, $\xi_3$. That is because what we are doing here is rewriting $\xi_3$ as depending upon $\xi_1$ and $\xi_2$: we rewrite $N_A(\xi_1, \xi_2, \xi_3)$ using $\xi_3 = 1 - \xi_1 - \xi_2$ for triangles. There are, of course, only two independent coordinates in 2D, so that is all we need to compute. That should be straightforward.

That brings us to the question of how we compute those derivatives of $\xi_1$ and $\xi_2$ with respect to $x_i$. How do we do it? By exploiting the mapping of the geometry: we use the fact that we have an isoparametric mapping. You recall what that means: $x_i$ in element $e$ can be written as a function of $\xi_1$ and $\xi_2$,

$$x_i = \sum_{A=1}^{n_{\text{nodes}}^e} N_A(\xi_1, \xi_2)\, x_{Ai}^e,$$

just as we did before. In fact, we have even drawn that mapping on the first and second slides of this segment. What this lets us do is write out the partial derivatives of $x_i$ with respect to the parent coordinates; let me write it in general fashion:

$$\frac{\partial x_i}{\partial \xi_I} = \sum_{A=1}^{n_{\text{nodes}}^e} N_{A,\xi_I}\, x_{Ai}^e,$$

where $i = 1, 2$ and the capital index $I = 1, 2$ as well. Here, too, I have used the fact that even if we were working with triangles, $\xi_3$ would be written in terms of $\xi_1$ and $\xi_2$; there are only two independent coordinates.

This is what then allows us to write $\boldsymbol{J}$, the Jacobian of the mapping:

$$\boldsymbol{J} = \begin{bmatrix} \dfrac{\partial x_1}{\partial \xi_1} & \dfrac{\partial x_1}{\partial \xi_2} \\[4pt] \dfrac{\partial x_2}{\partial \xi_1} & \dfrac{\partial x_2}{\partial \xi_2} \end{bmatrix}.$$

Now, this is a little different from 3D, but only because we work in two dimensions; it is what you would expect, which is that the Jacobian of the mapping is a tensor that takes two-dimensional vectors to two-dimensional vectors. Alternatively viewed as a matrix, it is a two-by-two matrix. This implies that $\boldsymbol{J}^{-1}$ is actually very easy to write, because for a two-by-two matrix we know there exists an analytic formula for the inverse. We can therefore explicitly compute the inverse.
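To make the chain-rule computation concrete, here is a minimal NumPy sketch for a bilinear quadrilateral. It assumes the usual counterclockwise node ordering of the parent domain; the function names (`shape_N`, `dN_dxi`, `grad_N_physical`) and the array layout are my own illustrative choices, not notation from the lecture.

```python
import numpy as np

# Parent-domain corner coordinates (xi_1A, xi_2A) of the bilinear quad,
# assuming the usual counterclockwise node ordering.
XI_A = np.array([[-1.0, -1.0], [1.0, -1.0], [1.0, 1.0], [-1.0, 1.0]])

def shape_N(xi):
    """The four bilinear shape functions N_A(xi) as a length-4 array."""
    return 0.25 * (1.0 + xi[0] * XI_A[:, 0]) * (1.0 + xi[1] * XI_A[:, 1])

def dN_dxi(xi):
    """Parent-domain derivatives N_{A,xi_I}, shape (4, 2): rows A, columns I."""
    d = np.empty((4, 2))
    d[:, 0] = 0.25 * XI_A[:, 0] * (1.0 + xi[1] * XI_A[:, 1])  # N_{A,xi1}
    d[:, 1] = 0.25 * (1.0 + xi[0] * XI_A[:, 0]) * XI_A[:, 1]  # N_{A,xi2}
    return d

def grad_N_physical(xi, xe):
    """Physical gradients N_{A,i} at parent point xi, plus det J.

    xe is the (4, 2) array of nodal coordinates x^e_{Ai} for element e.
    J[i, I] = sum_A N_{A,xi_I} x^e_{Ai}; the 2x2 inverse is computed
    explicitly, and N_{A,i} = N_{A,xi_I} (J^{-1})_{Ii} is the chain rule.
    """
    dN = dN_dxi(xi)
    J = xe.T @ dN                # 2x2 Jacobian of the isoparametric map
    Jinv = np.linalg.inv(J)      # analytic 2x2 inverse exists if det J != 0
    return dN @ Jinv, np.linalg.det(J)
```

For a rectangular element aligned with the axes, $\boldsymbol{J}$ comes out diagonal and constant over the element; for a general quadrilateral it varies with $\boldsymbol{\xi}$, which is why it must be re-evaluated at every quadrature point.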
But that inverse, by definition, is

$$\boldsymbol{J}^{-1} = \begin{bmatrix} \dfrac{\partial \xi_1}{\partial x_1} & \dfrac{\partial \xi_1}{\partial x_2} \\[4pt] \dfrac{\partial \xi_2}{\partial x_1} & \dfrac{\partial \xi_2}{\partial x_2} \end{bmatrix}.$$

Why this matters is that these are the terms that get used in our chain rule to compute derivatives of our functions. I have just rewritten the formula for the gradient of $u^h$,

$$u^h_{,i} = \sum_{A=1}^{n_{\text{nodes}}^e} d_A^e\, N_{A,\xi_I}\, \frac{\partial \xi_I}{\partial x_i},$$

with a sum implied over the repeated index $I$, which you remember runs over 1 and 2. Having written out the inverse of the Jacobian gives us those terms.

So that's it. Now we can go back to assembling all the integrals that we need to compute; now we can compute element integrals. Remember that the way we go about that is to say

$$-\int_{\Omega} w^h_{,i}\, j^h_i\, dA = -\sum_e \int_{\Omega^e} w^h_{,i}\, j^h_i\, dA,$$

and we know how to compute everything inside there. Writing this out in detail, in one fell swoop I am going to write the final expression:

$$\int_{\Omega^e} w^h_{,i}\, j^h_i\, dA = -\sum_{A,B=1}^{n_{\text{nodes}}^e} c_A^e \left( \int_{\xi_2=-1}^{1} \int_{\xi_1=-1}^{1} N_{A,i}\, \kappa_{ij}\, N_{B,j}\, \det\boldsymbol{J}(\boldsymbol{\xi})\; d\xi_1\, d\xi_2 \right) d_B^e,$$

with $A$ and $B$ running from 1 to the number of nodes in the element in the general case. Notice that I have straight away written the integral as an integral over the parent domain. A few remarks on the terms inside. We do not even need to write the shape-function gradient as $N_{A,\xi_I}\,\partial\xi_I/\partial x_i$; we can write it directly as $N_{A,i}$, where the lower-case $i$ simply means the gradient with respect to physical coordinates, and we saw on the previous slide how to compute those. The $-\kappa_{ij}$ comes from the flux, and to compute $u^h_{,j}$ we pick up an $N_{B,j}$ together with the $d_B^e$ that comes out at the end. As for the area element: in the physical statement, $dA$ is the elemental area of $\Omega^e$ in the physical domain, but because we are integrating over the parent domain, we pick up $\det\boldsymbol{J}$, which depends upon the vector $\boldsymbol{\xi}$, that is, upon both coordinates, multiplying the elemental area $d\xi_1\, d\xi_2$ in the parent domain. Simple as that.

We know how to carry out this integral; we have looked at numerical quadrature. It is going to give us, finally, $\bar{K}^e_{AB}$, the $AB$ entry of the $\bar{K}$ matrix for element $e$. If we were doing heat conduction, we would call this the conductivity matrix.

Just like that, we go ahead to compute $\int_{\Omega} w^h\, \bar{f}\, dA$, which is again $\sum_e \int_{\Omega^e} w^h\, \bar{f}\, dA$, where now

$$\int_{\Omega^e} w^h\, \bar{f}\, dA = \sum_{A=1}^{n_{\text{nodes}}^e} c_A^e \int_{\xi_2=-1}^{1} \int_{\xi_1=-1}^{1} N_A\, \bar{f}\, \det\boldsymbol{J}\; d\xi_1\, d\xi_2.$$

Here we get $N_A$, no derivatives, times $\bar{f}$. We again need to integrate over the elemental area in the parent domain, so once more we pick up the determinant of $\boldsymbol{J}$, and then $d\xi_1\, d\xi_2$.
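As a concrete companion to these two element integrals, here is a quadrature sketch that builds on `shape_N` and `grad_N_physical` from the sketch above. The name `element_integrals`, the arguments `kappa` and `fbar`, and the choice of a constant source $\bar{f}$ over the element are illustrative assumptions; the tensor-product Gauss rule itself is the numerical quadrature the lecture refers to.

```python
import numpy as np  # (already imported in the previous sketch)

def element_integrals(xe, kappa, fbar, n_gauss=2):
    """Kbar^e and Fbar^e for one bilinear quad, by Gauss quadrature:
    Kbar_AB = int N_{A,i} kappa_ij N_{B,j} det J dxi1 dxi2,
    Fbar_A  = int N_A fbar det J dxi1 dxi2, over the parent domain."""
    pts, wts = np.polynomial.legendre.leggauss(n_gauss)  # 1D rule on [-1, 1]
    Ke = np.zeros((4, 4))
    Fe = np.zeros(4)
    for g2, w2 in zip(pts, wts):        # tensor-product rule: loop over xi2...
        for g1, w1 in zip(pts, wts):    # ...and over xi1
            xi = np.array([g1, g2])
            dN_dx, detJ = grad_N_physical(xi, xe)
            w = w1 * w2 * detJ          # quadrature weight times area scaling
            Ke += w * dN_dx @ kappa @ dN_dx.T   # N_{A,i} kappa_ij N_{B,j}
            Fe += w * shape_N(xi) * fbar        # N_A fbar
    return Ke, Fe

# Hypothetical usage: a unit-square element with isotropic conductivity.
xe = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
Ke, Fe = element_integrals(xe, kappa=np.eye(2), fbar=1.0)
```

A quick sanity check on this unit-square element: every row of $\bar{K}^e$ sums to zero, because $\sum_B N_B = 1$ implies $\sum_B N_{B,j} = 0$; a constant field has zero gradient and so produces no flux.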
One thing that I should point out, perhaps, about the previous slide: when we look at those summations over $i$ and $j$, just remember that $i, j = 1, 2$ here.

So we come back here; that is how we need to compute this particular term. What we also know is that when we look at that term, it is what we will call, I believe, $\bar{F}^{\text{int},e}_A$ for element $e$.

And the very last one is now the integral over the Neumann boundary:

$$\int_{\partial \Omega_j} w^h\, j_n\, ds.$$

This one actually takes a little work, simply because we need to be careful about how we pick the degrees of freedom that contribute to that particular boundary from each element, and also about recalling for ourselves what we do with the integral along the curve that $ds$ implies. So we will come back and do that in the next segment.