So far in this course, you have learned that the weighted least squares solution to estimating total capacity is biased, but that the ideal weighted total least squares solution demands too much computation to use in a practical battery management system. Over the last two lessons, though, you learned about a proportional-uncertainty version of weighted total least squares, which I call TLS or total least squares, that actually is feasible to implement in a battery management system. Unfortunately, this method is very restrictive: it requires that the uncertainties on the x and y measurements be proportional, and that's not necessarily true in practice. So, we desire to find a solution that approximates the ideal weighted total least squares solution but allows these uncertainties to be non-proportional. We also desire a feasible solution that can be implemented in a recursive way in a battery management system using finite memory and constant computation. It will take us a little while to develop this new solution. In this lesson, we will begin to do so by looking at some geometry. This may seem a little unlikely, but it is the geometry of the total least squares solution that allows for an efficient solution, and the geometry of the general weighted total least squares solution that does not allow for an efficient solution. So, we're going to make an observation from this geometry that leads us to understand why the total least squares solution gives us something nice, but the weighted total least squares solution does not. Then, based on this observation, we are going to propose a new geometry that approximates the weighted total least squares solution that we desire to find, but has some of the nice properties of the simpler total least squares solution: a recursive implementation, and so forth. So, this lesson looks at geometry, and in later lessons this week you will learn how to use this geometry to develop the solution itself.
The illustration on this slide describes the geometry of the solution obtained by weighted total least squares on the left and by total least squares in the middle. It also shows a proposed geometry for a new solution on the right that you will learn about this week. This figure will remain at the top of the slide for everything that we look at today, and we'll proceed from left to right as I discuss it. Let's begin with the illustration on the left, which shows the geometry of the weighted total least squares solution. In this figure, the measured data pairs x and y are shown as filled circles with their corresponding confidence intervals as error bars, as you have seen a number of times already. Something is new in this figure, though: I show you the transformation between a measured data pair and its mapping on the total-capacity line. Remember that the data pair is denoted as lowercase x and y, and its mapping on the line as uppercase X and Y. So, the dashed line shows you the transformation that creates this relationship between the lowercase x, y and its mapping, uppercase X, Y, on the line. Remember that we solved this using Lagrange multipliers, which is what enforced the constraint while optimizing the cost function. The important thing about the weighted total least squares geometry is that it allows every point to have different confidence intervals in each dimension. So, it's perfectly general, perfectly arbitrary, and that is what was desirable about it. Continuing to look at the figure on the left, we see that the distance between a measured value of lowercase x and its mapping uppercase X is not necessarily equal to the distance between lowercase y and its mapping uppercase Y on the line. These distances depend on the respective confidence bounds on the measured lowercase x and y data pair.
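To make this mapping concrete, here is a minimal numeric sketch in Python. I assume a total-capacity line through the origin, Y = Q̂·X, with per-point variances on x and y; the closed form comes from minimizing the two weighted squared distances over the choice of X. The symbols and function name here are my own illustration, not the course's notation:

```python
def wtls_mapping(x, y, Qhat, sx2, sy2):
    """Map a measured pair (x, y) to the point (X, Y) on the line Y = Qhat*X
    that minimizes (x - X)**2 / sx2 + (y - Y)**2 / sy2, where sx2 and sy2 are
    the variances of the x and y measurements.  Setting the derivative with
    respect to X to zero gives the closed form below."""
    X = (x / sx2 + Qhat * y / sy2) / (1.0 / sx2 + Qhat**2 / sy2)
    return X, Qhat * X

# When sx2 == sy2, the segment joining (x, y) to (X, Y) is perpendicular to
# the line.  When sx2 is much smaller (we trust x more), X stays close to x,
# most of the correction lands on y, and the segment meets the line at some
# other, arbitrary angle -- the behavior described in the figure.
```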
So, if the quality of the x measurement is better than the quality of the y measurement, then we trust it more; we think it's closer to the line. So, its distance to its map on the line should be shorter than the distance from y to its map on the line. To put it a different way, if the quality of x is poorer than the quality of y, then its distance to its map on the line should be greater than the distance from y to its map on the line. So, the distances between the x and y data points and their mappings on the line differ from point to point, depending on the relative quality of the measurements as indicated by their confidence intervals. The dotted lines that join a data point to its mapping on the line will therefore intersect the line at arbitrary angles, because these distances can be arbitrary. Now, let's consider the middle figure. This is the geometry of the simplified method that I call total least squares. In particular, we're going to consider what happens if the uncertainties of x and y are exactly equal. If that's the case, it turns out that the dotted lines that join a measured data pair to its mapping on the line intersect that line at a 90-degree angle: a perpendicular intersection, as you can see drawn on this figure. The distance between a measured value of x and its mapping on the line is equal to the distance between y and its mapping on the line, and that is what forces this mapping to be perpendicular to the total-capacity line. Now, this is a special case even of the special case that we call TLS, because here I've restricted the uncertainties of x and y to be exactly equal to each other, whereas earlier we talked about them being proportional but not equal. If they're proportional and not equal, it changes the geometry of the problem, but this really doesn't undermine the solution significantly.
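In the equal-uncertainty special case, the weighted mapping reduces to ordinary perpendicular (orthogonal) projection onto the line. A small sketch, again using my own notation for a line through the origin:

```python
def tls_mapping(x, y, Qhat):
    """Perpendicular (orthogonal) projection of (x, y) onto the line
    Y = Qhat*X -- the special case of the weighted mapping when the x and y
    uncertainties are exactly equal."""
    X = (x + Qhat * y) / (1.0 + Qhat**2)
    return X, Qhat * X
```

You can verify perpendicularity by checking that the residual segment (x − X, y − Y) is orthogonal to the line's direction vector (1, Q̂).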
To see why, imagine that all of the error bounds on x were narrower than I've drawn, while the error bounds on y are exactly the same as what I've drawn. To change that scenario into the figure that I've drawn, I simply rescale x. If I rescale x by multiplying by some proportionality value, then its error bounds also get multiplied by that proportionality value. So, it's always possible to find a way to rescale x so that the error bounds of the rescaled x are the same as those of y, and we recover this geometry. In practice, what I would do is rescale the data points by scaling either x or y (actually, I usually rescale y) so that the error bounds of x and y are the same. Then we have this geometry, we solve the problem as shown on this slide, and we scale the solution back to the original problem. You'll learn more about that later this week. So, the left frame shows the ideal solution, but it is not feasible to implement. The middle frame shows a case that is ideal only in certain very specific situations, but it's quite simple to implement. The reason it turns out to be simple to implement has to do with its geometry: the right-angle intersection of the mapping between the data point and its location on the line. So, what we're going to do is approximate the geometry of the left frame with something that is similar to, but a little different from, it. This new proposed geometry has the nice geometric properties of the middle frame, and that's going to lead to a nice simple solution. This new solution is going to be called the approximate weighted total least squares, or AWTLS, solution. At this point, the figure looks very complicated, and we're going to spend some time in the next lesson going over many of its details.
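Here is a hedged sketch of that rescaling idea. I assume the y-uncertainty is k times the x-uncertainty for every point, and I use an SVD-based total-least-squares slope for a line through the origin; the function names and the SVD route are my own choices for illustration, not necessarily how the course implements it:

```python
import numpy as np

def tls_slope(x, y):
    """Total-least-squares slope of a line through the origin, y = Q*x:
    minimizes the sum of squared perpendicular distances.  The smallest
    right singular vector (a, b) of the data matrix [x y] satisfies
    a*x + b*y ~= 0, so the slope is -a/b (assumes a non-vertical line)."""
    _, _, Vt = np.linalg.svd(np.column_stack([x, y]))
    a, b = Vt[-1]
    return -a / b

def proportional_wtls_slope(x, y, k):
    """Sketch of the rescaling trick: if sigma_y = k * sigma_x for every
    point, divide y by k so both error bounds match, solve plain TLS on the
    rescaled data, then scale the slope back to the original problem."""
    return k * tls_slope(x, y / k)
```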
Again, the main observation to make at this point is that we are going to restrict the transformation between a measured data pair and its mapping on the line to be perpendicular, just like it is in the middle figure on this slide for the TLS solution. When we do this, the result is a recursive solution that is feasible to implement on an embedded system. With this new geometry, though, we are still going to weight the distance between x and its map differently than we weight the distance between y and its map. This is going to give us a better total-capacity estimate than the middle geometry, the TLS solution, when the uncertainties of x and y are not proportional for every data point. To summarize, then: the weighted total least squares solution maps a measured data pair x and y to a point on the total-capacity line with a relationship that is generally non-perpendicular to that line. This is the ideal mapping. This is what we would love to be able to do, but it's not practical to implement. Earlier this week, we talked about a TLS solution that instead maps from the measured data point to the line using a very particular geometry that, as we have shown here, is either perpendicular or can be scaled to be perpendicular. The TLS solution is not as general as the weighted total least squares solution, and therefore will not generally give estimates of total capacity that are quite as good; but in certain cases, for example the condition under which we derived it, it is exactly equal to the WTLS solution. That is, if the error bars on x and y are exactly proportional for every measurement, then the TLS solution is exactly equal to the weighted TLS solution. Very importantly, the TLS solution is practical to implement. So, we're going to use this observation of orthogonality, this right-angle intersection, to propose a suboptimal mapping from the data pair x, y to the line, and this mapping will be perpendicular.
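As a purely illustrative sketch of the idea (the actual AWTLS cost function is derived in the next lesson, so treat this as my own assumption of its flavor, not the course's formula): keep the perpendicular projection from TLS, but weight the X- and Y-components of each residual by their own variances, as in WTLS:

```python
import numpy as np

def awtls_style_cost(Qhat, x, y, sx2, sy2):
    """Illustrative cost only: project each point perpendicularly onto the
    line Y = Qhat*X (as in TLS), but weight the X- and Y-components of the
    residual by their own variances sx2 and sy2 (as in WTLS)."""
    X = (x + Qhat * y) / (1.0 + Qhat**2)  # perpendicular mapping
    Y = Qhat * X
    return float(np.sum((x - X)**2 / sx2 + (y - Y)**2 / sy2))
```

A slope estimate would then come from minimizing this cost over Q̂, for example with a one-dimensional search.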
But unlike the TLS solution, this proposed new method also allows different weighting of the uncertainties of x and y. At this point, I've shared with you only very high-level details of the geometry, and have not discussed them in a lot of detail. But we're going to use the geometry from this figure as we proceed beyond this point. So, let's move to the next lesson, where we will define a cost function that we will optimize, and this cost function is based on this new geometry. We call it the approximate weighted total least squares, or AWTLS, cost function. It is probably the best solution that you will see in this course.