In the last lesson, you learned that we are going to use the full enhanced self-correcting cell model, starting at the present estimate of the full model's state, to find voltage-based power limits. When we're looking for the maximum value of discharge current, we're looking for the value of current that makes the future voltage predicted by the voltage equation, minus vmin, equal to zero. And when we're looking for the maximum absolute charging current, or minimum signed charging current, we're looking for a value of current that causes the future voltage predicted by the voltage equation, delta-t seconds in the future, minus vmax, to equal zero. So in both cases, we're looking for a zero location of a nonlinear function. Such a zero location is called a root of a nonlinear function: we're looking for the root of a nonlinear equation. There are different algorithms that can be used to find the root of a nonlinear equation, but the one that I share with you is very simple, and it's almost always one of the most effective methods as well. This method is called the bisection search algorithm. The bisection search algorithm looks for a root of a nonlinear function when we know beforehand that at least one root lies within a certain interval. One endpoint of the interval is labeled x1 and the other endpoint is labeled x2. Since we're looking for a zero crossing, we know for certain that there is at least one zero crossing inside the interval if the sign of the output of the nonlinear function is different when the function is evaluated at input x1 and when it is evaluated at x2: a continuous function having different signs at two points must pass through zero somewhere in between at least once. The bisection algorithm iterates multiple times, dividing the search interval into smaller and smaller pieces until it is satisfied with the resolution of its result.
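To make that setup concrete, here is a small Python sketch of how the limit-finding problem reduces to root finding. The lecture's code is in Octave, and `vPredict` below is only a toy stand-in for the model's delta-t-ahead voltage prediction; the linear voltage model and all names here are illustrative assumptions, not the ESC model itself.

```python
def discharge_limit_fn(vPredict, vmin):
    """Wrap a voltage predictor into the nonlinear function g(i) whose
    root is the maximum discharge current: g(i) = vPredict(i) - vmin."""
    return lambda i: vPredict(i) - vmin

# Toy predictor only: voltage sags linearly with current (NOT the ESC model)
vPredict = lambda i: 3.6 - 0.01 * i
g = discharge_limit_fn(vPredict, vmin=3.0)

# g changes sign between 0 A and 100 A, so a root (here 60 A) lies between them
```

The real `vPredict` would come from simulating the cell model delta-t seconds ahead; the point is only that the power-limit question becomes "find the current i where g(i) crosses zero."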
So on every iteration, the algorithm takes the two endpoints and averages them to compute a midpoint. Then the function is evaluated at this midpoint x value. The actual value of this evaluation turns out not to be critical, because all we care about is the sign of the result. If the sign of the result is equal to the sign of the function evaluated at x1, then I replace x1 with the midpoint value. I'm trying to cut down the size of the interval in which I'm looking for the root location, but I also want to maintain that the signs of the function at x1 and x2 are different, so that I know the zero location lies between them. So if the function evaluated at the midpoint and the function evaluated at x1 have the same sign, then I cut down on the interval by replacing x1. Or, if the sign of the function evaluated at the midpoint is equal to the sign of the function evaluated at x2, I replace x2 with that midpoint. By doing this, we are guaranteed that at least one function root remains between x1 and x2. But notice that the distance between x1 and x2 is divided in half, so our uncertainty regarding the location of a root of this nonlinear equation is divided in half with every iteration of the method, and the resolution of our estimate improves by a factor of two for every pass through the loop. We keep repeating this basic iteration until the resolution of the root is accurate enough to meet whatever specification we might have. So each time we compute one iteration of the bisection algorithm, we bisect, or divide, the uncertainty of the root location by a factor of two. At some point, the uncertainty is smaller than some design threshold epsilon, which is how close I need to find that root value. For example, if I need to find the maximum current to within a range of 1 ampere, then I set epsilon equal to 1.
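That single halving step can be sketched as follows. This is a Python illustration of the textbook form just described (average the endpoints, keep opposite signs), not the lecture's Octave code, and the helper name is mine.

```python
def bisect_step(h, x1, x2):
    """One bisection iteration: halve the bracket [x1, x2] while keeping
    h(x1) and h(x2) of opposite sign, so a root stays inside the bracket."""
    xmid = (x1 + x2) / 2.0            # average the two endpoints
    if (h(xmid) > 0) == (h(x1) > 0):  # midpoint shares x1's sign...
        return xmid, x2               # ...so replace x1 with the midpoint
    return x1, xmid                   # otherwise replace x2 with the midpoint

# One step on h(x) = x**3 with bracket [-1, 2]: the midpoint is 0.5, and
# h(0.5) is positive like h(2), so the bracket shrinks to [-1, 0.5]
```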
It turns out that we can compute, quite easily, the maximum number of bisection steps or iterations required to achieve any desired resolution. If x2 and x1 are the endpoints of the interval before I begin the bisection algorithm, then I take their absolute difference and divide it by epsilon. Then I take the base-two logarithm of that quantity to find the number of times I have to divide the interval in half to achieve the resolution epsilon. But since I need an integer result, I round up to the next integer. That result is the maximum number of iterations required by the bisection search to achieve any given desired resolution. And that is really all that the bisection algorithm is. For the remaining part of this lesson, I'm going to show you how to implement the bisection algorithm in Octave code. This code has four input arguments. The first argument is a function, h, which is the function in which we are seeking a root location. The second and third inputs are the two endpoints, x1 and x2, of the interval, and we know beforehand that this interval must contain a root. The fourth input is the tolerance value, or epsilon, which is the desired final resolution. Inside the code, the first line computes the maximum number of bisection iterations required to meet this tolerance, using the equation that we've already looked at on this slide. The second line then computes a variable dx to be the width of the uncertainty interval when the function is first invoked. Now, this particular code is optimized by assuming that the nonlinear function evaluated at endpoint x1 is always negative and that the function evaluated at x1 + dx is always positive. So if the opposite condition is true using the value of x1 provided by the user, the code simply switches the roles of x1 and x2 by multiplying dx by -1 and setting x1 equal to x2.
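The iteration count described above can be computed in one line. Here is a Python sketch (the lecture uses Octave; the function name here is mine): the count is the base-two logarithm of the initial interval width divided by the tolerance, rounded up.

```python
import math

def max_bisection_iterations(x1, x2, tol):
    """Number of halvings needed before the interval width |x2 - x1|
    shrinks to the tolerance tol: ceil(log2(|x2 - x1| / tol))."""
    return math.ceil(math.log2(abs(x2 - x1) / tol))

# Interval [-1, 2] with tolerance 1e-5: width 3 must be halved 19 times,
# since 3 / 2**19 is about 5.7e-6, which is below 1e-5
print(max_bisection_iterations(-1, 2, 1e-5))  # → 19
```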
So at the end of that conditional statement, we're guaranteed that the function evaluated at x1 is negative and the function evaluated at x1 + dx is positive, noticing that dx might be a negative value. We're now ready for the bisection iteration: the function evaluated at x1 is negative, the function evaluated at x1 + dx is positive, and we have computed the maximum number of required iterations. This slide contains the rest of the bisection function. You can see that it is principally a loop over jmax iterations. Every iteration, we begin by dividing the uncertainty width dx by two, and we compute the midpoint by adding x1 to this uncertainty width. We evaluate the nonlinear function at the midpoint, and then we query its sign. If its sign is negative, then we want to replace x1 with xmid. If its sign is positive, then really we are replacing x2 with the midpoint, but we've already done that effectively by computing dx such that x1 + dx is equal to the midpoint; so we don't have to do anything when the sign is positive. We continue looping for the jmax iterations, and when we exit the loop, we are guaranteed that the root is between x1 and x1 + dx, and our best guess of its location is halfway between those endpoints. So we return to the user an estimate of the root location equal to x1 plus half of the width dx. That's all there is to the bisection algorithm. Here, I share with you an example of how to invoke or execute this algorithm. The function that's passed to the bisection algorithm as the first argument could either be a function file, something like function.m, or it could be an inline (anonymous) function. Here I'm using an inline function: I define h to be a function of x that computes the value x cubed. Then I invoke the bisection function with this inline function h, looking for a zero crossing between x = -1 and x = 2 with a resolution of 10 to the power -5.
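Putting the pieces together, here is a Python sketch that mirrors the structure just described for the lecture's Octave routine. The names jmax, dx, and the dx-based optimization follow the description above, but the details are my reconstruction under those assumptions, not the lecture's exact code.

```python
import math

def bisection(h, x1, x2, tol):
    """Find a root of h between x1 and x2 to resolution tol.
    Assumes h(x1) and h(x2) have opposite signs on entry."""
    # Maximum iterations needed to reach the requested resolution
    jmax = math.ceil(math.log2(abs(x2 - x1) / tol))
    dx = x2 - x1                  # width of the uncertainty interval
    if h(x1) >= 0:                # ensure h(x1) < 0 and h(x1 + dx) > 0
        dx = -dx                  # (dx may be negative after this swap)
        x1 = x2
    for _ in range(jmax):
        dx = dx / 2.0             # halve the uncertainty width
        xmid = x1 + dx            # midpoint of the current interval
        if h(xmid) <= 0:
            x1 = xmid             # keep h(x1) negative
        # if h(xmid) > 0, nothing to do: x1 + dx already brackets from above
    return x1 + dx / 2.0          # best estimate: center of final interval

# The lecture's example: root of h(x) = x**3 between -1 and 2, tolerance 1e-5
root = bisection(lambda x: x**3, -1, 2, 1e-5)
```

The returned estimate lands within the requested tolerance of the true root at x = 0, matching the small nonzero value the lecture reports.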
We know that the actual root of this function is x = 0, and in this particular example the function returns an estimate of the root equal to about -9 times 10 to the power -7. So this estimate of the root turns out to be imperfect, but it is accurate to the specified tolerance. You can experiment with this method and try different tolerance values, and you will find that you can make the tolerance smaller and get a better root location estimate. Or you could use different endpoints, since sometimes that also, just by chance, improves the root estimate. To summarize this lesson: we needed to define a nonlinear search algorithm that will find a root of a nonlinear equation, in order to solve for a value of battery cell input current that will cause the cell to encounter a pre-specified future voltage limit when we're using a full cell model, something we could not do before. The enhanced self-correcting cell model is an excellent candidate to use, and it turns out that it's linear enough that this simple bisection search method works quite well. In this lesson, you've learned how the bisection algorithm operates, and you've also learned how to write bisection code in Octave. I also shared with you an example of how to invoke this bisection code to find the root of a simple nonlinear equation. In the next lesson, you will learn how to apply this bisection method to a full battery cell model in order to find the power limits.