In this session we're going to apply much of what we've learned about higher-order functions in one coherent example. The task is to find fixed points of functions, and the application will be our well-known square root example. So what is a fixed point? A number x is called a fixed point of a function f if f(x) = x. Let's see that in a graphical example. Suppose we have the function that maps x to 1 + x/2. The graph of that function would look like this; that would be the diagonal, and that would be one. The fixed point is where the diagonal meets the graph of the function — here, that's two.

It turns out that for some functions we can locate a fixed point by starting with an initial estimate and then applying f repeatedly. So we start with x, then we compute f(x), then f(f(x)), and so on. For these functions, at some point the value will not vary any more, or the change will be sufficiently small, so that we can declare that we have approximated the fixed point. For functions that satisfy this property, we can write the following fixed point algorithm. I'll go into the worksheet and develop it directly.

If you look at it closely, that algorithm is a close relative of what we did for square root. We have essentially an iteration method that tests whether our guess is close enough to the result, and if it's not, we iterate; and the improve method that we had in square root is now this function f that we pass as a parameter. So let's try this method out on our function that maps x to 1 + x/2: we call fixedPoint with x => 1 + x/2 and initial value one, and we get something very close to two, as expected. Now that we know how to compute fixed points, can we somehow get new insight into our square root function? Previously I gave you Newton's method essentially as a fact.
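As a worksheet-style sketch, the fixedPoint function from the lecture could look like this (the tolerance value 0.0001 and the relative-error closeness test are conventional choices, not dictated by the transcript):

```scala
val tolerance = 0.0001

// Two values are "close enough" when their relative difference is tiny.
def isCloseEnough(x: Double, y: Double): Boolean =
  math.abs((x - y) / x) < tolerance

// Apply f repeatedly from an initial guess until successive values stabilize.
def fixedPoint(f: Double => Double)(firstGuess: Double): Double = {
  def iterate(guess: Double): Double = {
    val next = f(guess)
    if (isCloseEnough(guess, next)) next
    else iterate(next)
  }
  iterate(firstGuess)
}

fixedPoint(x => 1 + x / 2)(1.0)   // very close to 2, as expected
```

The curried parameter list (first the function f, then the initial guess) makes it convenient to partially apply fixedPoint later on.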
That's the sequence that you have to use for your iterations. Can we somehow derive that sequence? Well, in fact, yes. Here's a specification of the square root function. What is the square root of x? It is the number y such that y * y = x — that's what it means to be a square root. Now we can do some simple algebraic manipulation: we divide both sides of this equation by y, and we get that sqrt(x) is the number y such that y = x / y. But what that means is that sqrt(x) is a fixed point of the function y => x / y, because if you plug y in and get x / y back, and x / y is the same as y, then that equation is satisfied. So a solution of the equation is the same as a fixed point of this function.

With that knowledge, we can try to revisit our implementation of square root and make it simpler by just plugging this into our fixedPoint function. That would give: sqrt(x) equals the fixed point of the function that maps y to x / y, with some initial value, 1.0. Let's try that out in the worksheet: def sqrt(x: Double) = fixedPoint(y => x / y)(1.0). And let's try it with sqrt(2). Unfortunately, we get an infinite computation — this doesn't converge. Why not? What happens?

To find out more, we can use println as a debugging aid. Let me add a println statement at a good point. I'd say a good point is in the iterate function, where, for each iteration step, we print what the current guess is: we write "guess = " followed by the current guess. If we do that, we see that our guess oscillates between one and two forever. And if you do the computations for square root — I encourage you to trace the execution using the substitution model — you will find that this is indeed precisely what happens.
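The oscillation can be seen even without modifying fixedPoint, by iterating the function y => x / y by hand (this trace is a sketch; the lecture instead adds a println inside iterate, which prints the same sequence of guesses):

```scala
val x = 2.0
val f: Double => Double = y => x / y

// Apply f repeatedly from the initial guess 1.0 and watch the values:
var guess = 1.0
for (_ <- 1 to 4) {
  guess = f(guess)
  println(s"guess = $guess")   // prints 2.0, 1.0, 2.0, 1.0 — it never settles
}
```

Each step maps 1 to 2/1 = 2 and 2 back to 2/2 = 1, so the relative change between successive guesses never becomes small and the iteration runs forever.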
So, how can we do better? One way to control these oscillations is to prevent the estimate from varying too much, and one way to do that is by averaging successive values of the original sequence. So instead of going 1, 2, 1, 2, we take the average of two successive values — that gives us 1.5 — and that sets us on the right path to convergence. Let's try that. Instead of saying x / y, we take the average of the two values, which is (y + x / y) / 2. And with that we are back to the square root function that we've seen in the first week. In fact, if we expand the fixedPoint function and plug in the definition of this average-damped function for square root, we find that we get back a very similar square root algorithm to the one we saw last week. So it's no surprise that we get the same results back.

The previous examples have shown that the expressive power of a language is greatly increased if we can pass functions as arguments. I'm going to show you next that it's also very useful to have functions that return other functions, again with fixed points as the example. Let's go back to our observation that sqrt(x) is a fixed point of the function that maps y to x / y. We have seen that the iteration doesn't converge that way, but we could make it converge by averaging successive values. This technique of stabilizing by averaging is general enough to merit being abstracted into its own function. Let's see what this function would look like. We'll call it averageDamp. It takes an arbitrary function f from Double to Double and a value x of type Double, and it computes the average of x and f(x). Let's do an exercise: given fixedPoint and averageDamp, can you write a square root function that uses both?
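A sketch of averageDamp, following the signature described above, together with a quick look at what it does to the oscillating function for x = 2:

```scala
// averageDamp stabilizes a function f by averaging x with f(x).
def averageDamp(f: Double => Double)(x: Double): Double =
  (x + f(x)) / 2

// Damping the oscillating function y => 2 / y:
val damped: Double => Double = averageDamp(y => 2.0 / y)
damped(1.0)   // (1 + 2) / 2 = 1.5 — instead of jumping straight to 2
damped(1.5)   // ≈ 1.4167, already heading toward sqrt(2)
```

Because averageDamp is curried, partially applying it to a function yields a new (damped) function — exactly the "function returning a function" pattern the lecture is building toward.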
So, let's see how we would solve this exercise. I've given you the averageDamp function that we developed on the slide. What would sqrt be now? Well, sqrt would be the fixed point of the function that maps y to x / y, with initial value one — but that was the version that didn't converge. So what we do is apply averageDamp to that function. And if we run it, we get the expected square root result. So what averageDamp is, is a function that takes a function — namely the function that is at the root of the square root specification — and returns another function, namely essentially the same iteration but with average damping applied. It's a function that maps a function to another function.

If you look at this definition of sqrt, I think you would agree that it's probably very difficult to write something clearer and shorter than this very definition. It came straight from the specification of what it means to be a square root: we derived that solving the equation for square roots means taking the fixed point of this function; we determined that we needed to take averages to dampen the oscillation, so we used the averageDamp function in the middle; and we were done.

In summary, we saw last week that functions are essential abstractions, because they allow us to introduce general methods of computation as explicit, named elements in our programming language. And this week we've seen that these abstractions can be combined with higher-order functions to create new abstractions. That's a very powerful way to compose and abstract programs. As a programmer, it's always good to look for opportunities to abstract and reuse. It's not true that the highest level of abstraction is always the best one — sometimes it's actually better to stay a little bit more concrete.
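Putting the pieces together, one plausible complete worksheet looks like this (the helpers from earlier are repeated so the snippet is self-contained; the tolerance is again an assumed value):

```scala
val tolerance = 0.0001

def isCloseEnough(x: Double, y: Double): Boolean =
  math.abs((x - y) / x) < tolerance

def fixedPoint(f: Double => Double)(firstGuess: Double): Double = {
  def iterate(guess: Double): Double = {
    val next = f(guess)
    if (isCloseEnough(guess, next)) next
    else iterate(next)
  }
  iterate(firstGuess)
}

def averageDamp(f: Double => Double)(x: Double): Double =
  (x + f(x)) / 2

// sqrt(x) is the fixed point of the *damped* function y => x / y.
def sqrt(x: Double): Double =
  fixedPoint(averageDamp(y => x / y))(1.0)

sqrt(2)   // ≈ 1.4142
```

Note how the definition of sqrt reads almost exactly like its derivation: the fixed point of y => x / y, with average damping applied to make the iteration converge.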
But it's important to know the techniques of abstraction so that you can use them when they're appropriate.