So now, as Bryan was saying, in linear systems, if we had an integral term, we can actually make this quite robust to uncertainties. Because if you have a steady-state error and you're integrating that state, and that state doesn't go to zero, you're integrating something finite. Over time it gets bigger and bigger and bigger. That's essentially what integral does. So it's a great measure. It's not quite what people would call adaptive control, but it is a way that the system learns somewhat, through the integral terms, what this disturbance is, and it will get there. So I'm going to show you now: for linear systems, it's fairly easy to add those terms. Then you look at the new characteristic equation, make sure your roots are still on the left-hand side, and off you go. With Lyapunov theory, this gets a little trickier, because we don't just want to do this for the linearized closed-loop dynamics. I'd like to have a control that's globally, nonlinearly stabilizing on the whole system. So the trick is, instead of just taking the integral of the attitude error, I'm going to take the integral of K times the attitude error plus the inertia times the error angular acceleration. Who came up with this? A professor at A&M actually wrote the paper on this stuff; I modified it to work with the MRPs. I don't know how he came up with this one, but I think it's inspired by sliding modes. Some of you have heard of sliding mode controls. They're a way that you have this extra measure, a sliding surface, which is a combined measure of states and rates, and then you guarantee some properties on it. This is inspired by that a little bit. So in a linear system, we would just have this K sigma integrated. Here, we're going to have both terms, because otherwise we can't get our Lyapunov analysis to give us V dots that are really negative definite, or negative semi-definite.
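The intuition in the first part of this passage, that a persistent error fed through an integrator grows without bound and so eventually forces a response, can be sketched numerically. This is a minimal scalar example with made-up numbers, not the lecture's actual system:

```python
def accumulate_integral(error, dt, steps):
    """Integrate a constant steady-state error over time.

    If the error never goes to zero, the accumulated term grows
    linearly with time -- the mechanism that lets integral feedback
    eventually push back against a persistent disturbance.
    """
    total = 0.0
    for _ in range(steps):
        total += error * dt  # simple Euler accumulation
    return total
```

Running twice as long doubles the accumulated term; only a zero steady-state error stops the growth.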
If you just put feedback on this, it might seem stable, but I don't have any analytic guarantees that it's going to be stable. You'd be amazed at how much control design is done ad hoc: just throw in feedback, let's see what happens, and there's no analysis behind it. Here we want a rigorous analysis of what happens. So we have the z measure; z basically becomes an extra state that we want to make sure is driven to zero when there are no external torques. It shouldn't impact the Lyapunov function we had before: we had our rate measure for tracking and our attitude measure. Now I'm adding, with this symmetric positive-definite K_I matrix, a z-transpose-z term with a gain. That's it. Yes, question? No? Okay, so that's the new Lyapunov function. This first part we've just done, so you've already seen that math; I'm just going to show you the differential math on top of that, another layer. So if we differentiate this, it turns out you can factor out not delta omega, but delta omega plus z. Delta omegas appear here and here, and we differentiate this one — that's how the terms in here appear. And I'm going to set this equal to this negative semi-definite function again. The sigmas don't appear, so you know right away it's negative semi-definite. But you could also have a delta omega equal to minus K_I times z, which would also drive it to zero. So it's negative or zero; it's negative semi-definite, which gives us stability. And to achieve that, you make the control equal to this, and that's what gets plugged in. Compared to the earlier control, you regain the proportional and rate feedback. There's the same feedforward compensation: some parts feed back the dynamics, the linear part, with feedback compensating the known torque. All of this is the same as the control we derived this morning. The new term that appears is this one. That's what gives us an integral term that works while we can still guarantee global stability.
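As a rough single-axis sketch of how the new z term augments the earlier proportional-plus-rate control — the symbols, scalar form, and gain placement here are illustrative stand-ins, not the exact matrix expression on the slides:

```python
def control_torque(sigma, delta_omega, z, K, P, K_I, feedforward=0.0):
    """Single-axis sketch of the augmented control.

    -K*sigma and -P*delta_omega are the proportional and rate feedback
    from the earlier control; -P*K_I*z is the new integral-like term;
    'feedforward' stands in for the known-dynamics compensation.
    """
    return -K * sigma - P * delta_omega - P * K_I * z + feedforward
```

With zero errors and zero z the command is just the feedforward, and each error channel pushes the torque in the opposing direction, as expected.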
To compute the z, you can see it's the integral of attitude, okay, but it's also the time integral of a time derivative. That works for the derivation, but it's a tedious thing to implement. You don't want to take signals, differentiate them, and then integrate them again — why would you do all that? It seems like very much wasted effort, right? So this part here, to implement it, you can actually just integrate directly. This is a constant, and the integral of that derivative is this quantity at the current time minus at the initial time. So you can rewrite the z variable like this if you want, and that's an easier implementation. So now I put that definition in there for z, and this is how the control rewrites. I still have the proportional term here, and I have the rate feedback here, but it's not just the P matrix — the K_I term impacts the rates here too. So from a performance point of view, this gives you your feedback, where K comes in, P comes in, and K_I comes in. This is the term for the initial rates, which you could include. And the rest of it is the classic control that we've had. So you can see, in terms of gains and performance, it's not cleanly separated into proportional, rate, and integral gains — with this setup, it kind of mixes. But it allows us to still guarantee global stability. So it's a modified, PID-like nonlinear control. Now, for the V dot that we set for this system, we had this function. V dot will go to zero when this bracketed term goes to zero, because P is a symmetric positive-definite matrix. So okay, we know delta omega has to become minus K_I times z at some point. Then what we can do is use the Mukherjee and Chen approach: take the second derivative of this, take the third derivative, evaluate them on this set, and a bunch of algebra later you end up with this expression. Which is pretty cool, because that is negative definite in terms of the sigmas.
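The implementation shortcut described here — replacing "integrate a time derivative" with a simple difference of endpoint values — can be checked numerically. This is a scalar sketch with a made-up smooth signal (delta_omega(t) = sin t, sigma(t) = cos t) and made-up inertia and gain values:

```python
import math

def z_two_ways(I, K, t_end, n_steps):
    """Compare the tedious form of z (numerically integrating K*sigma plus
    the time derivative of I*delta_omega) with the easy form (integrate
    only K*sigma, then add I*(delta_omega(t_end) - delta_omega(0)) directly).

    Made-up signals: delta_omega(t) = sin(t), so its derivative is cos(t),
    and sigma(t) = cos(t).
    """
    dt = t_end / n_steps
    z_tedious = 0.0
    z_sigma_int = 0.0
    for k in range(n_steps):
        tm = (k + 0.5) * dt            # midpoint sample for the quadrature
        ddomega_dt = math.cos(tm)      # derivative of sin(t)
        z_tedious += (K * math.cos(tm) + I * ddomega_dt) * dt
        z_sigma_int += K * math.cos(tm) * dt
    # integral of a derivative = endpoint difference, no differentiation needed
    z_easy = z_sigma_int + I * (math.sin(t_end) - math.sin(0.0))
    return z_tedious, z_easy
```

Both forms agree to numerical-integration accuracy, which is the point: the endpoint-difference version needs no differentiation of measured signals.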
And that's nice, because before I was only sure what happens to this combination of states, but here my sigmas will go to zero. And if your attitude states go to zero, you can also argue from smoothness that the rates must go to zero. This is not a weird system with a lot of chatter or infinite jumps; this is a nice continuous control. So if that happens, this must go to zero. And if this part is zero, that means the integral measure at the end would also have to go to zero. But this is without external disturbances — we know everything. So this just shows you the system would actually be globally asymptotically stabilizing if we know everything that's in there. What we want to do next is: what if we have an unmodeled torque? So you go through the same steps. You add that one delta L; it's just one extra term that appears in the equation. As before, we had minus P omega squared and then a P dotted with this, written slightly differently. The same arguments hold there. And what we can conclude here, basically, is that this sum of states has to be bounded, because if this is a bounded influence, at some point the quadratic part will always overrule the first-order part. You can't just be spinning up to infinity. So if that is bounded, then in particular z is going to be bounded. And the only way z can be bounded is if this term here eventually goes to zero — if these states don't go to zero, then you're integrating off to infinity, and z would grow to be infinite. So enforcing that z is bounded actually tells me sigma has to go to zero, and therefore delta omega also goes to zero. Which is a nice result. So now, with unmodeled torques, I can actually argue that I will have asymptotic convergence of the error, as long as the torque is constant. That was one of the requirements that we have. If it's not constant, then it doesn't quite work.
But the z will not go to zero with unmodeled torques. So, following the earlier steps, you can do this, plug it into the closed-loop dynamics, and evaluate it on these sets. We know delta omega is going to go to zero. In the end, you can predict what the z variable is going to be. If this is some unmodeled torque, and I know what the inertia is and I know what this gain is, this is the value it will converge to. So your integral term, as it builds up, in effect learns what that external torque is. And it brings you to zero, because at zero it has to compensate: something's pushing me in one direction with one Newton-meter, so this integral term has to push back one Newton-meter, otherwise you wouldn't stay in place. So it's a simple form of adaptation — it's learning what the external force is. Which is nice. So we run the same simulation we had, with the same external torque, and we want to drive this to zero. Before, the rates went to zero but the attitudes didn't. Now, using this new control with the z integral term in there, this large-departure tumble recovers and comes down, and you can see our attitudes do go to zero, which is nice. And from the analysis, we're predicting — I'm using a single inertia here — that delta L over I K_I should give me these z values. That's where this integral term should converge to. And I believe, yup, there we go: there are the z's plotted out, and they actually match the 1.6, 3.3, and minus 3.3. So the numerical results match up very well with the offset we analytically predicted. And the control, as expected, doesn't converge to zero here. It can't, because when it's holding the heading, there's always a torque that's pushing you out of that heading. So it has to converge to a non-zero torque, and that can be a challenge if you do this, for example, with reaction wheels. What do you think, Brian?
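The predicted offset can be checked with a one-line scalar formula, following the per-axis relation quoted in the lecture, z converging to delta L over I times K_I. The numbers below are made up for illustration, not the simulation's actual torque or gain values:

```python
def predicted_z(delta_L, inertia, K_I):
    """Predicted steady-state value of the integral state z under a
    constant unmodeled torque, per axis (scalar-inertia sketch):
    z -> delta_L / (inertia * K_I).
    """
    return [dL / (inertia * K_I) for dL in delta_L]
```

Plugging in the actual simulation's torque components, inertia, and gain would reproduce the plotted 1.6, 3.3, and minus 3.3 values.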
If I'm using reaction wheels to create this torque, and you have to have a non-zero torque all the time just to hold the heading, any problems you can envision? >> A couple. >> A couple? [LAUGH] Give me one. >> Your reaction wheel is always going to be changing, moving, changing its omega vector. >> Yep. Let's ignore the gyroscopics. If I'm putting a reaction wheel torque on there, I'm spinning up the wheels more and more. I need continuous torque, and that means I'm continuously spinning up. At some point, the wheels will literally fall apart or break at the maximum speeds you can reach. So these kinds of controls are great, but let's say we use thrusters instead — no gyroscopics, I'm just using thrusters — and I have to continuously thrust to generate this torque. >> Run out of fuel. >> You run out of fuel, right? So you have to be very careful. This is great if you need precision pointing and you want disturbance rejection. Orbit insertion is a big one, if you're doing trajectory correction maneuvers: I'm burning for 15 minutes, and I want to be really careful that I'm burning in exactly the right direction. Maybe for that time period this makes sense. Because if that thruster works stronger than you think — if it's always running 10% hot — you'd otherwise get a constant offset, and you want to be robust to those uncertainties. This would be a perfect control to make that quite robust. But you wouldn't do this on Kepler, staring at stars for months and months at a time. Those wheels would be spinning up so quickly, or the fuel would be depleted very, very quickly. So this is nice math, but keep in mind the realities of flying in space and the resources that are very limited. So, good. We've now shown stability, we've shown global stability, and we've also talked about asymptotic stability. And we've talked about unmodeled torques: without the integral feedback you have steady-state offset errors, which we can predict. Maybe you can live with that.
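The wheel-saturation concern raised above can be quantified with a back-of-the-envelope calculation: under a constant demanded torque, wheel speed grows linearly until the speed limit. The wheel inertia and speed limit below are illustrative, not real hardware specs:

```python
def time_to_wheel_saturation(torque, J_wheel, omega_max, omega_0=0.0):
    """Time until a reaction wheel saturates under constant torque.

    Constant torque means constant angular acceleration, so the wheel
    speed grows linearly: Omega(t) = Omega_0 + (torque / J_wheel) * t.
    Returns the time [s] at which Omega reaches omega_max.
    """
    return J_wheel * (omega_max - omega_0) / torque
```

For example, a 0.05 kg m^2 wheel limited to 600 rad/s and absorbing a constant 0.01 N m saturates in 3000 seconds — under an hour, which is why a continuously non-zero torque is a problem for long staring campaigns.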
Pick the right gains, as you say, and you can control it to some degree. But with unmodeled errors it's only Lagrange stable. If you want Lyapunov stability, you have to do something smarter, like an integral term. I've shown you how you could implement that.