Hello, and welcome to Glue Lecture Four. I'm Smriti Chopra, your instructor for the Glue Lectures, and let's get into it. So this lecture is titled Controllability and Observability, because this week, with Dr. Egerstedt, you guys learned about what a system is and then, more importantly, how to see if it's stable or unstable. And if it's unstable, how do you stabilize it through controls? But before you can do that, you need to see if the system is controllable. And let's say you did stabilize it. Now you want to see if it's observable, because, you know, you don't have state information, so you need to somehow bring in the output, etc. And then we see how you put everything together. So in this lecture, we're just going to go over one big example, clearing up all these concepts for you guys. Okay? So let's start with system stability. Let's say this is my robot, and an arm is all that we're going to consider, with theta 1 and theta 2, the two joint angles of this arm. And I'm going to say that my state is this guy here, which is the angles and their velocities. And my A matrix is this. This is how my state relates to its derivative, right? This is given to us. So this is what our system looks like, this guy, given by this equation here. There you go. So now what are we going to do? We're going to check if the system is stable. Does it blow up as time goes to infinity, or is it going to be stable on its own? And the way to do that is by checking the eigenvalues of A, right? So in Matlab you can simply type eig(A), or you can sit and find the eigenvalues yourself. Either way. Here we just put it in Matlab, and we see that all the eigenvalues of A are 0. And we know from our lectures that when the eigenvalues are all 0 like this, repeated at the origin, your system is unstable. So we found out that for this system, this particular x and A, our guy is unstable. Okay, so that means we need to introduce control, right? But how do you introduce control?
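As a quick sketch of this stability check: the slide's exact A isn't shown in the transcript, so the state ordering x = [theta1, theta1 dot, theta2, theta2 dot] and the double-integrator-style A below are assumptions based on the description, but the eigenvalue test itself is exactly what's described.

```python
import numpy as np

# Assumed state: x = [theta1, theta1_dot, theta2, theta2_dot].
# Each angle's derivative is its velocity; with no control yet,
# the accelerations are zero.
A = np.array([[0., 1., 0., 0.],
              [0., 0., 0., 0.],
              [0., 0., 0., 1.],
              [0., 0., 0., 0.]])

eigvals = np.linalg.eigvals(A)
print(eigvals)  # all zeros: repeated eigenvalues at the origin, so unstable
```

This is the Python equivalent of typing eig(A) in Matlab.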
We know this pretty equation here, x dot equal to Ax plus Bu. But what does it really mean? What is B? What is u? u, of course, is our control signal. Well, we need to first find out what it is that we can control about the system, let's say through putting in motors or actuators, etc. So we're going to assume that we can control the acceleration of my first joint angle, theta 1. And if we do that, then we see that our B matrix turns out to be this guy here. Why? Because now, when you put B here in this equation of yours, you'll see that the input u shows up next to theta 1 double dot. That means you can influence the acceleration of my theta 1, right? That's where my input shows up. So that's how you generate your B matrix. Now that we have this, great. But can we, with this particular choice of B, actually control this guy or not? That is, do we see if it's controllable or not? And for this we have a very simple test, which is you create this matrix. This is the controllability matrix, right? And then you simply check the rank of this matrix. So here we're going to make this matrix first. And you guys should be comfortable with matrix multiplication, etc. Not for maybe such big systems, but for, let's say, a two by two matrix, you should be able to do it. If you want, we can really fast go over one example. For example, this is your A, right: 0000, 1000, 0000, 0010. And then I'm going to find AB. So let's put B here, which is 0100. And now, because my A is 4 cross 4, and my B is 4 cross 1, my resulting matrix is going to be 4 cross 1, right? And let's multiply quickly: for the one row of A whose 1 lines up with the 1 in B, we get a 1 in the product, right? And in the other rows there are no 1s that line up, so you get a 0, 0, and 0. So you see how we got this guy here, which corresponds to this vector in the controllability matrix.
So you should be able to do AB, A squared B, etc., quite easily, at least for 2 cross 2 matrices. And now we're going to check the rank of this guy, and it turns out that the rank here is 2. But we have this condition that the matrix needs to have full rank in order for the system to be controllable. That means it should have all linearly independent columns, right? And what do you mean by full rank? Basically, your rank should be equal to n, where n is the number of states that you have. Here n is 4, right? So clearly, this guy has rank 2, not equal to 4, and that's why this guy is uncontrollable. And in case you don't want to sit and, you know, compute the rank, etc., of all these things, you can even do this in Matlab. And this is the code for it. You create your A matrix, create your B matrix, then the ctrb function will give you the controllability matrix gamma. And then you just do rank of this guy, and out pops 2, which is not equal to 4, we know that, and so it's uncontrollable. Okay? All right, perfect. So now what do we do? It's uncontrollable. Should we just give up? No. We need to introduce more controls somehow, right? And we saw earlier how you introduce control: by putting in actuators, putting in motors, seeing what else you can control. So here, in this case, let's say now we can control theta 1 double dot and theta 2 double dot, the acceleration of theta 2 as well. And of course, then what will happen to your B matrix? This is how the B matrix will look, right? And to double-check again, put it here. See if, in fact, u shows up for both theta 1 double dot and theta 2 double dot, and then we know that u is influencing these accelerations, right? Okay, so now we have this new guy, and we have to check if this is controllable or not. And we do the same thing again. We create the controllability matrix. And we're going to do this in Matlab, because we don't feel like computing everything. And this is the code again for it.
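Since the Matlab snippet itself isn't reproduced in the transcript, here is a hedged Python stand-in for the ctrb-and-rank test on the single-input case (the A and B below are assumptions matching the description: u drives theta 1 double dot only):

```python
import numpy as np

# Assumed state: x = [theta1, theta1_dot, theta2, theta2_dot]
A = np.array([[0., 1., 0., 0.],
              [0., 0., 0., 0.],
              [0., 0., 0., 1.],
              [0., 0., 0., 0.]])
B = np.array([[0.], [1.], [0.], [0.]])   # u shows up next to theta1 double dot

# Controllability matrix Gamma = [B, AB, A^2 B, A^3 B]
n = A.shape[0]
Gamma = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

rank = np.linalg.matrix_rank(Gamma)
print(rank)  # 2, not equal to n = 4, so uncontrollable
```

In Matlab the same thing is Gamma = ctrb(A, B); rank(Gamma).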
Again, very similar to what we did earlier. This time your controllability matrix is going to be much bigger, because you have two columns here in B. So your AB is going to be a 4 cross 4 times a 4 cross 2, which gives you 4 cross 2 instead of 4 cross 1. So each guy here is going to be your, you know, B, AB, A squared B, A cubed B. So you see, now your matrix has expanded. And anyway, now you find the rank of this guy, and it turns out this rank is 4, which is equal to the number of states we have, and so we are full rank, which is great, because that means we are controllable. Okay, so now that we are controllable, for this particular system, what does it mean? It means that I can use u to make my original system, which was x dot equal to Ax and unstable, stable, right? And how do we do that? We use state feedback. That is, we say u is equal to negative Kx. And now we're going to try and design our K so that we can force the eigenvalues to make sure that this system is stable, right? Okay, let's go over this with an example, because finding K for this huge 4 cross 4 matrix, etc., is really cumbersome. We'll just use a simpler system. Let's shave off one angle completely. So let's just think of our state as theta 1 and theta 1 dot. So we've just gotten rid of theta 2. No problem. So we have this new guy here, but again, now we have to do this entire charade again. First we need to find out, is this guy stable or not? And I'm just going to give you the answer. You should do it yourself, just to convince yourself. But yeah, this guy is unstable. So I'm going to introduce control through this plus Bu thing. And again, I'm going to say I can control the acceleration. I'm going to get this as my B matrix. And now, is this controllable? Because that's the second question you ask, right? And I'm going to tell you it's controllable. You guys can find out on your own. Just make the controllability matrix. Do it. It's good exercise.
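A sketch of both checks just described, under the same assumed matrices as before: the two-input B (one input per joint acceleration), and the "good exercise" on the shaved-down two-state system.

```python
import numpy as np

# Assumed state: x = [theta1, theta1_dot, theta2, theta2_dot]
A = np.array([[0., 1., 0., 0.],
              [0., 0., 0., 0.],
              [0., 0., 0., 1.],
              [0., 0., 0., 0.]])
B = np.array([[0., 0.],
              [1., 0.],
              [0., 0.],
              [0., 1.]])   # input 1 -> theta1 double dot, input 2 -> theta2 double dot

n = A.shape[0]
# Gamma = [B, AB, A^2 B, A^3 B] is now 4 cross 8, since B is 4 cross 2.
Gamma = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
rank_full = np.linalg.matrix_rank(Gamma)
print(rank_full)  # 4 = n, so full rank: controllable

# The exercise: the simpler system with x = [theta1, theta1_dot],
# controlling the acceleration.
A2 = np.array([[0., 1.], [0., 0.]])
B2 = np.array([[0.], [1.]])
Gamma2 = np.hstack([B2, A2 @ B2])
rank_small = np.linalg.matrix_rank(Gamma2)
print(rank_small)  # 2 = n, so full rank: controllable
```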
And now I'm going to come back to, okay, state feedback. We're going to design state feedback, but for the much simpler system, so we can actually work it out ourselves instead of putting it in Matlab. Okay, so how do we do this? We know that our system is controllable, and we have this guy, u equal to negative Kx. So let's put u into this equation of our x dot, and we get this new guy here, right, x dot is equal to this big guy here. Okay. So we just put the values in. This is A, this is B, and your K is this 1 cross 2 matrix, k1 and k2, two gains. And what you finally get after solving this entire thing is this new guy here, or simply A dash, right? So you have this new A dash, where you've put in your control and everything, which is this matrix here. And we all know that if you are given a system x dot equal to Ax, or A dash x, or whatever, to see if it's stable or not, you check the eigenvalues of A or A dash. In this case, we have control over k1 and k2, so we're actually going to force the eigenvalues, through our k1 and k2, to be where we want, to make sure that the system is stable. And if you remember, the condition is that the real parts of the eigenvalues should be negative, that is, in the left half plane, right? So we're going to force, through our k1 and k2, the eigenvalues to be in the left half plane. Okay, so how do you find the eigenvalues? In the first slide we pretty much just put it in Matlab and said eig(A). But really, how you do it is, you find this guy here, the determinant of A dash minus lambda I, which, if you follow it, turns out to be this. So basically, when you actually solve the system, A dash minus lambda I is just going to be minus lambda and 1 in the first row, and minus k1 and minus k2 minus lambda in the second row. And then you find the determinant of this guy and solve it, and you'll get this equation right here, the characteristic polynomial lambda squared plus k2 lambda plus k1. Okay.
Another exercise that you guys should be doing on your own. And it's good stuff. Okay. So now that we have this characteristic polynomial, what we're going to do is pick our two favorite eigenvalues in the left half plane. And let's say we pick negative 1 and negative 2. So now, this is what we got, and this is what we want, right, which is just this guy here. So we are simply going to compare these two guys and get our values for k2 and k1, just by simple comparison of the two polynomials. Because what happens now is that, with this particular choice of k2 and k1, my system is actually going to be stable. Just one quick thing. So yeah, when you put this k2 and k1 here, in this guy, right, in this matrix, now your A dash is definitely going to have these eigenvalues. This is what it really means. And because we chose these eigenvalues to be in the left half plane, we know that this new system is going to be stable. Okay. So yeah. There you go. We have state feedback on a simpler system. We chose our K, and now we know that this guy is stable. So, woo-hoo. But now, we come to another annoying thing, which is that we don't know our state. We are very happy saying that, oh, u should be negative Kx, but we don't have this x. What we instead have is x hat, an estimate of our state. We don't even have that yet, but this is what we are really using. And now we need to find this estimate of our state. And for that, again, now we have to introduce this concept of output. What can we see? What are our sensors? What can we measure? That's why the whole y matrix and the whole thing comes in. Why? Because now we say, assume we can see this guy here, theta 1. So, accordingly, my C matrix is going to be this, and my y is equal to Cx. Again, why? Because when you put the C guy inside here, you'll see that your y becomes simply theta 1, and that's what we can see, right?
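The polynomial comparison just described can be sketched out and double-checked numerically. Matching lambda squared plus k2 lambda plus k1 against (lambda + 1)(lambda + 2) = lambda squared plus 3 lambda plus 2 gives k1 = 2 and k2 = 3 (these values follow from the chosen eigenvalues; the slides themselves aren't shown in the transcript):

```python
import numpy as np

# Simpler system: x = [theta1, theta1_dot], u drives the acceleration.
A = np.array([[0., 1.], [0., 0.]])
B = np.array([[0.], [1.]])

# Comparing lambda^2 + k2*lambda + k1 with the desired
# (lambda + 1)(lambda + 2) = lambda^2 + 3*lambda + 2  ->  k1 = 2, k2 = 3.
K = np.array([[2., 3.]])

A_dash = A - B @ K   # closed-loop matrix with u = -Kx
closed_loop_eigs = np.sort(np.linalg.eigvals(A_dash).real)
print(closed_loop_eigs)  # [-2., -1.] -- exactly the eigenvalues we picked
```

Both eigenvalues land in the left half plane, so the closed-loop system is stable, as claimed.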
So we choose what we can see, we choose the sensors we have, and then we create C, and accordingly get this y equal to Cx thing. But now we come to this other whole question: is this new system observable? The new system being these guys. Is it observable? Which means, can I in fact estimate my state, x hat, based on this particular C matrix? And for that, you have a very simple test, very similar to the controllability matrix. This guy is called the observability matrix. And because we have just two states here, this is going to be a short little matrix. And you find out what it is, which you guys can. And then you check the rank of this guy. This rank is actually equal to 2. Is it full rank or not? Well, it is, because our number of states is 2, right? So yes, it is in fact full rank, which means that this guy is observable. Perfect. That is great. So yay. But what does it really mean? Again, just to reiterate, what it means is that, from the lectures, you guys remember that you can write the dynamics of your estimate of your state in this form, where L is just some gain matrix, very similar to your K matrix from controllability, right? And by saying this, what it really means is that if you were to write down the error dynamics, given this guy for your state dynamics, you get this matrix here, A minus LC, which looks very similar to the A minus BK from your earlier controllability analysis, right? So very nice. Why? Because now we can choose L the same way that we chose K, to make sure that the eigenvalues of this guy here force the error dynamics to be stable, or in fact, what that means is basically that the error does not blow up. As time goes to infinity, the error actually diminishes to 0, and that's what observability really means. Can you in fact find an L that will make sure that the error goes to 0, which means that my estimate will become equal to my state? Okay, perfect.
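A sketch of both halves of this: the observability rank test, and choosing an L for the error dynamics the same way we chose K. The observer eigenvalue locations (-3 and -4) below are an arbitrary illustrative choice, not values from the lecture:

```python
import numpy as np

A = np.array([[0., 1.], [0., 0.]])
C = np.array([[1., 0.]])     # we can only see theta1

# Observability matrix O = [C; CA] -- a short little 2 cross 2 matrix.
O = np.vstack([C, C @ A])
obs_rank = np.linalg.matrix_rank(O)
print(obs_rank)  # 2 = number of states, so full rank: observable

# Error dynamics: e_dot = (A - L C) e, which looks just like A - B K.
# With L = [l1; l2], det((A - LC) - lambda I) = lambda^2 + l1*lambda + l2,
# so placing the error eigenvalues at -3 and -4 (an assumed choice) means
# matching lambda^2 + 7*lambda + 12, i.e. l1 = 7, l2 = 12.
L = np.array([[7.], [12.]])
error_eigs = np.sort(np.linalg.eigvals(A - L @ C).real)
print(error_eigs)  # [-4., -3.] -- so the estimation error decays to 0
```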
So for the quiz, it's important that you guys are familiar at least with what is really going on, what the dynamics of my estimate and my error are, etc. Just be comfortable with understanding what it is that's happening. And now, all together: execution. Because this is the most important part, right? We have got these little blocks everywhere, controllable, observable, blah, blah, blah. But we really don't know how to put it all together. And that's what we're going to do now. So this is our system: x dot equal to Ax plus Bu, y is equal to Cx, with this particular choice of x, A, B, and C, which we've been working with all this while. Okay, and we have found out that x dot equal to Ax on its own is unstable, and then, by introducing B and C, the system is controllable and observable. Great. Controllable means I can find a K to make sure that my system becomes stable. Observable means I can find an L to make sure that I can in fact estimate my state, right? So let's put it all together. So I wake up at time t equal to t nought, where I'm at some x equal to x nought that I don't know, because I don't know my state. But I'm going to estimate my x hat to be some x hat nought, right? Just something. And I'm going to start my loop with dt increments. So I start at t nought, and I'm going to keep incrementing time in dt increments. Okay, and I'm going to read the output. Because I don't have my state information, all I have is the output, so I'm going to read what my output is giving me. Okay, then I'm going to compute my control, u equal to negative K x hat, not using the output, using the estimate, the initial estimate that we had. Simple. And remember that you've already designed K and L. Okay, so you compute your control, and then you send this control signal to your system, right? And then you update x hat using your dynamics, where x hat dot was this guy here. And this is where you use the output you read, right?
Because for control you need x hat, so now obviously you need to see what your new x hat is, and that you're going to update through this guy here. And just remember, when you're updating your dynamics, all I have here is x hat dot. How do I find out what my x hat should be at the next time instant? You know what, you're going to use the Euler approximation, which we all studied together in Glue Lecture One, I think. And of course you've been going over it with Dr. Egerstedt. Basically, your next step is your previous step plus dt times your dynamics, right? So that's what we're going to do. We're going to find x hat dot, and then we're going to just say our next guy is our previous guy, which was this estimate here, plus dt times x hat dot. And that'll give us our new guy. And now we're going to repeat. So when we come back, we read the output again, compute the new control, update x hat, etc. This is how we put all of this stuff together for execution, in your code, etc., whatever. Okay, and with that, check the forums, and good luck with Quiz Four. All the best.
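The whole execution loop just described can be sketched in a few lines. This is a minimal simulation on the simpler two-state system, under assumptions: the matrices are the ones reconstructed earlier, and the gains K and L are the values you get from placing the closed-loop eigenvalues at -1, -2 and the observer eigenvalues at -3, -4 (illustrative choices, not necessarily the lecture's):

```python
import numpy as np

# Assumed simpler system: x = [theta1, theta1_dot]
A = np.array([[0., 1.], [0., 0.]])
B = np.array([[0.], [1.]])
C = np.array([[1., 0.]])
K = np.array([[2., 3.]])        # state-feedback gain, designed in advance
L = np.array([[7.], [12.]])     # observer gain, designed in advance

dt = 0.01
x = np.array([[1.], [0.]])      # true state x0 -- unknown to the controller
x_hat = np.array([[0.], [0.]])  # initial estimate x_hat_0: "just something"

for _ in range(1000):           # 10 seconds of simulated time, dt increments
    y = C @ x                           # 1. read the output
    u = -K @ x_hat                      # 2. compute control from the ESTIMATE
    x = x + dt * (A @ x + B @ u)        #    (the real system responds; Euler here
                                        #     is just standing in for the world)
    # 3. update x_hat with its dynamics, using the output we read:
    x_hat_dot = A @ x_hat + B @ u + L @ (y - C @ x_hat)
    x_hat = x_hat + dt * x_hat_dot      # 4. Euler step: next = previous + dt * dot
    # 5. repeat

print(np.linalg.norm(x), np.linalg.norm(x - x_hat))  # both small: stabilized,
                                                     # and the estimate converged
```

Note the loop never touches the true x except through y; everything the controller does runs off x hat, which is exactly the point of the whole observer construction.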