0:00

So in the previous lecture, we did a lot of technical push-ups to end up with a description of what the solution to a general LTI system is. The reason for that is not that I really, really enjoy the derivations, even though I do, but that the solution will actually help us characterize what these systems are doing. And today, I want

to talk about stability, because, as you probably recall, when we do control design, the first order of business is to design controllers so that systems don't blow up. If they blow up, there's nothing we can do about it. The quadrotors just fall out of the air. The robots drive off to infinity. The cars smash into things.

We don't want them to blow up, because the design objectives are almost always layered in this sense. First order of business is stability. Then we want to track whatever reference trajectory or reference point we have. We also want it to be robust to parameter uncertainties, and possibly noise. And then we can wrap other objectives around that, like when you want to move as quickly as you can, or use as little energy as possible when you're moving, or things like this. But, regardless, stability is always the first order of business. So let's

start with scalar systems, no inputs. So only the A matrix now; in this case x dot is little a times x, which means that a is scalar. Well, then the solution is x(t) = e^(a(t − t0)) x(t0), and here I've simply picked t0 to be equal to 0, so x(t) = e^(at) x(0). So this is the solution. Okay, let's plot what this solution looks like. If a is positive, then x(t) starts out nicely and then, pabaah, it's blowing up as far as I can tell. So if a is positive, this system blows up. Well, if

a is negative, then e^(at) is a decaying exponential, so we get x to just go nicely down to zero. What happens if a is zero, in between these two? Well, then you have e^(0·t), which is 1, so x(t) is simply equal to x(0): x never changes. So here, it didn't blow up, but it didn't actually go down to zero either. And in fact, what we really have is a situation where three possible things can happen: you blow up, you go down to zero, or you stay put. So let's talk about these three cases. The first case is what is called

asymptotic stability. The system is asymptotically stable if x(t) goes to zero for all initial conditions x0. This fancy upside-down A is known as the universal quantifier; all we need to know is that when we see an upside-down A, the way we pronounce it is "for all" x0. So asymptotic stability means that we go to zero, and almost always we want to design our systems so that x actually goes to zero no matter where we start. That's asymptotic stability, and as you

recall, in the scalar case, a strictly negative corresponds to asymptotic stability. Then we have instability, the system being unstable. What that means is that there exists an initial condition, and the flipped E, so to speak, is the existential quantifier, which we read as "there exists." So the system is unstable if there exists some initial condition from which the system actually blows up. In the scalar case, we had a positive corresponding to

instability. And then we have something we call critical stability, which is somehow in between: the system doesn't blow up, but it doesn't go to zero either. And in fact, for the scalar system, this corresponded to the a equal to zero case. So to summarize: if you have a scalar system, then a positive means the system is unstable. a negative means the system is asymptotically stable, which is code for saying that the state goes to zero. And a zero means critically stable. Okay. Let's use this way of thinking now on the

matrix case: x dot is Ax, capital A. So now x is a vector and A is a matrix. What do we do there? Well, we can't just say, oh, A is positive, or A is negative, because A is a matrix; it's not positive or negative. But what we can do is go for the next best thing, which is the eigenvalues.
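As a quick aside before we get to those: the three scalar cases are easy to check numerically. Here is a minimal sketch (the function name and sample values are mine, not from the lecture):

```python
import math

def scalar_solution(a, x0, t):
    """Closed-form solution of x_dot = a*x, namely x(t) = e^(a*t) * x(0)."""
    return math.exp(a * t) * x0

x0 = 1.0
print(scalar_solution(1.0, x0, 10.0))   # a > 0: blows up (already about 2.2e4)
print(scalar_solution(-1.0, x0, 10.0))  # a < 0: decays toward zero
print(scalar_solution(0.0, x0, 10.0))   # a = 0: stays put at x0
```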

And, in fact, almost always, the intuition you get from a scalar system translates into the behavior of the eigenvalues of these matrices. For those of you who don't know what eigenvalues are, these are special objects associated with matrices. If I have an N-by-N matrix A and I multiply it by an N-by-1 vector v, and I can write the result as the same vector times a scalar, Av = λv, then what this means is that the way A acts on this vector is basically scaling it, and the scaling factor is given by λ. If I can find a λ and a v that satisfy this, then λ is called an eigenvalue. And it's actually not necessarily a real number; it's typically a complex number, so it's a slightly more general object than just a real number. But that's an eigenvalue, and v is known as an eigenvector. And eigenvalues and eigenvectors are really these fundamental

objects when you're dealing with matrices and when you want to understand how they behave. Whenever you think scalar first, you can almost always translate it into what the eigenvalues do for your system. The eigenvalues actually tell you how the matrix A acts in the directions of the eigenvectors, so you can almost think of them as scalar systems in the directions of the different eigenvectors. And, you know, sometimes you may want to compute eigenvalues by hand. I don't. So, if you use MATLAB, you would just write eig(A), and out pop the eigenvalues. Whatever software you're comfortable with, whether you want to use C, or Python, or whatever, there is almost always a library that allows you to compute eigenvalues, and the command is typically something like eig(A). So this would give you the eigenvalues of a particular matrix. Okay. Let's see what this actually means. Let's

take a simple example here. Here's my system matrix: A = [1 0; 0 -1]. If you take eig of this, you get one eigenvalue being 1 and the other eigenvalue being negative 1, and the corresponding eigenvectors are (1, 0) and (0, 1). Okay.
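As an aside, for a 2-by-2 matrix you don't even need a library: the eigenvalues are the roots of the characteristic polynomial λ² − trace(A)·λ + det(A) = 0. A sketch in Python (the helper name eig2 is my own, not a library call):

```python
import cmath

def eig2(a11, a12, a21, a22):
    """Eigenvalues of the 2x2 matrix [[a11, a12], [a21, a22]] via the
    characteristic polynomial lam^2 - trace*lam + det = 0."""
    tr = a11 + a22
    det = a11 * a22 - a12 * a21
    disc = cmath.sqrt(tr * tr - 4 * det)  # may be complex
    return (tr + disc) / 2, (tr - disc) / 2

# The example matrix A = [[1, 0], [0, -1]]:
l1, l2 = eig2(1, 0, 0, -1)
print(l1, l2)  # eigenvalues 1 and -1, matching eig(A) in MATLAB
```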

What does this mean? It actually means the following. Let's say that this axis is x1 and this axis is x2. v2 was (0, 1), so this is the direction in which v2 is pointing. Well, the eigenvalue there is negative 1, which means that, if you recall the scalar system, when a was negative we had stability. So if I start up here, my trajectory is going to pull me down to zero, nice and stable. And in fact, if I start down here, it's going to pull me up to zero, nice and stable. Right.

So, if I'm starting on the x2 axis, my system is well behaved. If I start on the x1 axis, I have λ1 being positive, which corresponds to little a being positive in the scalar case, which means that the system actually blows up. So, here, the system goes off to infinity. And, in fact, if I start somewhere off the axes, my x2 component is going to shrink, but my x1 component is going to go off to infinity.
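Because this A is diagonal, the two components really are independent scalar systems, so we can write the trajectory in closed form. A small sketch (the variable names are mine):

```python
import math

def trajectory(x1_0, x2_0, t):
    """Solution of x_dot = A x for A = [[1, 0], [0, -1]]: the components
    decouple into x1(t) = e^t * x1(0) and x2(t) = e^(-t) * x2(0)."""
    return math.exp(t) * x1_0, math.exp(-t) * x2_0

x1, x2 = trajectory(1.0, 1.0, 5.0)
print(x1)  # about 148: the x1 component blows up (lambda_1 = 1 > 0)
print(x2)  # about 0.0067: the x2 component shrinks (lambda_2 = -1 < 0)
```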

So this is what the system actually looks like. The eigenvectors in this case tell me what happens along the different directions of the system. So, after all of this: if I have x dot equal to Ax, then the system is asymptotically stable if and only if, where the scalar case needed little a to be negative, the real part of λ, remember that the λ are complex, is strictly negative for all eigenvalues of A. That "for all" is what asymptotic stability means for linear systems. Unstable means that there is one or more,

but one single bad eigenvalue spoils the whole bunch: a single eigenvalue with positive real part is a sufficient condition for instability. And we have critical stability only if, so this is a necessary condition, the real part is less than or equal to 0 for all eigenvalues. But where we are going to be spending our time is typically up here in the asymptotically stable domain, because what we want to do is design our systems, or our controllers, in such a way that the closed-loop system is asymptotically stable. So

we're going to somehow make the eigenvalues have negative real part. That's going to be one of the design objectives that we're interested in. And I want to point out something about critical stability: if one eigenvalue is 0 and the rest of the eigenvalues have negative real part, or if you have two purely imaginary eigenvalues, so they have no real part, and the rest have negative real part, then you have critical stability. We will actually see that a little bit later on. But the thing that I really want you to take with you from this slide is this: you look at the A matrix and you compute the eigenvalues. If the real parts of the eigenvalues are all negative, you're free and clear; the system is asymptotically stable. If one or more eigenvalues have positive real part, you're toast; your system blows up. That is bad. So, let's end with a tale of two pendula.
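As an aside, this whole recipe fits in a few lines of code. A sketch, assuming, as on the slide, that any eigenvalues sitting on the imaginary axis are not repeated (repeated ones can be more subtle):

```python
def classify(eigenvalues):
    """Classify stability of x_dot = A x from the eigenvalues of A."""
    reals = [lam.real for lam in eigenvalues]
    if all(r < 0 for r in reals):
        return "asymptotically stable"  # all real parts strictly negative
    if any(r > 0 for r in reals):
        return "unstable"               # one bad eigenvalue spoils the bunch
    return "critically stable"          # some on the imaginary axis, none positive

print(classify([-1, -2]))    # asymptotically stable
print(classify([1, -1]))     # unstable
print(classify([1j, -1j]))   # critically stable
```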

Here is the normal pendulum. If you compute the system matrix for this, you get this matrix, and the eigenvalues are j and negative j. Well, I don't know if you remember, but on the previous slide there was a bullet that said: if you have two purely imaginary eigenvalues, which is what we have here, and no more, then we have critical stability. What this actually means is that this pendulum, and clearly there is no friction or damping here, is just going to oscillate forever. It's not going to blow up, and, excuse me, it's not going to go down to zero. It's just going to keep oscillating forever and ever. It's a critically stable system. Now, let's look at the inverted pendulum

where I'm moving the base. In that case, A is [0 1; 1 0]. We already know this thing is going to fall over, right? And if you compute the eigenvalues, you get one eigenvalue equal to negative 1 and one equal to positive 1, which means that we have one rotten eigenvalue, and that eigenvalue is going to spoil the system. So this is an unstable system. So now that we understand that eigenvalues really matter, and that they really influence the behavior of the system, let's see, excuse me, how we can use this to our advantage when we do control design.
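To double-check the two pendulum examples numerically, here is a small sketch. It assumes the standard linearizations behind the lecture's numbers, A = [[0, 1], [-1, 0]] for the normal pendulum and A = [[0, 1], [1, 0]] for the inverted one, and the helper eig2 (my own name) uses the 2-by-2 characteristic polynomial:

```python
import cmath

def eig2(a11, a12, a21, a22):
    """Eigenvalues of [[a11, a12], [a21, a22]] via lam^2 - trace*lam + det = 0."""
    tr = a11 + a22
    det = a11 * a22 - a12 * a21
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

print(eig2(0, 1, -1, 0))  # normal pendulum: +/- j, so critically stable
print(eig2(0, 1, 1, 0))   # inverted pendulum: +1 and -1, so unstable
```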
