Okay, let's come back to the system expressed in matrix form. We have a system of first-order differential equations: X' = A X + F. First, let me introduce the concept of a solution. What is a solution of this system? Since the unknown is a vector, we must start with a vector function X(t) having components x_j(t). Now, a solution, or solution vector, of system (2) on the interval I: look at the system — you need the derivative of the vector X, and the derivative of X means the derivative of each component x_j(t). So for X to be a solution of system (2) on I, we naturally require that all the components x_j(t) are differentiable on I, and that X satisfies the differential equation: when you plug the vector X into the system, the equality X' = A X + F holds on the interval I. A vector X that satisfies the system on I is what we call a solution of the given system. To be a little more precise: take any fixed point t0 in I and any given constant vector X0, with components x_j^0, in the n-dimensional Euclidean space R^n. Consider the following problem: not just the system of differential equations X' = A X + F, but also one more condition, namely that the value of the unknown at the given point t0 equals the given vector, X(t0) = X0. The system of differential equations together with this additional condition is called an initial value problem.
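To make the definition concrete, here is a minimal numerical check, assuming a hypothetical 2-by-2 example (the matrix A, the vector F, and the candidate solution below are illustrations, not taken from the lecture): we verify componentwise that X'(t) = A X(t) + F holds at several points of the interval.

```python
import numpy as np

# Hypothetical example: X' = A X + F with constant A and F.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
F = np.array([0.0, 1.0])

# Candidate solution X(t) = (t^2/2, t), which satisfies X(0) = (0, 0).
def X(t):
    return np.array([t**2 / 2.0, t])

# Its componentwise derivative X'(t) = (t, 1).
def X_prime(t):
    return np.array([t, 1.0])

# Verify the equality X'(t) = A X(t) + F at several points of I = [-2, 2].
for t in np.linspace(-2.0, 2.0, 9):
    assert np.allclose(X_prime(t), A @ X(t) + F)
```

Here every component of X is differentiable on I and the vector equality holds at each tested point, so X is (numerically) consistent with being a solution of the initial value problem with t0 = 0 and X0 = (0, 0).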
Okay, I think you are already familiar with the terminology "initial value problem" for a single differential equation; we handled such problems in Differential Equations, Part 1. The natural question about this initial value problem is: is it solvable? In other words, do we have a solution? Not in general — we need some conditions on the coefficient matrix A and the vector function F. So let me state the first theorem, which I will call existence of a unique solution for the initial value problem. We are considering the initial value problem (3) given by the governing differential equation X' = A X + F and the initial condition X(t0) = X0, for a given constant vector X0. If all the entries of the coefficient matrix A and of F — in other words, if all the a_ij(t) and f_j(t) — are continuous on the interval I, then the conclusion is that this initial value problem (3) has a unique solution X(t), valid on the whole interval I. That is the first theorem of my claim: existence of a unique solution for the initial value problem. If you remember the existence theory for a single differential equation, this should remind you of the famous Picard theorem; this result is exactly the analogue of the Picard theorem for systems of equations, and you would do well to refer back to the Picard theory. In all of the following discussions, we always assume that the components of the coefficient matrix A and of the vector F are continuous on some common interval I.
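Under the continuity hypothesis the theorem guarantees a unique solution, so a good numerical integrator should reproduce it. A small sketch, assuming a hypothetical homogeneous example with known closed-form solution (the matrix A and initial vector below are illustrations, not from the lecture), using SciPy's general-purpose IVP solver:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical initial value problem X' = A X, X(0) = X0, on I = [0, 2].
# The entries of A (and F = 0) are continuous, so a unique solution exists.
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
X0 = np.array([1.0, 0.0])

sol = solve_ivp(lambda t, x: A @ x, t_span=(0.0, 2.0), y0=X0,
                rtol=1e-9, atol=1e-12, dense_output=True)

# For this particular A and X0 the unique solution is
# X(t) = (cos t, -sin t); the numerical solution should match it.
t = 1.5
assert np.allclose(sol.sol(t), [np.cos(t), -np.sin(t)], atol=1e-6)
```

The point is that uniqueness is what makes the comparison meaningful: there is only one solution through X0 at t0, so the integrator and the formula must agree on all of I.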
Without mentioning it separately each time, we always assume that A and F are continuous on some common interval I. The second theorem I claim is, again, one whose title I think you are already familiar with: the so-called superposition principle. It says: suppose we have finitely many solution vectors X_j, where j runs from 1 to k and k is some finite positive integer, and these are solutions of the homogeneous system X' = A X. Then for any arbitrary constants c_j, j = 1, …, k, form the linear combination of X_1 through X_k using these coefficients: c_1 X_1(t) + … + c_k X_k(t). The superposition principle says that this vector function is also a solution of the same homogeneous system of equations. To make things a little more precise: if X_1, …, X_k are solutions of the homogeneous system on the interval I, then any linear combination of them is also a solution on the same interval I. That is the famous superposition principle, and it is an extension of the superposition principle we had for a single linear homogeneous differential equation; we encountered the same theorem in Differential Equations, Part 1. Now let us introduce one very important concept: the linear dependence or linear independence of solution vectors. So here is the definition of linear dependence and linear independence. Consider a set of k solutions X_j of the homogeneous system X' = A X on I — finitely many solutions of the homogeneous system.
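The superposition principle can be checked directly on a small example. A sketch, assuming a hypothetical homogeneous system and two known solutions of it (the matrix A, the solutions X1, X2, and the constants below are illustrative choices, not from the lecture):

```python
import numpy as np

# Hypothetical homogeneous system X' = A X.
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

# Two known solutions of X' = A X (easy to verify by differentiating).
X1 = lambda t: np.array([np.cos(t), -np.sin(t)])
X2 = lambda t: np.array([np.sin(t),  np.cos(t)])

# An arbitrary linear combination with constants c1, c2.
c1, c2 = 3.0, -2.0
Xc = lambda t: c1 * X1(t) + c2 * X2(t)

# Its derivative, computed by differentiating each term.
Xc_prime = lambda t: (c1 * np.array([-np.sin(t), -np.cos(t)])
                      + c2 * np.array([np.cos(t), -np.sin(t)]))

# Superposition: the combination also satisfies X' = A X on I.
for t in np.linspace(0.0, 3.0, 7):
    assert np.allclose(Xc_prime(t), A @ Xc(t))
```

The check goes through for any choice of c1 and c2, which is exactly the content of the theorem: linearity of the derivative and of the matrix product is what makes the combination a solution. Note this only works because the system is homogeneous (F = 0).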
We say that such a solution set — a finite set of solutions on I — is linearly dependent on I if there are k constants c_1, …, c_k, not all zero, such that their linear combination c_1 X_1(t) + … + c_k X_k(t) is identically zero on I. What does "not all zero" mean? It means c_1^2 + c_2^2 + … + c_k^2 must be strictly positive. Since I am assuming these constants are real numbers, the square of any real number is nonnegative, and so the sum of the squares of c_1 through c_k being strictly positive means at least one of them must be nonzero. So if you can pick such constants c_1, …, c_k, not all of them zero, satisfying that equation, then we say the set of k solutions is linearly dependent on the interval I. Otherwise, we call the set X_j, j = 1, …, k, linearly independent on I. That is a very important and useful notion we need in order to develop the theory further.
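A hedged numerical sketch of the definition, assuming two hypothetical pairs of vector functions (these examples are illustrations, not from the lecture). For the dependent pair, the constants c_1 = 2, c_2 = -1 give the zero vector identically; one practical way to detect this is that the determinant of the matrix whose columns are the solution vectors vanishes:

```python
import numpy as np

# Dependent pair: X2 = 2 * X1, so c1 = 2, c2 = -1 gives 2*X1 - X2 = 0 on I.
X1 = lambda t: np.array([np.exp(t), np.exp(t)])
X2 = lambda t: 2.0 * X1(t)

# Independent pair: no constants, not both zero, make c1*Y1 + c2*Y2 vanish.
Y1 = lambda t: np.array([np.exp(t), 0.0])
Y2 = lambda t: np.array([0.0, np.exp(-t)])

def det_at(f, g, t):
    """Determinant of the 2x2 matrix with columns f(t) and g(t)."""
    return np.linalg.det(np.column_stack([f(t), g(t)]))

t0 = 0.5
# Dependent columns are proportional, so the determinant is zero.
assert np.isclose(det_at(X1, X2, t0), 0.0)
# Independent columns give a nonzero determinant (here it equals 1).
assert abs(det_at(Y1, Y2, t0)) > 0.5
```

The determinant test used here is only a convenient illustration of dependence versus independence of the columns at a point; the systematic tool for solution vectors (the Wronskian) is usually developed right after this definition.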