So let's briefly outline why we have lecture part A and lecture part B in order to determine such reduced models which are also parameterized. In the approach of part A, which is the subject of today, we start with a large-scale system model, which is drastically condensed and reduced, as we see in the center of this viewgraph, and the concept and idea is that the parameterization shall also work on this drastically reduced model. So in parametric studies, optimization studies, and so forth, we then only deal with such reduced models, which from time to time, of course, might have to be updated, as we will see later. Such a technique runs on so-called clearly structured problems. There is another technique, which is the subject of our discussion in part B of our lecture later on. We again have two parameters, parameter 1 and parameter 2, on the x- and y-axes, we have the response or output of the system, and we have some reference points which might be taken from simulation, from experimental results or databases, from knowledge bases, and so forth. These reference points serve to establish so-called approximation functions. Instead of the overall large-scale model, these approximation functions, as functions of the parameters, are then used to investigate the system. In both types of approaches, our aim is a drastic reduction of the computational effort in the design and development process. But at the same time, we have to take care that the surrogate response, which is given by this surface, sufficiently well represents the high-fidelity model, or in other words, that the approximation error shall be small. So let's look into the basic and fundamental steps of such reduced and parameterized models, which come from our part A type of problems and methods.
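To make the part B idea concrete, here is a minimal sketch, assuming a simple quadratic polynomial as the approximation function and a hypothetical `high_fidelity` stand-in for the expensive model (in reality, each reference point would come from a full simulation, an experiment, or a database):

```python
import numpy as np

# Hypothetical stand-in for the expensive model: in practice each call
# would be a full large-scale simulation or an experimental measurement.
def high_fidelity(p1, p2):
    return 1.0 + 0.5 * p1 - 0.3 * p2 + 0.2 * p1 * p2 + 0.1 * p1**2 + 0.4 * p2**2

# Reference points on a small grid in the (p1, p2) parameter plane.
grid = np.array([0.0, 0.5, 1.0])
P1, P2 = np.meshgrid(grid, grid)
p1, p2 = P1.ravel(), P2.ravel()
y = high_fidelity(p1, p2)

# Quadratic approximation function fitted by least squares:
#   r(p1, p2) ~ c0 + c1*p1 + c2*p2 + c3*p1*p2 + c4*p1**2 + c5*p2**2
B = np.column_stack([np.ones_like(p1), p1, p2, p1 * p2, p1**2, p2**2])
c, *_ = np.linalg.lstsq(B, y, rcond=None)

def surrogate(q1, q2):
    """Cheap approximation evaluated instead of the large-scale model."""
    return c @ np.array([1.0, q1, q2, q1 * q2, q1**2, q2**2])
```

Parameter and optimization studies would then call `surrogate` instead of `high_fidelity`; the approximation error between the two is exactly what must be kept small.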
We have seen that we start with a large-scale system model, which somehow is reduced to a small-scale model that shall represent the overall behavior. But if this small-scale model is not yet parameterized, we have to do the parameterization again on the original large-scale model whenever we modify the parameters, and modifying parameters is a step often done during the development phases of any system, including aerospace systems. So the reduced model has to be parameterized. We are again starting from the large-scale model, which you see as a symbolic representation in the lower center of the viewgraph, which is condensed and which is represented as a function of our parameters, so that the behavior of the overall system is determined in parameterized form. So we can investigate the parametric dependence in a parameter study on this reduced model instead of using the large-scale model and doing the condensation again and again and again. So let's look into even more detail of this approach. Again, we start with the large-scale system model, represented by a large-scale matrix, which is shown here in the upper left part of the viewgraph. Often, different subsystems of the aerospace system contribute in different ways to this overall model, and this being said, different sub-matrices contribute to it. In order to reduce these sub-matrices and superimpose them into a small-scale model, one has to find, from a mathematical point of view, a proper projection operator V, which projects these subsystem matrices into a very small space. This might be a Krylov space; in vibration problems it might be a modal space; other possibilities exist as well. But the important point is what you see in the lower part.
Typically, such a projector is itself composed of large-scale matrices, and this means that applying the projector after each modification of the system matrix due to parameter changes costs a lot of computational effort. Thus it is better to apply the parameterization already to the small-scale model, the reduced system. So, in order to really save computational effort over a sequence of parameter modifications, it is relevant and important that not only the model is reduced and the projector determined, but also that the projector V is parameterized by those parameters which influence the behavior of the system under investigation. This viewgraph summarizes the previous two or three viewgraphs: we see the large system matrix of the large-scale system model on the left-hand side, and we see the contributions of the blue, yellow, red, and cream parts from the different parameters and subsystems. These matrices are projected into a significantly smaller space via the projection operator V, and the superposition of these projections results in the small-scale system model shown in the lower part of the viewgraph. What is left over, since we have to deal with parameter variations in the design and development process, is the question of how this projector V is to be parameterized. The parameterization of the projection operator V and of the contributions of the different subsystems and sub-matrices is done in the following way. In principle, and "in principle" means there are again different means to do this, one fundamental and often used approach is a linear or non-linear Taylor series expansion. Let's assume that the matrix A which we are parameterizing depends, accurately or approximately, on the parameter p to the power r.
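The superposition of projected subsystem contributions can be sketched like this; it is a hypothetical example assuming the large matrix is an affine, parameter-weighted sum of two subsystem matrices and that the projector V is held fixed:

```python
import numpy as np

# Hypothetical subsystem matrices contributing to A(p) = p1*A1 + p2*A2.
rng = np.random.default_rng(1)
n, k = 300, 4
A1 = rng.standard_normal((n, n))
A2 = rng.standard_normal((n, n))
V, _ = np.linalg.qr(rng.standard_normal((n, k)))   # some fixed n x k projector

# Each subsystem matrix is projected ONCE, offline, at large-scale cost.
A1_red = V.T @ A1 @ V
A2_red = V.T @ A2 @ V

def reduced_model(p1, p2):
    """Cheap k x k reassembly for new parameter values (online phase)."""
    return p1 * A1_red + p2 * A2_red

# Identical to projecting the assembled large matrix, at a fraction of the cost.
full = V.T @ (2.0 * A1 + 0.5 * A2) @ V
assert np.allclose(reduced_model(2.0, 0.5), full)
```

Only when the projector V itself must change with the parameters does this simple precomputation break down, which is why its parameterization is discussed next.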
Then the Taylor series expansion says that the new matrix A, due to a parameter modification Delta p, is the original matrix A plus the summation we see in the center of this viewgraph, in which the partial derivatives of this matrix A with respect to the parameters are used, multiplied by the delta p_i of the different parameters. The crucial point here is to determine this gradient dA/dp, and for well-structured problems this can often be done by so-called semi-analytical means. So this differentiation can be done on the matrix itself in a semi-analytical way, where "semi" says that it is carried out numerically on a computer. If this does not work in certain circumstances, one can apply finite differences. Of course, this means more computational effort, but keep in mind that finite differences can be computed on many processors in parallel, and then at least computational time, wall-clock time, is saved. What is also important is that, possibly, the chain rule should be applied if one knows or can estimate the influence of the parameter on the different matrices. Namely, let's assume that the parameter has a one-over-p influence, an inverse influence. Then the gradients of the coefficients of this matrix A as functions of the parameter p are obtained via the chain rule: the original coefficients divided by p squared, with a minus sign, while all other terms of the matrix A which are not influenced by this parameter are set to zero, because it is a partial derivative. Alternatively, let's assume that this parameter influences the matrix A in a quadratic term; then the chain rule tells us that the derivatives of the coefficients of this matrix A are obtained by multiplying the coefficients by two times the parameter p. Again, because it is a partial derivative, all terms that are not influenced by p are set to zero.
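The chain-rule gradients and the finite-difference fallback can be illustrated in a small sketch, assuming a hypothetical matrix with one inverse term, one quadratic term, and one term not influenced by p:

```python
import numpy as np

# Hypothetical parameter dependence: A(p) = C1/p + C2*p**2 + C3,
# where C3 is not influenced by p.
rng = np.random.default_rng(2)
C1, C2, C3 = (rng.standard_normal((4, 4)) for _ in range(3))

def A(p):
    return C1 / p + C2 * p**2 + C3

def dA_dp(p):
    # Chain rule: d(C1/p)/dp = -C1/p**2 (coefficients over p squared, minus
    # sign); d(C2*p**2)/dp = 2*p*C2 (coefficients times two p); the terms
    # not influenced by p contribute zero to the partial derivative.
    return -C1 / p**2 + 2.0 * p * C2

p0, dp = 2.0, 1e-6

# First-order Taylor update: A(p0 + dp) ~ A(p0) + dA/dp * dp.
taylor = A(p0) + dA_dp(p0) * dp
assert np.allclose(taylor, A(p0 + dp), atol=1e-9)

# Finite-difference gradient as the (costlier) fallback check.
fd = (A(p0 + dp) - A(p0 - dp)) / (2 * dp)
assert np.allclose(fd, dA_dp(p0), atol=1e-5)
```

In a real application, each finite-difference column can be evaluated on a separate processor, which is why wall-clock time is still saved even though the arithmetic effort grows.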
This way of expansion and parameterization very much determines the quality of the interpolation, of the parameterization, and of the approximation itself.