Before I jump into this, just timewise, let me do one example by hand where I want to lay out how to actually implement a control. In the last homework you had to simulate stuff. You've been working on how to simulate the kinematic differential equations. Now you're adding the kinetic differential equations. And in control, you still need the full state, you know, the attitude and the rate. Six states, not six degrees of freedom; it's rotational motion only, but we have to add control. So, any simulation, let's start this out. We're gonna have, yeah why not, we'll do it in green. It's not St. Patrick's Day, but I'm sure the Irish are happy. OK. So we're gonna say sigma, and I'm gonna be explicit, omega_B/N. If you're coding a tracking problem, I highly recommend your variables aren't just sigma in your code, because you'll get confused. I know I get confused, even after all these years. Which sigma is this? Sigma of B relative to R? Of R relative to N? Be explicit. So here, I'm just gonna say these are the states. This is our typical sigma we've had before. And therefore, our differential equations are gonna be sigma_dot_B/N = 1/4 [B(sigma_B/N)] omega_B/N, the matrix form, and the other one is omega_dot_B/N = [I]^-1 (-[omega_B/N tilde][I] omega_B/N + u + L), with L if you add disturbances to it, right? Those are the differential equations you have to integrate. There are six of them now that you have to deal with. So let's see how we're gonna simulate this in our set-up. In the code, at some point, you have to say: what are my initial conditions? Which should be pretty straightforward. You put in your initial attitude relative to inertial, your initial rate relative to inertial. So good. Then we start a time loop. I'm just gonna do a first order integration. If you want to do Runge-Kutta, it's a few extra steps. It's easy to implement. It's just more stuff to write.
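Those six coupled equations can be sketched as a single state-derivative function. This is a minimal sketch in numpy, assuming the usual MRP kinematics and rigid-body rotational equations; the function and variable names (`eom`, `b_matrix`, `tilde`) are my own, not from the lecture:

```python
import numpy as np

def tilde(v):
    """Skew-symmetric cross-product matrix [v~]."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def b_matrix(sigma):
    """MRP kinematics matrix [B(sigma)] so that sigma_dot = 1/4 [B] omega."""
    s2 = sigma @ sigma
    return (1.0 - s2) * np.eye(3) + 2.0 * tilde(sigma) + 2.0 * np.outer(sigma, sigma)

def eom(x, u, I, L=np.zeros(3)):
    """Six differential equations; x = [sigma_B/N, omega_B/N],
    u = control torque, L = external disturbance torque."""
    sigma, omega = x[:3], x[3:]
    sigma_dot = 0.25 * b_matrix(sigma) @ omega
    omega_dot = np.linalg.solve(I, -tilde(omega) @ (I @ omega) + u + L)
    return np.concatenate((sigma_dot, omega_dot))
```

With this signature, any integrator just calls `eom(x, u, I)` and never needs to know what control produced `u`.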
So, right now, before I integrate, I actually need to compute my control solution, because this integration needs as an input: what is the control torque you're applying? All right? To get the control, the control we're gonna do is one of these functions. What was it, u = -K sigma_B/R - [P] omega_B/R, and then all the other terms, right? So I need my attitude relative to the reference, and I need my attitude rate relative to the reference, before I can compute the control. So you're gonna have to find the control. To find the control, we have to find the reference. So if you're doing reference tracking, what you're gonna have to do first is find R. That means you need the attitude of R relative to N, you need omega of R relative to N, and you need the angular acceleration of R relative to N, right? So, however you generated the reference trajectory, especially in project two, you're defining a frame. You can maybe numerically differentiate it to get these rates. Maybe differentiate it twice, with enough history, and you get the feedforward acceleration. Now in the project, you don't need the feedforward acceleration. It's just a PD control without feedforward. So that will do. But in your other code, if you have something, if this is your elliptic orbit, you need to know your orbit rates, your orbit accelerations. This is what feeds forward into the control. So you have some subroutine, probably, at this time: what is my reference state? Right. Where should I be pointing, in some way? Good. Once we have these, then you compute the control, right? And that's maybe another subroutine where you go, "Hey, this is the control, and I'm doing integral feedback. I'm doing the PD. I'm doing this nonlinear control, whatever control law you have." That way, if you have a different control you want to apply, all you do is change one line and say, "Hey, no, use this control. Use this function. Use this function." The reference generation is the same. The equations of motion are the same.
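As a sketch of that control step: first the tracking errors relative to the reference, then the PD law. I'm assuming the standard MRP "subtraction" composition rule for the relative attitude; the names `mrp_error` and `pd_control` are mine, and feedforward terms are omitted as in the project:

```python
import numpy as np

def mrp_error(sigma_BN, sigma_RN):
    """Relative MRP sigma_B/R from the body and reference attitudes,
    using the MRP subtraction (relative-attitude) composition rule."""
    s, sr = sigma_BN, sigma_RN
    num = (1.0 - sr @ sr) * s - (1.0 - s @ s) * sr + 2.0 * np.cross(s, sr)
    den = 1.0 + (sr @ sr) * (s @ s) + 2.0 * (sr @ s)
    return num / den

def pd_control(sigma_BR, omega_BR, K, P):
    """u = -K sigma_B/R - [P] omega_B/R; swap this one function out
    to try a different control law."""
    return -K * sigma_BR - P @ omega_BR
```

Because the control lives in its own function, changing from PD to, say, a nonlinear or integral law really is a one-line change at the call site.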
This gives it a nice modular architecture. Now that we have this, then we can compute. Ugh, I can't write today. There we go. In the Runge-Kutta scheme, the first stage is called K_1, I believe. Right. So I need to compute this. This is this F function which depends on the states, but also my control. So here you compute this one with the current states, the current control. And if something is time-explicit in there, let's say your dynamics model included time-dependent atmospheric drag, you know, you may have to throw in a time variable so it knows how to resolve that and so forth. In our current problem, we don't have anything that's time-explicit, but if you want to make it general, you do this. So good. If we do this, though, this integrates it. If you do Runge-Kutta... Let's do Runge-Kutta. So then you do K_2 is equal to this F function again, evaluated at X plus K_1 times h/2, I believe, and t plus h/2. It's the same control. In your dynamics, we don't typically update the control as we're doing a Runge-Kutta time step. Because really, your control gets implemented digitally at a discrete frequency. You may be simulating your dynamics at a thousand hertz, one millisecond time steps, but your control only gets updated at one hertz. So you can actually put logic in your code that says, "Hey, only every thousandth time step, update the control u." Right? So then you're really holding your control piecewise constant. In the simplest case, if you have a thousand hertz integration and you compute the control every time step, you're still holding your control piecewise constant within it. And that's quite practical too, because here, let's see, to get this, it needs these states. If you took an estimation class, you'd have a whole routine that computes those. To get the control, you'd have to have your estimated biases and rates and all of this, and that doesn't happen at the sub-control intervals. That only gets updated once, right?
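A single RK4 step with the control held piecewise constant over the step might look like this sketch (the name `rk4_step` is mine; `f` is the state-derivative function taking `(x, u, t)`):

```python
def rk4_step(f, x, u, t, h):
    """One fourth-order Runge-Kutta step of size h. Note the control u
    is NOT re-evaluated at the intermediate stages: it is held
    piecewise constant over the step, as a digital controller would."""
    k1 = f(x, u, t)
    k2 = f(x + 0.5 * h * k1, u, t + 0.5 * h)
    k3 = f(x + 0.5 * h * k2, u, t + 0.5 * h)
    k4 = f(x + h * k3, u, t + h)
    return x + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
```

Passing `t` through keeps it general for time-explicit dynamics like a drag model, even though this problem doesn't need it.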
So that's why you just hold your control u constant. That's all you have to do. It's just an input to the routine, and then you do the rest of the dynamics like normal. So you do this, K_2, K_3, K_4. And then the next state, X at n plus 1, is equal to the current state plus all these K's, you know, they go in here, if you're doing a Runge-Kutta. If you're doing an Euler, it's just one of the steps. So it's kind of easy, alright. And then you do that. And then that's it. If you're doing MRPs, you still have to check: if the norm of the sigma part of X is greater than one, then shadow, right, switch to the other set. That should be happening outside of this integration. So up to now, if you've been running your code, well, you had the kinematics first, integrating this, then we added the kinetics, still basically the same integrator. If you're doing attitude, you probably already implemented this logic somewhere to switch the MRPs to the shadow set. So to do these control homeworks, if you have nice code, it's gonna be really simple. All you have to do, if it's regulation, you don't even need this part. You just compute u, and then integrate forward and apply that. You just have to make sure your equations of motion include that u. And if you have a disturbance, if you're doing an integral kind of problem, an L, an unmodeled torque or some external disturbance, you can throw it in as well. So it becomes really, really easy now at this stage to implement that kind of a control. If you're doing the current homework with the spring-mass-damper, with lots of stuff too, it's the same logic. At some point, you have your reference X_R for that spring-mass-damper reference system, minus the actual, and that gives you the tracking errors. Then you compute the control, hold it piecewise constant, integrate forward a time step, and repeat. And that's it. That's how we would typically implement a numerical simulation of a dynamic system with a feedback control applied.
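Putting the loop together: a sketch of the shadow-set check and a first-order (Euler) time loop that only updates the control every so many steps. This is my own minimal skeleton, assuming the attitude states sit in `x[:3]`; the names `shadow` and `simulate` and the decimation scheme are assumptions for illustration:

```python
import numpy as np

def shadow(sigma):
    """Switch to the shadow MRP set when |sigma| > 1. This check
    happens outside the integration step, after the state update."""
    s2 = sigma @ sigma
    return -sigma / s2 if s2 > 1.0 else sigma

def simulate(f, ctrl, x0, h, n_steps, n_ctrl):
    """Euler loop: re-evaluate the control only every n_ctrl steps,
    i.e. hold it piecewise constant in between, like a digital
    controller running slower than the simulation."""
    x = np.array(x0, dtype=float)
    u = np.zeros(3)
    for k in range(n_steps):
        if k % n_ctrl == 0:         # e.g. 1 Hz control on a 1000 Hz sim
            u = ctrl(x)
        x = x + h * f(x, u)         # Euler step; swap in RK4 if desired
        x[:3] = shadow(x[:3])       # MRP shadow-set check
    return x
```

For regulation there's no reference block at all: `ctrl` just computes u from the current state, and the equations of motion only need to include that u (and any disturbance L).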
And again, there's no estimation here, otherwise that would happen here somewhere. You gather the information, run it through a QUEST algorithm to take this stuff and figure out what's my heading, and maybe a Kalman filter to figure out what the rest of the stuff is, and that's what feeds into the control evaluation, you know. So...