[MUSIC] In this lecture we'll talk about testing: an outline of how you should test your system and how you're going to report that testing for the capstone design projects. So this will give you a little bit of detail on what I expect to see.

There are two big testing phases. And by the way, just like design, there are many different ways to do testing, many different testing approaches; we're talking very generically here. Testing happens to be my field, I teach a whole class just about testing, so this is only an overview of the basics. But these two big testing phases, nobody is going to disagree with; you're always going to need them.

The first is component testing, where you test the individual components. If you look at your system diagram, you've got a bunch of components connected somehow. You're going to design and implement each component, and then you're going to have to test each one individually. That's component testing. Then there's integration testing, where you test the components working together. Once you're done with integration testing, you're essentially done with testing. So: test the individual components, then test them working together. Component testing versus integration testing.

Now, a few rules of thumb for component testing. To test one component, you have to apply test data somehow, and then you have to observe the test results; that's what testing is about. Note that this is different depending on whether the component is hardware or software. If it's a software component, applying test data means calling functions. If the function takes two arguments, applying test data means calling the function with different values for those two arguments. Or maybe it takes input from the command line, so you feed it command-line input. If it's a hardware component, then you have to actually wire devices to its inputs to feed it data: maybe you connect a switch or a button to its inputs, depending on what type of inputs it has. So the way you feed data to a hardware or software component depends on how it's built, but either way you're going to supply it with some test data and then evaluate whatever the response is.

Now, the test data you supply should be complete. "Complete" is never truly complete: ideally you would love to cover all possibilities, but in practice you usually can't. If it's a very simple component, then yes, cover all possibilities. But often you can't, because there are just too many. In that case, you want to cover all the possibilities that are significantly different from one another, so you can say: I've covered all the relevant, distinct possibilities, and the rest are basically in the same equivalence class as some possibility I've already tried.
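Before we get to an example of that, here's a minimal sketch of the software case, where "applying test data means calling functions." The component, clamp_speed, is a made-up placeholder, not a function from the course projects; the point is just that each test is a call with different argument values, followed by checking the observed result against the expected one.

```c
#include <stdio.h>

/* Hypothetical software component under test: clamps a requested motor
 * speed to a legal range. The name and behavior are made up purely to
 * illustrate applying test data by calling a function. */
static int clamp_speed(int requested, int max_allowed)
{
    if (requested < 0) return 0;
    if (requested > max_allowed) return max_allowed;
    return requested;
}

/* One component test: call the function with chosen argument values and
 * compare the observed result against the expected result. */
static void check(int requested, int max_allowed, int expected)
{
    int got = clamp_speed(requested, max_allowed);
    printf("%s: clamp_speed(%d, %d) = %d (expected %d)\n",
           got == expected ? "PASS" : "FAIL",
           requested, max_allowed, got, expected);
}

int main(void)
{
    check(50, 100, 50);    /* typical value in range        */
    check(-5, 100, 0);     /* negative request clamps to 0  */
    check(250, 100, 100);  /* above the limit clamps to max */
    check(100, 100, 100);  /* exactly at the limit          */
    return 0;
}
```

For a hardware component, the analogue is wiring switches or other signal sources to the inputs and watching the outputs; the feeding mechanism is different, but it's the same idea of chosen inputs plus checked results.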
Here's what I mean by that. Say you've got a 32-bit divider, and by 32-bit divider I mean it takes two 32-bit numbers and divides one by the other, and you want to test it. It's going to have 64 input pins, because it takes two 32-bit numbers. I could say, look, I'm going to try all possibilities, I'm going to try dividing every pair of numbers. That would give me 2^64 possible combinations, way too many to actually try. So this is a case where there are simply too many possibilities to try them all; you're only going to try a small subset of them. You want to try a small set of what I'll call interesting inputs: inputs that trigger different behaviors inside the divider.

For instance, I know that divide by zero is a special condition, so I'm going to want some tests that divide by zero intentionally, just to see how the divider behaves in that situation. Positive and negative operands might also be different conditions, so at some point during testing I want a positive number divided by a negative, and vice versa. Also large and small: if you take a large number divided by a small number, maybe the divider does something different than if you take a small number divided by a large number, because in one case the result is a fraction and in the other it isn't. So, depending on the application, you try to find all of these interesting conditions and test them. You should list out the interesting things you want to have happen during testing, and then make tests that satisfy all of them; there's a small sketch of that kind of test list below.

If your system is simple and only has a few inputs, then try everything; that's the best way to test, just try all possibilities if that's tractable. But if it's not, then you've got to manually pick the inputs you think are most interesting, the ones most likely to reveal errors, and apply those.
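Here's a minimal sketch of what that list of interesting inputs might look like as a small test harness. The div32 function is a hypothetical stand-in for whatever divider you're actually testing (it could just as well be a hardware block driven through a test fixture); the cases are the conditions named above: divide by zero, the sign combinations, and large versus small operands.

```c
#include <stdio.h>
#include <stdint.h>

/* Hypothetical 32-bit divider under test. A real divider might be a
 * hardware block or a library routine; this stand-in only exists so the
 * harness below has something to call. It flags divide-by-zero. */
static int32_t div32(int32_t numerator, int32_t denominator, int *error)
{
    if (denominator == 0) {
        *error = 1;
        return 0;
    }
    *error = 0;
    return numerator / denominator;
}

int main(void)
{
    /* One test per "interesting" condition that should trigger a distinct
     * behavior inside the divider, instead of all 2^64 input pairs.
     * (INT32_MIN / -1, which overflows, would be another good one.) */
    struct { int32_t a, b; const char *why; } cases[] = {
        {         10,          0, "divide by zero"      },
        {         10,          3, "positive / positive" },
        {        -10,          3, "negative / positive" },
        {         10,         -3, "positive / negative" },
        {        -10,         -3, "negative / negative" },
        { 2000000000,          7, "large / small"       },
        {          7, 2000000000, "small / large"       },
    };

    for (size_t i = 0; i < sizeof(cases) / sizeof(cases[0]); i++) {
        int err;
        int32_t q = div32(cases[i].a, cases[i].b, &err);
        printf("%-20s div32(%ld, %ld) -> %ld%s\n", cases[i].why,
               (long)cases[i].a, (long)cases[i].b, (long)q,
               err ? "  [divide-by-zero flagged]" : "");
    }
    return 0;
}
```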
Now, integration testing: once you've tested the components, you put them together and test them working together. You generally perform integration testing incrementally. If you have a lot of components in a system, you don't test them individually and then just say, okay, I'm wiring them all together and testing the whole thing. This happens a lot, students try this, but the problem is that once you put everything together and test it, and it fails, which it almost always does the first time, that failure is very hard to debug: which component, or pair of components, or set of components is responsible? It's much easier to debug if you do it incrementally. You try connecting one part to another, and test that. Then you connect another part, and test that, and so on. Then when it fails, you can say, aha, it's the inclusion of that last part that made it fail, and it's easier to debug what the problem is. That's why it usually makes sense to do this type of testing incrementally, one piece at a time.

As an example, in the multicopter project you've got a flight controller, so maybe you test that first because it's the center of the system. Then you test the ESCs, the electronic speed controllers, individually. Then you test the flight controller connected to the ESCs. Then you test the flight controller connected to the ESCs connected to the motors. At each stage you can narrow things down: if, say, stage three works but stage four doesn't, then you say the motors are probably faulty. Or if stage two works and stage three doesn't, you can say, wait a minute, the communication between the flight controller and the ESCs isn't correct. Since tests one and two worked, I know the flight controller and the ESCs each work separately; but their integration in stage three doesn't work, so somehow they're not speaking to each other, their interaction is wrong. That lets me zero in on the problem: maybe the ESC is reading data incorrectly from the flight controller, or the flight controller is sending the wrong data to the ESC, data that somehow looked correct when I tested the flight controller alone in stage one. So it's better to do this incrementally, to help you zero in on the bugs. Thank you.
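As a closing sketch of the incremental idea: you can think of the integration tests as a sequence of stages where each stage adds exactly one new piece, so the first failing stage points at whatever was just added. The stage names and functions below are hypothetical placeholders, not the course's actual test plan; in practice several of these stages are manual procedures with real hardware on the bench, not code.

```c
#include <stdio.h>
#include <stdbool.h>

/* Hypothetical stage tests for the multicopter example. The names and the
 * always-pass bodies are placeholders; in a real project several of these
 * stages are manual bench procedures rather than functions. */
static bool test_flight_controller_alone(void) { return true; }
static bool test_escs_alone(void)              { return true; }
static bool test_fc_plus_escs(void)            { return true; }
static bool test_fc_escs_plus_motors(void)     { return true; }

int main(void)
{
    /* Each stage adds exactly one new piece on top of the previous stage,
     * so the first failing stage localizes the bug to the piece (or
     * interface) that was just added. */
    struct { const char *name; bool (*run)(void); } stages[] = {
        { "stage 1: flight controller alone",  test_flight_controller_alone },
        { "stage 2: ESCs alone",               test_escs_alone },
        { "stage 3: flight controller + ESCs", test_fc_plus_escs },
        { "stage 4: FC + ESCs + motors",       test_fc_escs_plus_motors },
    };

    for (size_t i = 0; i < sizeof(stages) / sizeof(stages[0]); i++) {
        if (!stages[i].run()) {
            printf("FAIL at %s: debug the newly added piece or interface\n",
                   stages[i].name);
            return 1;
        }
        printf("PASS %s\n", stages[i].name);
    }
    printf("All integration stages passed.\n");
    return 0;
}
```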