[MUSIC] Nice to see you again. In this lecture we will look at how we can avoid parallelism to keep the state space small.

A typical software architecture could look like this: we have a number of workers and a manager controlling the workers. We let Worker1 have n1 states, Worker2 have n2 states, and Worker3 have n3 states. The typical approach is that the manager instructs each worker to start and lets them work as independently as possible. So the manager starts Worker1, starts Worker2, and starts Worker3, and they carry out their tasks and report to the manager when they need to. The state space of such a system is typically the product of the state spaces of these workers, so n1*n2*n3. This is exponential in the number of components, and that means that we cannot have a large number of workers if we want to avoid the state space explosion problem.

One can also do this completely differently, by letting the workers work in sequence. The manager starts Worker1 and waits until it indicates that it is ready and stops. It subsequently starts Worker2, waits until that worker is ready with its tasks and stops, and then starts Worker3 until it is ready. In this way the manager gives a token to each worker in turn: one worker is doing the work while the others are waiting. This looks very inefficient, but it has a very beneficial effect on the state space. Namely, the state space is reduced to the sum of the state spaces of the individual workers. That is linear in the number of components, and it means that we can actually model and verify huge systems.

Let's apply this to one particular example, namely a parking garage. We have a number of entrances to this garage, and we have N parking places. The idea is that the controller takes care that a car can only enter if there is also a free place in the garage.
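The product-versus-sum contrast above can be sketched numerically. This is a minimal illustration, not the lecture's actual model: the worker state counts are made-up numbers, and the sequential count is treated as simply the sum of the component state spaces, as stated in the lecture.

```python
from math import prod

def parallel_states(worker_states):
    # Fully independent workers: the combined state space is the
    # product of the individual state spaces (exponential in the
    # number of workers).
    return prod(worker_states)

def sequential_states(worker_states):
    # Sequentialized workers: only one worker is active at a time,
    # so the combined state space is roughly the sum of the
    # individual state spaces (linear in the number of workers).
    return sum(worker_states)

workers = [10, 10, 10]            # illustrative n1, n2, n3
print(parallel_states(workers))   # 1000 = 10*10*10
print(sequential_states(workers)) # 30 = 10+10+10
```

With ten workers of ten states each, the parallel composition already has 10^10 states while the sequentialized one has only 100, which is why the sequential version remains analyzable.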
We made a parallel model of this and a sequentialized model. I will not give the details of the models here, but the idea is that we have this controller and a manager, and that is how the whole thing is checked. If we have M controllers and N parking places, we can ask ourselves how large the state spaces are. Here we have one to ten entrances, and we can see that the state space of the parallel model grows exponentially with the number of entrances and the number of controllers in our software system. For the sequentialized version we see that the number of states only grows linearly, while the functionality of both systems is by and large the same. You can also see that the number of states depends only linearly on the number of parking places in the garage, so that does not have a substantial influence. The conclusion is that sequentializing systems makes them far more analyzable.

Let's look at a somewhat playful exercise. We have a pond with 100 frogs. Each frog has two states, namely a state where it croaks and a state where it is silent. And we have a director. The director can let all the frogs croak in parallel, so that they all start and stop croaking independently. He can also sequentialize the croaking of these frogs, by first allowing the first frog to start croaking, waiting until it stops, then giving the second frog the turn, and so on. The question is: what are the sizes of the state spaces of this parallel and this sequentialized pond of frogs?

So, avoiding parallelism is a good strategy to avoid the state space explosion problem. A way of doing that is by introducing a manager component that starts and stops the other components, such that only one component is doing real work at any time. This generally keeps the state space substantially smaller than when components can behave in parallel.
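The token-passing pattern the manager uses can be sketched with ordinary threads. This is my own minimal illustration, not code from the lecture's models: each worker's "real work" is just a placeholder, and the names are hypothetical.

```python
import threading

results = []

def worker(i):
    # Placeholder for the real work of worker i.
    results.append(i)

def manager(num_workers):
    # Sequentializing manager: start one worker and wait until it is
    # ready (join) before handing the token to the next worker, so
    # only one worker is ever doing real work at a time.
    for i in range(num_workers):
        t = threading.Thread(target=worker, args=(i,))
        t.start()
        t.join()

manager(3)
print(results)  # [0, 1, 2] — the workers ran strictly one after another
```

Because each `join` completes before the next `start`, the interleavings that would blow up the state space in the fully parallel version simply cannot occur.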
In the next lecture we will look at a modular approach to modeling and the specification of interfaces. These techniques can help us to model really big systems. [MUSIC]