[MUSIC] Nice to see you again. In this lecture, I would like to introduce the notion of guidelines for modeling, which can help us to keep the state space explosion away as long as possible. These days, we can deal with explicit state spaces of approximately 10 to the power 9 states. You need a sizable computer for that, but it can be done. But 10 to the power 9 states, a billion states, is not that big. If we take this transition system of 10 states and put nine of these transition systems in parallel, that already gives a state space of 10 to the power 9 states. So here you see the limit of what the tools can do, and if the state space becomes bigger, explicit state space exploration fails to deal with it. Fortunately, there is in general a lot of communication between the different components, and that restricts the size of the state space, so you can use more complex systems in larger numbers than this before you reach the limit. But it shows how easily the limit can be reached. If we have data, it is even easier to reach this limit of 10 to the power 9 states, and I think this is quite a shocking and illustrative example. What we have here are simply four bytes. A byte consists of 8 bits, and each bit can be either 0 or 1, so four bytes already give 2 to the power 32, roughly 4 times 10 to the power 9, states. Four bytes are enough to lead to such an explosion, and yield a state space that is actually too big. So the question is: what can we do to avoid the state space explosion? There are different kinds of solutions. The first solution is that we make a model, translate it to a linear process, and try to make that smaller. We can use tools like lpsparelm, lpsconstelm and lpsparunfold to reduce the state space before actually generating an explicit transition system.
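The numbers above are easy to check with a few lines of Python, followed by a toy illustration (two hypothetical two-state components, not from the lecture) of how synchronized communication keeps most of the potential product states unreachable:

```python
# Counting states: interleaved parallel composition multiplies component sizes.
def parallel_states(states_per_component: int, components: int) -> int:
    return states_per_component ** components

print(parallel_states(10, 9))  # 1000000000: nine 10-state systems in parallel
print(2 ** (8 * 4))            # 4294967296: four bytes, about 4 * 10**9 states

# Communication restricts the state space: two toy components that must
# perform "open" and "close" together. Of the 4 possible product states,
# only 2 are reachable.
trans_a = {("a0", "open"): "a1", ("a1", "close"): "a0"}
trans_b = {("b0", "open"): "b1", ("b1", "close"): "b0"}

def reachable(start):
    seen, frontier = {start}, [start]
    while frontier:
        sa, sb = frontier.pop()
        for act in ("open", "close"):
            na, nb = trans_a.get((sa, act)), trans_b.get((sb, act))
            if na and nb and (na, nb) not in seen:  # both must participate
                seen.add((na, nb))
                frontier.append((na, nb))
    return seen

print(len(reachable(("a0", "b0"))))  # 2 of the 4 product states
```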
Even more effective are techniques like tau-confluence, a form of partial order reduction, which I will not explain here. When it is applicable, it can sometimes reduce the state space exponentially. If you apply it on a linear process and then generate the state space, you can sometimes even put many thousands of processes in parallel and still generate the full state space after the tau-confluence reduction. What you can also do is abstract from data. There are quite a number of techniques for that; abstract interpretation is one of them. But you can also do it explicitly, by taking data domains with a large number of values and reducing them to a small number of values. So that is the first kind of technique. The second technique is to use improved verification methods, and if you look into the literature, almost all of it is about how we can make better algorithms to deal with larger state spaces. The most important family of algorithms here are the so-called symbolic methods. Symbolic methods essentially store state spaces in a compressed way in memory, such that the analysis can still be done. This works when your state space does not have too complex a structure; if it has a complex structure, then explicit state space methods are often more effective. If you have a simple structure with a large state space, then I advise you to look at the LTSmin toolset that has been developed at the University of Twente in the group of Jaco van de Pol. This is Jaco van de Pol. These tools can read specifications from various modelling tools, and can analyze huge specifications with 10 to the power 15 or even larger numbers of states in quite a short time, if you are lucky. And there is another approach, which I personally favor, and that is called modeling for verification. So, what is that?
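The explicit form of data abstraction mentioned above can be pictured with a minimal Python sketch (the counter variable and the zero-versus-positive property are made-up examples): a large concrete data domain collapses to a tiny abstract one when only a single property of the data matters for the verification.

```python
# Abstract a large concrete domain to the few values that matter for the
# property being checked (here: "is the counter zero?").
def abstract(counter: int) -> str:
    return "zero" if counter == 0 else "positive"

concrete_domain = range(2 ** 16)  # 65536 concrete values
abstract_domain = {abstract(c) for c in concrete_domain}
print(sorted(abstract_domain))  # ['positive', 'zero']: just 2 abstract values
```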
Modeling for verification means that we specify in such a way that the state space remains so small that we can still verify it. There are a number of guidelines that can help you to keep your state space small, and these guidelines come from the article at the bottom of the page. The first guideline, and they all look quite shallow but are quite effective, is information polling. The idea is that one should avoid storing redundant information in a model, and only get the information when it is actually needed: one should poll information at those places where the information is available, when it is immediately needed to come to some form of conclusion. I will go into this in the next lecture. You can also work with global synchronous communication. If you have a large number of parallel components, they can communicate with each other in an asynchronous way, but you can also synchronize the communication, such that the individual components go from state to state in a more synchronous fashion; this also reduces the state space. We saw that parallelism was one of the causes of the state space explosion. So what one can do is sequentialize the behavior of the parallel components: first let one component do its work, then stop that component, and let another component do its work. This also has an enormous effect on the size of the state space, which becomes much smaller. And for many applications it is acceptable to avoid parallelism in this way, because it is not really needed. One can also design a system such that it is confluent, and then obtain the remarkable exponential reduction mentioned earlier, but I will not go into this. One can restrict the use of data; I will spend a lecture on it. And one can do compositional design and reduction.
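The effect of the sequentialization guideline can be seen with back-of-the-envelope arithmetic (the component counts below are illustrative, not from the lecture): fully interleaved components multiply, while strictly sequentialized components only add, roughly speaking.

```python
# Full interleaving: every combination of local states can occur.
def interleaved(states_per_component: int, components: int) -> int:
    return states_per_component ** components

# Strict sequentialization: one component runs at a time while the others
# sit in a fixed state, so the sizes roughly add instead of multiply.
def sequentialized(states_per_component: int, components: int) -> int:
    return components * states_per_component

print(interleaved(10, 6))     # 1000000
print(sequentialized(10, 6))  # 60
```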
So instead of designing a large system in one go, design the system piece by piece: take a small number of components, reduce them so they have simple behavior, and use that in larger components. This method is even more successful if one forces oneself to first specify the behavior of each of these combinations of subcomponents. Now, I am quite sure that this is not all there is to be uncovered about guidelines. But as it stands, and as far as I am aware, these are the only guidelines currently in existence. When I was speaking about the need for these guidelines, I actually met a lot of skepticism among my colleagues. What they said was the following: one should have the freedom to design every system, so one should be able to make a model of every system, and as long as verification techniques cannot deal with any model, these verification techniques are immature. But I am actually of the belief that this attitude is a sign of the immaturity of our field. If we look at other engineering disciplines, we see that they design relatively simple systems, because they can only verify or determine the properties of these simple systems, and then they make them look complex, because people sometimes like that. Typically, if you look at buildings, most buildings being built these days have a very simple structural design. This, for instance, is the building in which I am standing now. When it was being built, it simply consisted of a row of pillars, an iron cross, if you can see it, and a number of floors. So the essential structure of the building is quite simple. But now that we are living in this building, we see all kinds of details, and the building looks much more complex than its simple structure. Yet if it had not been built around this simple structure, this simple analysis could not have been done.
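The compositional idea can be sketched in a few lines of Python (the component and its internal tau-steps are hypothetical, and the collapse below is only sound here because the internal steps are deterministic and non-branching): reduce a component to its visible behavior before using it as a building block in a larger composition.

```python
# Reduce a component by merging each state into the state its chain of
# internal (tau) steps ends in, keeping only the visible transitions.
def collapse_tau(transitions):
    tau = {s: t for (s, a), t in transitions.items() if a == "tau"}
    def rep(s):
        while s in tau:
            s = tau[s]
        return s
    return {(rep(s), a): rep(t)
            for (s, a), t in transitions.items() if a != "tau"}

# A toy component: two internal steps, then one visible "work" step.
component = {("s0", "tau"): "s1", ("s1", "tau"): "s2",
             ("s2", "work"): "s0"}
reduced = collapse_tau(component)
print(reduced)  # {('s2', 'work'): 's2'}: a single-state loop

# Composing three unreduced copies can reach up to 3**3 = 27 product
# states; three reduced copies give just 1.
```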
There is another discipline where this is even stronger, and that is electrical engineering. Electrical engineers have adapted all their essential components such that they can be dealt with easily in their models. If you look at resistors, capacitors and coils, they are all made such that they behave linearly and can be described using linear differential equations. Engineers can calculate with huge systems of linear differential equations and determine the properties of such systems, but they are largely incapable of analyzing the behavior of systems that cannot be captured in linear differential equations. So nonlinear differential equations are really a problem and cannot be used for large engineering designs. So, what did we do? We looked at the state space explosion problem, and we saw that it is very easy to let the state space explode. We should try to avoid it, or deal with it in one way or another. We can deal with it with, for instance, symbolic methods or data abstraction. But I strongly believe that it is far more effective to be aware of the state space explosion, and to design systems such that the state space explosion does not occur. In the next lecture, I would like to spend a little bit of attention on the polling guideline. Thank you for watching. [MUSIC]