Welcome to the module summary of module four of course three. Jimmy, what's happening in the story? Well, after Nezha and Zhuge Liang, Wukong had to fix another problem that was caused by himself. This time, he has to borrow the magical fan from the Princess. He intends to use it to put out the fire on the Flaming Mountain so that his master can get past the mountain. Well, when I say borrow, actually Wukong had to pick several fights with the Princess. Eventually, he managed to get the magical fan, and then he was able to put out all the spot fires on the Flaming Mountain. Now on the technical side, I realize that we are teaching Wukong an entirely new way of searching to solve the problems that he encountered. That's right. We're now looking at an entirely different sort of solving technology for discrete optimization problems, and that's local search. So local search is all about having a current state that we are in now and moving around, so we have this concept of moves: I'm in a state now and I move to another state. I have a neighborhood, which is the set of places I could move to from the current state. And then we looked at basic greedy local search, where I look around my neighbors and just move to any one which is better, or I can do steepest descent local search, where I look at all of the states in my neighborhood and move to the one which gives me the best possible value. Well, the kind of local search algorithms that you're talking about is very different from the complete search strategies that we have been talking about, right? Absolutely. A local search algorithm is never really going to prove that it has found the best possible solution. It's only going to find you as good a solution as it can with the resources you give it. So in local search, we are essentially jumping from one point to another in the search space, whereas in complete search we are kind of growing a partial valuation.
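The greedy and steepest descent moves described above can be sketched in a few lines of Python. This is a minimal illustration on a toy bit-vector problem, not code from the course: the neighborhood is "flip one bit", greedy takes the first improving neighbor, and steepest descent scans the whole neighborhood for the best one.

```python
def neighbours(state):
    """All states reachable by one move: flip a single bit."""
    for i in range(len(state)):
        yield state[:i] + [1 - state[i]] + state[i + 1:]

def greedy_step(state, cost):
    """Move to the first neighbour that improves the cost, if any."""
    c = cost(state)
    for n in neighbours(state):
        if cost(n) < c:
            return n
    return None  # local minimum: no improving neighbour

def steepest_step(state, cost):
    """Examine the whole neighbourhood and move to the best neighbour."""
    best = min(neighbours(state), key=cost)
    return best if cost(best) < cost(state) else None

def local_search(state, cost, step):
    """Keep moving until we reach a state with no better neighbour."""
    while True:
        nxt = step(state, cost)
        if nxt is None:
            return state  # stuck in a local minimum
        state = nxt

# Toy objective: minimise the number of 1-bits (global minimum: all zeros).
start = [1, 0, 1, 1, 0, 1]
print(local_search(start, sum, steepest_step))  # -> [0, 0, 0, 0, 0, 0]
```

On this convex toy objective both strategies reach the global minimum; the difference only matters on rugged landscapes, where steepest descent pays for scanning the whole neighborhood at every step.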
Yes, we're building out towards finding a solution that can be delivered. OK, quite a different way to go about search, and local search really has two main problems we have to think about. Yeah: the constraints, so when do we satisfy those constraints, and then the local minima, so what should I do when I'm stuck in a place where all of my neighbors are worse? So how do we deal with constraints? There are two ways of dealing with constraints. We can deal with them implicitly: if they are simple enough, we can make sure that every move we do maintains those constraints. OK, is it possible to deal with constraints like this all the time? No, because often the constraints will be too complicated and we won't be able to make sure that every move maintains them, and then what we're basically going to do is convert those constraints into penalties. So we're going to say: we'll allow you to violate the constraints, but you're going to pay a penalty, and eventually, to get the best solution, you won't want to pay any penalty, so eventually you'll satisfy the constraints. So a big problem with using this penalty approach is to decide how big a penalty to charge for each of the constraints. Absolutely. Tuning how much penalty each constraint needs is a difficult problem, but we'll look at methods to do that. The other thing we'll do first is look at three different ways of escaping local minima. Now, my understanding is that every successful local search algorithm must have an effective way of escaping from local minima. Yes, at least one. So the simplest one, of course, is just restart: we're just going to start the search from the beginning and hope we don't end up in the same spot. So this is stochastic in nature. Exactly, right. Almost all local searches are random in nature. OK, the second possibility is simulated annealing, which is basically a way of saying sometimes we're allowed to jump up.
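The penalty conversion described above can be sketched as follows. This is an illustrative example with made-up names and weights, not the course's notation: each constraint is written as a function returning its amount of violation (zero when satisfied), and the search then minimises the objective plus the weighted violations.

```python
def penalised_cost(objective, constraints, weights):
    """Build a cost function: the objective plus a weighted penalty for
    each violated constraint.  constraints[i](x) returns the amount of
    violation of constraint i at point x (0 when satisfied)."""
    def cost(x):
        return objective(x) + sum(w * c(x) for w, c in zip(weights, constraints))
    return cost

# Hypothetical problem: minimise x + y subject to x + y >= 5.
objective = lambda p: p[0] + p[1]
violation = lambda p: max(0, 5 - (p[0] + p[1]))  # how far below 5 we are
cost = penalised_cost(objective, [violation], [10.0])

print(cost((2, 3)))  # feasible point: 5 + 10 * 0 = 5.0
print(cost((1, 1)))  # infeasible point: 2 + 10 * 3 = 32.0
```

With a weight of 10, any infeasible point costs more than the feasible optimum, so a local search over this penalised landscape is pushed towards satisfying the constraint; picking those weights well is exactly the tuning problem mentioned above.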
And so that means that if we're stuck in a local minimum, when we're allowed to jump up we can move to a different place and then hopefully find a better solution. But the probability of this so-called "sometimes" decreases gradually. Absolutely. As we go down through the search, we want to have less and less chance of jumping up, because we want to be more sure that we end up in a good local minimum, as close as possible to the global minimum. And the last way of escaping from local minima is actually to avoid going back into local minima in the first place, by using a tabu list. Yes, or you can think about it as filling up the local minimum: once I've been in a local minimum, I won't be able to go back into it, so I keep moving around, and hopefully that gives me enough freedom to push me out of local minima and find somewhere else. And it's particularly useful if we have these sort of plateaus where lots and lots of solutions have equal value; we need to be able to move around them without just going back and forth. And the tabu list can really prevent that cycling. Exactly. Now we can go back to talking about constraints. Exactly. We said already that it was difficult to find the right penalty weight for each of those constraints. So Lagrangian multipliers will give us a kind of principled way of working out the right penalty for each constraint. So the idea of this method is that when we have an objective function to optimize, we do not just optimize that objective function; we combine it with the penalties associated with each of the constraints in the problem as well. So we have a new landscape, a new objective function to optimize. Exactly, and the key thing is that when we find a constraint is difficult to satisfy, its multiplier, or its penalty, will get higher and higher, which will change the landscape to make it more and more unattractive to not satisfy that constraint. And so we'll end up being forced to satisfy the constraints.
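The simulated annealing idea from the discussion above, with its gradually decreasing chance of jumping up, can be sketched like this. This is a minimal illustration, not the course's implementation: improving moves are always accepted, worsening moves are accepted with probability exp(-delta / temp), and the temperature decays geometrically so uphill jumps become rarer over time.

```python
import math
import random

def simulated_annealing(state, cost, random_neighbour,
                        temp=10.0, cooling=0.95, steps=1000):
    """Minimise cost(state) by accepting every improving move and
    accepting a worsening move with probability exp(-delta / temp)."""
    best = state
    for _ in range(steps):
        cand = random_neighbour(state)
        delta = cost(cand) - cost(state)
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            state = cand
            if cost(state) < cost(best):
                best = state
        temp *= cooling  # cool down: less and less chance of jumping up
    return best

# Toy run: minimise the number of 1-bits by flipping one random bit per move.
def flip_one(s):
    i = random.randrange(len(s))
    return s[:i] + [1 - s[i]] + s[i + 1:]

random.seed(0)
print(simulated_annealing([1] * 8, sum, flip_one))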
So in some sense, we do not have to set the penalties in advance, but we learn the importance of each of the constraints gradually during the search. Absolutely. Then the very last thing is sort of a complete jump back to things we've seen before. So now we look at large neighborhood search. We're still doing a local search, but now we're going to look at a huge number of possibilities in the neighborhood, and that means it's a problem like the ones we've already looked at before. Previously we were looking at small neighborhoods, so that we could afford to examine each of the neighboring points one by one, look at the objective value, and then select the best one, right? Exactly. But when we have a large neighborhood, there are too many such points; we cannot afford to do that. So how can we find the best neighbor? Well, that is a discrete optimization problem, which is what this course is all about. So we can use all of those complete methods we've already looked at for solving this large neighborhood search problem, and in fact that combination of using a complete method on a large neighborhood is a very, very powerful approach to using complete methods, even for problems at a larger scale. Exactly. And we can also see the power of such a method when it is combined with restarts as well. Absolutely. Restarts in large neighborhood search give us a very, very powerful way of tackling all kinds of problems. How about our final workshop? So our final workshop. Well, you can tell us about the story, Jimmy. It's about Wukong as well. You know he has to fend off the flames from the Flaming Mountain, but there is a village nearby. So he had to ask for help from Nezha, to send his magical helpers to go and inform the villagers, so that they will be prepared to fight off possible fires that fly over to the villages. But it turns out that this becomes a very interesting, complex and difficult routing problem. Absolutely.
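The large neighborhood search idea described above, combined with restarts, can be sketched as a destroy-and-repair loop. This is a toy illustration, not the course's solver: a few variables are "destroyed" (freed), and the repair step exhaustively enumerates all assignments to them, standing in for the complete method that would solve the large neighborhood subproblem in practice.

```python
import itertools
import random

def lns(state, cost, destroy_size=3, restarts=5, steps=50):
    """Large neighbourhood search sketch over 0/1 variables: repeatedly
    free a few variables ("destroy") and re-optimise them by complete
    enumeration ("repair"), wrapped in random restarts."""
    best = state
    for _ in range(restarts):
        cur = [random.randint(0, 1) for _ in state]  # restart from a random point
        for _ in range(steps):
            free = random.sample(range(len(cur)), destroy_size)
            # Complete search over all assignments to the freed variables.
            for vals in itertools.product([0, 1], repeat=destroy_size):
                cand = list(cur)
                for i, v in zip(free, vals):
                    cand[i] = v
                if cost(cand) < cost(cur):
                    cur = cand
            if cost(cur) < cost(best):
                best = cur
    return best

# Toy run: minimise the number of 1-bits over 10 variables.
random.seed(1)
print(lns([1] * 10, sum))
```

Because each repair step finds the true optimum over the freed variables, every move over the large neighborhood is as good as possible; in real routing problems the enumeration would be replaced by one of the complete solvers from earlier in the course.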
And routing problems are one of the most common uses of large neighborhood search, and one of the most powerful. So you're going to be building an interesting routing problem, and then we want you to explore how best to search it, making use of restarts, large neighborhood search, and all of the tricks that we've seen throughout this course. For a change, our learners will have to build a model again. Exactly. Yeah, well, it's the last workshop; we have to make you work. And that brings us to the end. Not the end yet. I realized that today you're wearing a different T-shirt, with a character that has never appeared in one of our modules. So what can we tell our viewers about this character? Well, they'll see.