Welcome back. This is discrete optimization again, the knapsack problem. And as you can see, I have a new hat: the branch and bound hat. This is something really useful, and it is going to be used over and over again in this particular class. We're going to introduce branch and bound, and also the value of relaxation. Two very important concepts in one lecture, but the lecture will still be short, you'll see. Okay, so this is the one-dimensional knapsack. We have seen it many times, and you're probably starting to get fed up with this particular knapsack, but you're going to see something beautiful today. Now, remember when we talked about binary programming, we talked about exhaustive search. In a sense we have these decision variables on top, and what we're trying to find out is the values that are possible for these decision variables. What I showed you during the dynamic programming lecture is that there are exponentially many of these configurations in terms of the number of items. Okay? So here is a way to actually build these configurations. You take the first item, and you have two decisions: whether you take the item, or whether you don't take the item. Now you have these two possibilities, and for each of these two possibilities you look at the second item, and you can decide to take the second item or not to take it. Once again, what you get now are four different configurations. And you can do the same with the third item, and you get the final eight configurations. Every one of the configurations tells you whether you chose item one or not, whether you chose item two or not, and item three or not. Obviously, if I had four items we would have 16 of them, with 5 items, 32, and this grows very large very quickly.
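As a quick sketch, the exhaustive tree just described can be enumerated directly. The values, weights, and capacity below are the ones from this lecture's running example; everything else is just a straightforward enumeration.

```python
from itertools import product

# Running example from the lecture: three items and a knapsack of capacity 10.
values = [45, 48, 35]
weights = [5, 8, 3]
capacity = 10

best_value, best_config = 0, None
# product([0, 1], repeat=3) generates the 2^3 = 8 leaves of the decision
# tree: each entry of the tuple says whether the corresponding item is taken.
for config in product([0, 1], repeat=len(values)):
    weight = sum(w * x for w, x in zip(weights, config))
    value = sum(v * x for v, x in zip(values, config))
    if weight <= capacity and value > best_value:
        best_value, best_config = value, config

print(best_value, best_config)  # 80 (1, 0, 1): take items one and three
```

This visits all eight configurations; the point of branch and bound is to find the same optimum, 80, while looking at far fewer nodes.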
What branch and bound is about is looking at this tree and trying to explore only a tiny part of it, and finding the optimal solution nevertheless, okay? So that's what we're going to do, and we're going to show you how to do that. The key idea is to iterate two steps: branching and bounding. Branching is boring, right? It's like the exhaustive search, except that later on I will tell you that there are smart ways to do this. But at this point it's completely boring: I take an item, and I decide whether I take the item or not. That's what branching is going to be about, just like in the exhaustive search. But then bounding. Bounding is very different. It's finding an optimistic evaluation of what you can do. So in optimization you have to be optimistic, in life as well, right? So we want you to be optimistic. If you are maximizing: what is the best that I could ever do? Or if you are minimizing: how low can my cost be? That's the kind of optimistic evaluation that we need for bounding, and I'm going to show you how we can get it. So how do we find these things? We are going to basically take the problem and relax it. Optimization is the art of relaxation. So imagine yourself in this [UNKNOWN], and you are completely zen, and you find out the best way you can actually relax these problems, make them easy to solve, and get an optimistic evaluation. So once again, this is the knapsack, and you ask yourself: how can I relax this formulation? There are not that many things in this formulation, right? So you may say, oh, what I could relax is this constraint: we're going to relax the capacity constraint. And as soon as we do that, we can get an optimistic evaluation of this problem, okay?
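To make that capacity relaxation concrete, here is a minimal sketch of the optimistic evaluation it gives. If the capacity constraint is dropped, the best you could ever do is the value accumulated so far plus the value of every item you have not decided on yet. The function name `naive_bound` is mine, not the lecture's.

```python
def naive_bound(accumulated_value, undecided_values):
    """Optimistic evaluation obtained by relaxing the capacity
    constraint: pretend every still-undecided item fits."""
    return accumulated_value + sum(undecided_values)

# With the lecture's items (values 45, 48, 35), nothing is decided
# at the root, so the bound is 45 + 48 + 35:
print(naive_bound(0, [45, 48, 35]))   # 128
# After taking item 1 (value 45) and being forced to reject item 2:
print(naive_bound(45, [35]))          # 80
```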
So, what you're going to see in the tree that I'm going to build are nodes with three entries, okay? They have the value that I have accumulated inside the knapsack. They have the space left in the knapsack. And then they have this optimistic evaluation. So initially I have accumulated no value, I have the full space in my knapsack, 10 in this particular case, and I have this optimistic evaluation, which is: oh, I can select everything. In this particular case that is 128. And now I start doing things, right? I select the first item. As soon as I do that, I have accumulated the value of the first item, 45, I take 5 units of space in the knapsack, and my optimistic evaluation is still 128. Then I try to select item 2, but as you can see, item 2 has a size of 8, so I'm exceeding the capacity of the knapsack. That's not a feasible solution, so I can't take item 2, and at that point my optimistic evaluation is reduced to 80. Now I can decide to take item three or not. If I take item three, I get a value of 80; this is items one and three. And if I don't take it, I get a value of 45, which is completely dominated by the previous solution I found. So I know that this is not an optimal solution. Now I go back to the next part of the tree. I decide not to select item 1, so my optimistic evaluation at this point is 83. That's still better than 80, the best solution that I've found, so I keep going. I select item 2; if I select item 2 I'm still at 83. Then I decide whether I select item 3 or not. In the first case I get an infeasible solution. Otherwise I get the value 48, which is also dominated by the best solution that I have found; that's why I put this cross over there. Otherwise, if I decide not to take item two, what I get is a configuration there, okay?
Its optimistic evaluation is 35. Now, that's terrible, right? It's worse than the best solution that I have found so far. So I can decide, without exploring anything further, that this is not valuable. I don't have to actually branch on item 3: I already know that the best I could do is worse than the best solution that I have. And this is the key idea behind bounding. Once you have this optimistic evaluation, if it's worse than the best solution you have found, you know that you don't have to explore anything below that node in the tree. Now, this example is terribly bad, right? Because my optimistic evaluation is so optimistic that most of the time I'm going to basically explore the entire tree. You saw that sometimes we get a little bit of pruning, but what we have to do is find a better relaxation, so that we can prune a bigger part of this tree. So look again at this knapsack example and think: what else can I relax? I'll let you think for about ten minutes, I guess, and then I'll tell you. Can we relax something else? You're going to see a beautiful slide. We are going to assume that instead of packing artifacts, we are packing bars of Belgian chocolate. Now, this is completely different, right? Because when you have Belgian chocolate, well, let me give you a picture. This is so beautiful I'm already hungry here. And here I can tell you that Indiana Jones loved Belgian chocolate; that's the best chocolate in the world, by far. So this is a bar of chocolate. Now what Indiana Jones can say is: mm, I can't put the whole thing in my knapsack, but I can break it into pieces and put those in my knapsack, right? So this is completely different. It's not: I take it, or I don't take it. I can take a piece of it. And so now, assume that my whole knapsack is built of artifacts that I can break.
Like Belgian chocolate, okay? So then I get this beautiful formulation, right? The only thing that I changed is the decision variables. They don't have to be zero-one, I take it or I don't take it, black and white, yin and yang. What I'm doing here is saying: ooh, but I can take a fractional value of these particular decision variables. I can take a quarter of it, or I can take half of it, or something like this, right? And so I get this optimization problem here, which is almost the same as before, except that the decision variables can now take any fraction between zero and one. And now you can ask: oh, but is that problem easy to solve? Well, this problem is called a linear relaxation. You're going to see that in this class all the time, especially when we start looking at mathematical programming, linear programming, mixed integer programming. So this is a very, very important concept: the linear relaxation of the knapsack. And it has a very important property: in practice, it's easy to compute. I'm going to show you a very easy way for the knapsack; in general it's a little bit more complicated, and we'll cover that as well. So the only thing that the linear relaxation is doing is taking every variable that has to be an integer and relaxing it to a fraction. That's what the linear relaxation is doing: it's relaxing the integrality constraints that you have in most of these models. So now, how can we solve this? I'm going to show you a very simple way to do it, and it's going to be very close to some of the greedy algorithms that we have seen. We are going to order the items by value per kilo. This is the v_i over w_i ratio that I've shown you before, okay.
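Written out, using values v_i, weights w_i, and capacity K as elsewhere in the course, the linear relaxation just described is:

```latex
\begin{aligned}
\text{maximize}   \quad & \sum_{i=1}^{n} v_i x_i \\
\text{subject to} \quad & \sum_{i=1}^{n} w_i x_i \le K \\
                        & 0 \le x_i \le 1 \qquad (i = 1,\dots,n)
\end{aligned}
```

The only change from the integer knapsack is the last line: the constraint x_i in {0, 1} has been relaxed to 0 <= x_i <= 1.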
So basically, the item with the largest ratio is the item that is the most valuable per kilo, and when you look at chocolate, that's what you want, right, because you can start breaking it up, okay? So essentially you order the items by this ratio, and you select the items, in that order, until the capacity is exhausted, that is, until putting in another item would go over the capacity. Once you are there, you take the next item and select just the fraction of it that fills the knapsack. So in a sense: you order these items as in the greedy algorithm, you accumulate them, and if the next one would go over capacity, you take exactly the fraction of it that fills the knapsack. And this is an optimal solution of the linear relaxation. Okay, so look at this particular example. You see item one, item two, item three. The first item is 45 over 5, which gives a value of 9 per kilo for that particular item. The second one is 48 over 8, which gives a value of 6. And the third one? It's 35 over 3, which gives a value of around 11.7. So item number three is actually, per kilo, the most valuable item, then item one, then item two. So what you do here is select item three, then select item one, which leaves you two units of capacity in the knapsack. And then you essentially take a quarter of item two, because the weight of item two is eight, okay? So you take a quarter of the value of that guy, which is going to give you 12, and the overall value of the linear relaxation is going to be ninety-two. So ninety-two, in this particular case, is an optimistic evaluation of the knapsack. You know for sure that you will never do better than 92. Okay?
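The greedy rule just described is short enough to sketch in full. This follows the lecture's algorithm on the lecture's data; the function name `fractional_knapsack` is mine.

```python
def fractional_knapsack(values, weights, capacity):
    """Optimum of the linear relaxation: take items in decreasing
    value-per-kilo order, then a fraction of the first one that
    no longer fits, so that the knapsack is exactly full."""
    # Sort item indices by value density v_i / w_i, best first.
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    total, room = 0.0, capacity
    for i in order:
        if weights[i] <= room:       # the item fits entirely
            total += values[i]
            room -= weights[i]
        else:                        # take the fraction that fills the knapsack
            total += values[i] * room / weights[i]
            break
    return total

# Lecture's example: densities 11.7, 9, 6, so item three and item one
# go in whole, then a quarter of item two fills the last two units.
print(fractional_knapsack([45, 48, 35], [5, 8, 3], 10))  # 92.0
```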
So, this is much better than the relaxation that we had before, right, where we would basically sum everything and get 128. Here we get a value which is actually pretty good in this particular case. So essentially the linear relaxation is giving you a very good bound for this particular knapsack problem. Okay? Now, why is this correct? Here is a simple reformulation of this linear relaxation. You introduce a new variable y_i, which is basically the product of v_i and x_i, and then you reformulate the knapsack. What you see when you reformulate this knapsack is that all the items have the same value: they have value 1. But what is interesting in this reformulation is not only that the values are all the same; the weight of each item is now basically the inverse of the density I was talking about. The most valuable items have the least weight in this reformulation. So how do you solve it? Well, it's like the greedy algorithm that we have seen before: you take the smallest items first, and when you get stuck, you take a fractional value of the last one. That's why, in this particular case, the algorithm that I've shown you, select the items until you get stuck and then take a fraction, is actually computing the linear relaxation. But of course, in practice, the linear relaxation is not a solution to the problem, right? Because we have this mask and there is no way we can break it, and even if we could break it, we would not want to break it. So essentially we can't take a fractional part of this mask; we can't decide to just take the nose or whatever, right? So we have to use this linear relaxation inside the branch and bound algorithm. And that's what we're going to do, okay?
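For reference, the reformulation argued above can be written out. Substituting y_i = v_i x_i, so that 0 <= y_i <= v_i, into the linear relaxation gives:

```latex
\begin{aligned}
\text{maximize}   \quad & \sum_{i=1}^{n} y_i \\
\text{subject to} \quad & \sum_{i=1}^{n} \frac{w_i}{v_i}\, y_i \le K \\
                        & 0 \le y_i \le v_i \qquad (i = 1,\dots,n)
\end{aligned}
```

Every unit of y_i is now worth exactly 1, and its weight w_i / v_i is the inverse of the value density; so taking the lightest units first in this reformulation is exactly taking the densest items first in the original one.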
So we start again at the root; there is nowhere else to start. But now we know that the optimistic evaluation here is 92, okay? We go on the left: we take item one. Same evaluation, 92, and less space in the knapsack. We try to take item two; this is infeasible. This is essentially the same as before: we get the solution with value 80, and the other one, with 45, is dominated. But now you're going to see something really, really cute, okay? You go on the right: you don't select item 1. And what you see there now is that the optimistic evaluation is 77. Oh! 77 is worse than the best solution that I have found so far, 80. So you know that the entire tree here can be thrown away. You don't have to look at it at all, right? You are at 77 here, and you know that any solution, any configuration, below that node is guaranteed to be worse than the best solution that you have found so far. So you can stop there, and you know that you have an optimal solution. And that's the value of a good relaxation. Okay? If you relax, and you do a good job of relaxing, the computing job that you will have to do, the search space that you will explore, is going to be much smaller. And that's the value of relaxation.
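Putting the two pieces together, here is a minimal sketch of the whole branch and bound, using the linear relaxation as the optimistic evaluation. The function names are mine, the data is the lecture's three-item example, and the branching order is the naive one used in the lecture, item 1 first.

```python
def linear_bound(values, weights, room, value, start):
    """Optimistic evaluation at a node: accumulated value plus the
    fractional-knapsack optimum over the still-undecided items."""
    order = sorted(range(start, len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    estimate = value
    for i in order:
        if weights[i] <= room:
            estimate += values[i]
            room -= weights[i]
        else:  # fill the remaining room with a fraction of item i
            estimate += values[i] * room / weights[i]
            break
    return estimate

def branch_and_bound(values, weights, capacity):
    best, nodes = 0, 0
    def explore(i, value, room):
        nonlocal best, nodes
        nodes += 1
        best = max(best, value)
        if i == len(values):
            return
        # Prune: even the optimistic estimate cannot beat the best so far.
        if linear_bound(values, weights, room, value, i) <= best:
            return
        if weights[i] <= room:               # branch: take item i (if it fits)...
            explore(i + 1, value + values[i], room - weights[i])
        explore(i + 1, value, room)          # ...or leave it
    explore(0, 0, capacity)
    return best, nodes

best, nodes = branch_and_bound([45, 48, 35], [5, 8, 3], 10)
print(best, nodes)  # 80 6
```

With this bound, the whole right subtree of the root, where item 1 is rejected and the optimistic evaluation is 77, is pruned as soon as the solution with value 80 is known: only 6 nodes are visited instead of the full tree.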