[MUSIC] Hello and welcome to the next module. In this module, we're going to continue studying the dynamic programming technique. We do this by considering the famous knapsack problem, a classical problem in combinatorial optimization. In this problem, we're given a knapsack with a limited capacity, together with a collection of items with known weights and values. Our goal is to select a sub-collection of these items so that their total weight does not exceed the capacity of our bag, while their total value is as large as possible. This problem has hundreds of applications in areas like planning, scheduling, resource allocation, and others. It is important to note that in applications, weights and values may actually denote various things. To give an example, consider the following application. Suppose we're given a time interval and a set of TV commercials. For example, this interval might be just 100 seconds. What we would like to do is to select some set of these TV commercials subject to the following two constraints. First of all, we of course would like to maximize the total revenue from these commercials. On the other hand, we would like the sum of the lengths of these TV commercials to not exceed the length of the available time interval. In another toy application, suppose that we're given a limited budget, and we would like to purchase some set of machines to maximize their total performance. So in this case, our capacity is actually our budget, and the values of the items are the performances of the machines. Okay, so there are two natural variations of the knapsack problem. First of all, the knapsack may be either fractional or discrete. In the fractional version, we assume that we can take any fraction of any item. On the other hand, in the discrete case, each item is either taken or not, okay?
So in turn, for the discrete knapsack problem, there are two variants: knapsack with repetitions and knapsack without repetitions. In the first case, we're allowed to take any number of copies of each item. In the second case, knapsack without repetitions, we're allowed to take only a single copy of each item. As we already know, the fractional knapsack can be solved by a natural greedy algorithm. Namely, we just keep repeating the following step: at each iteration, we take the maximum possible amount of an item whose value per unit of weight is as large as possible. It is not difficult to show that this leads to an optimal solution in the fractional case. But this strategy does not work for the discrete version of the knapsack problem. So instead, we will design together a dynamic programming solution. Before going into the details of the solution, let's consider an example. Consider a bag of total capacity 10, and assume that we have four items with weights 6, 3, 4, and 2, and with values 30, 14, 16, and 9, respectively. Then for the discrete version of the knapsack problem where repetitions are not allowed, the optimal value is equal to 46. To achieve this optimal value, it is enough to take the first item and the third item, okay? On the other hand, if we do allow repetitions, then the optimal value increases to 48. In this case, to achieve this value, we can take, for example, the first item and two copies of the last item. Finally, if we are allowed to take fractions of items, then the optimal value increases again, and in this case, it is equal to 48.5. To achieve this value, we can take the first item, the second item, and one half of the last item. We will now design a dynamic programming solution for the variant of the knapsack problem with repetitions. Recall that in this case, we are allowed to take any number of copies of the available items, okay?
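Returning to the greedy algorithm for the fractional case mentioned above, it might be sketched in Python as follows. This is a minimal illustration, not the course's reference code; the function name and the representation of items as two parallel lists are my own choices.

```python
def fractional_knapsack(W, w, v):
    # Process items in decreasing order of value per unit of weight.
    order = sorted(range(len(w)), key=lambda i: v[i] / w[i], reverse=True)
    total = 0.0
    for i in order:
        if W == 0:
            break
        take = min(w[i], W)  # take as much of item i as still fits
        # For a full item, add its exact value; for a fraction, scale it.
        total += v[i] if take == w[i] else take * v[i] / w[i]
        W -= take
    return total
```

On the example above (capacity 10, weights 6, 3, 4, 2, values 30, 14, 16, 9), this takes the first item, the second item, and half of the last item, returning 48.5, matching the optimal fractional value from the example.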
So to formally state this problem, denote the total capacity of our bag by capital W, the weights of the n items by w0, ..., wn-1, and their values by v0, ..., vn-1. Our goal is, again, to select some subset of these items, possibly with multiple copies of some items, so that their total weight does not exceed capital W, while their total value is as large as possible. As you remember, the most important step in designing a dynamic programming solution is coming up with the right notion of a subproblem. And one way of finding this right notion of a subproblem is to consider the structure of an optimal solution. So consider some optimal solution for our bag, and assume that it contains some item i. A simple but crucial observation is the following: if we take this ith item out of our solution, then what we get is a solution for a knapsack of total capacity W - wi, and it must also be an optimal solution for this particular capacity. Why is that? Well, for a simple reason. If this were not an optimal solution for a bag of capacity W - wi, then the initial solution for capacity W would also not be optimal, right? So this allows us to define the following subproblem. For a parameter u, let's define value(u) to be the optimal value for a bag of total capacity u, using the same items of course. This allows us to write down the following recurrence relation: value(u) is equal to the maximum, over all items i such that wi is not greater than u, of the quantity value(u - wi) + vi. This recurrence relation just reflects the fact that in order to get an optimal solution for the bag of capacity u, we can take some optimal solution for a smaller capacity, namely u - wi, and add the ith item to it. This gives us a solution for capacity u, and adds vi to the total value of the smaller solution, right? When there are no such items, the maximum is taken over an empty set, and we assume that it is equal to 0.
Another base case corresponds to u equal to 0, and in this case, value(0) is of course also equal to 0, because we cannot put any items into a bag of capacity 0. So when we have this recurrence relation, as usual it is just a technicality to transform it into a recursive algorithm. And this is what we're going to do now, and this is what you see on the slide. So what we see here on this slide is a recursive algorithm that we get out of the recurrence relation that we've just discovered. As usual, we use a table T to memoize all the intermediate results, namely, the solutions to the subproblems for bags of capacity u, where u ranges from 0 to capital W. Okay, so the knapsack function takes three parameters, w, v, and u, where w and v are just lists of all the weights and values of our items, and u is the current capacity of the knapsack. We only start computing value(u) if it is not yet stored in the table. In that case, we do the following. We first initialize it with 0. Then we go through all n items, and for any item i whose weight does not exceed u, we try to improve our current T[u] using item i. Namely, we consider the solution for the subproblem u - wi, and we try to add the ith item to this solution. If this gives a value which is higher than T[u], then we improve T[u], okay? Then, as you remember, it is usually possible to transform a recursive algorithm into an iterative one. In this case, it is particularly easy, since what we need to do is to solve all subproblems for u ranging from 0 to capital W. We can do this in a straightforward fashion by gradually increasing u, and we will use just an array for storing all the values of value(u). So what you see here is the corresponding iterative implementation of the same algorithm. Here, instead of using a dictionary, we use just an array T of size capital W + 1.
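The two versions discussed above, the recursive one with a memoization dictionary and the iterative one with an array, might be sketched in Python as follows. The function names are my own; the slides' exact code is not shown in this transcript.

```python
def knapsack_recursive(w, v, u, T=None):
    # T is a dictionary memoizing value(u) for each capacity u seen so far.
    if T is None:
        T = {}
    if u not in T:
        T[u] = 0
        for i in range(len(w)):
            # Try adding item i to an optimal solution for capacity u - w[i].
            if w[i] <= u:
                T[u] = max(T[u], knapsack_recursive(w, v, u - w[i], T) + v[i])
    return T[u]


def knapsack_iterative(W, w, v):
    # T[u] = optimal value for a knapsack of capacity u; T[0] = 0 is the base case.
    T = [0] * (W + 1)
    for u in range(1, W + 1):
        for i in range(len(w)):
            if w[i] <= u:
                T[u] = max(T[u], T[u - w[i]] + v[i])
    return T[W]
```

Both run in O(nW) time; on the running example (capacity 10, weights 6, 3, 4, 2, values 30, 14, 16, 9) they return 48, the optimal value with repetitions.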
Then for u ranging from 1 to capital W, we compute the value T[u] using exactly the same recurrence relation, okay? So before proceeding, let's consider an example. Let's review the example that we've seen previously, and assume that we've already computed all the values in our table except for the last one. Namely, what we're computing right now is the optimal value for the knapsack of total capacity 10. What we see here is that there are actually four possibilities: either we add the first item to the solution of capacity 4, or we add the second item to the solution of capacity 7, or we add the third item to the solution of capacity 6, or, finally, we add the last item to the solution of capacity 8. For all these capacities, we already know their optimal values, so we just compute the maximum of four values. And it turns out that in this case, the maximum value is equal to 48, which convinces us that this is indeed the optimal value for this case. Okay, now let's conclude the video by revisiting the subproblems. As you remember, another way of defining the right subproblems is to design a brute force solution and then to somehow optimize it. The brute force solution in this case is not so difficult to implement. Namely, since we need to find some optimal collection of items, let's just populate the current set of items one by one. So what you see here is actually an implementation of this approach. This recursive procedure uses the parameter items, which stores exactly the items that we're going to put into our knapsack. We start with an empty sequence of items, and then we recursively add the current item to the list items and make a recursive call, right? And we only add an item if adding it does not exceed the total capacity, okay? So this gives us just a brute force solution: essentially, we go through all possible collections of items.
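The brute-force approach described above might be sketched as follows. This is an illustrative reconstruction, not the slide's code; the function and parameter names are my own. It tracks the full list of chosen item indices, which is exactly the redundancy the dynamic programming solution removes.

```python
def knapsack_brute_force(W, w, v, items):
    # items is the list of item indices chosen so far (repetitions allowed).
    weight = sum(w[i] for i in items)
    best = sum(v[i] for i in items)  # value of the current selection
    for i in range(len(w)):
        # Only add item i if it still fits within the total capacity.
        if weight + w[i] <= W:
            best = max(best, knapsack_brute_force(W, w, v, items + [i]))
    return best
```

Calling it with an empty starting list, for example knapsack_brute_force(10, [6, 3, 4, 2], [30, 14, 16, 9], []), returns 48, agreeing with the dynamic programming solution, but its running time is exponential since it enumerates every feasible collection of items.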
So then finally, to get a dynamic programming solution out of this brute force solution, it remains to note that in this case we're not interested in the whole collection of items; what we're interested in is just their total weight. So just by replacing the collection of items by its total weight, we actually get back our previous dynamic programming solution.