We conclude this lesson with a few final remarks.

First of all, let us discuss which approach is better: the recursive one or the iterative one. In many cases, the iterative approach is slightly faster, because it uses simpler data structures, for example just a plain array instead of a dictionary, and because there is no recursion overhead. Instead of making a recursive call and pushing information onto the stack, we just fill in a simple table, a one-dimensional or two-dimensional array.

At the same time, there are cases when the recursive approach pays off, and the knapsack problem gives a nice illustration of such a situation. Imagine a case where the total capacity of the knapsack is divisible by 100 and the weights of all the available items are also divisible by 100. This essentially means that we are only interested in subproblems where the capacity of the knapsack is divisible by 100, because we can be sure that in every relevant subproblem the remaining capacity is a multiple of 100. What will happen is that the recursive approach will only consider subproblems of this kind, because it always makes recursive calls to smaller subproblems, and in every such call the current capacity of the knapsack is divisible by 100. The iterative approach, on the other hand, will tediously go through all possible values starting from 0 and ending with W; it ignores, in a sense, the fact that it only needs to solve subproblems where the capacity is divisible by 100. So once again, this is a nice example where memoization pays off: in the case of the knapsack problem, for some datasets, the recursive approach might be much faster than the iterative one.

Okay, so the next remark is about the running time of the resulting algorithm.
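Returning for a moment to the divisibility example: it can be checked directly with a small sketch. Here is a minimal top-down knapsack with memoization (the specific weights and values are made up for illustration); it records which capacities the recursion actually visits, and with everything divisible by 100 they are all multiples of 100, whereas a bottom-up table would fill in every capacity from 0 to W.

```python
def knapsack_memoized(capacity, weights, values):
    """Top-down 0/1 knapsack with memoization.

    Only subproblems reachable from `capacity` are ever solved, so if
    the capacity and every weight are divisible by 100, the recursion
    never looks at any other capacity."""
    memo = {}

    def best(i, w):
        # Maximum value achievable with items i..n-1 and capacity w.
        if i == len(weights):
            return 0
        if (i, w) not in memo:
            res = best(i + 1, w)               # skip item i
            if weights[i] <= w:                # take item i, if it fits
                res = max(res, values[i] + best(i + 1, w - weights[i]))
            memo[(i, w)] = res
        return memo[(i, w)]

    result = best(0, capacity)
    visited_capacities = {w for (_, w) in memo}
    return result, visited_capacities


# Capacity 1000 and all weights divisible by 100: the recursion only
# ever sees capacities that are multiples of 100, while a bottom-up
# table would fill in all 1001 entries from 0 to 1000.
value, caps = knapsack_memoized(1000, [300, 200, 400], [30, 25, 50])
print(value)          # 105: all three items fit
print(sorted(caps))   # only multiples of 100
```

On this toy input, the memoized version touches only a handful of distinct capacities instead of all 1001, which is exactly the effect described above.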
As you remember, what we designed is an algorithm whose running time grows proportionally to n times W. This looks like a polynomial running time, since it is just the product of two parameters. In fact, however, this is not considered to be a polynomial running time, and here is why. When we say polynomial, we mean polynomial with respect to the input size, and in our case the input size is proportional not to n plus W but to n plus log W: to write down the number W, you only need about log W bits. So with respect to log W, the running time of our algorithm is expressed as n times 2 to the power of log W, simply because W is equal to 2 to the power of log W.

Put otherwise, if you append just a single digit to W, that is, if instead of considering a knapsack of total capacity 17 you consider a knapsack of total capacity 170, your algorithm becomes 10 times slower. So by appending a single 0 to the parameter W, you slow down your algorithm by a factor of 10. As another illustration, consider a number W consisting of 20 decimal digits. This is actually a short input, right? It is not gigabytes of data or anything like that; it is just a number with 20 decimal digits. Yet if you run the corresponding algorithm on your laptop, it will need to perform roughly 10 to the 20 basic operations, which essentially means it will never stop on your laptop.

And finally, let me also mention that designing a truly polynomial algorithm for this problem is actually the essence of the famous P versus NP open problem. In a sense, this open problem asks about the class of all problems whose solutions can be found efficiently.
By efficiently, I mean in polynomial time. The question is whether this class, denoted P, coincides with the class NP of all problems whose solutions can be verified in polynomial time. This is probably the most important open problem in computer science, with a bounty of $1 million from the Clay Mathematics Institute.