0:00

Okay, so now that we have these asymptotic notations, big O, big Omega, and big Theta, we can use these notations and their definitions to put upper bounds, for example, on the running times of algorithms, and use those bounds to get a good idea of how an algorithm is going to behave.

When we use an upper bound, we are saying that the running time is not going to get worse than this. When we use a lower bound, we are basically saying, this is the best the algorithm can do.

And the algorithm does not necessarily have to achieve these running times. So if I put a lower bound of 2^n on an algorithm, it doesn't mean that there are instances on which it actually reaches 2^n. It might be that on every instance of the input it does much more than 2^n. That still doesn't violate the fact that 2^n is a lower bound.

Now, of course, one of the things we have to make certain everyone understands is this: if all I ask for is a lower bound on the running time of an algorithm, one can of course answer 0, since every algorithm is going to take more than 0 seconds, right? Or perform more than zero operations. But that is not an interesting lower bound. So even though we say we want lower bounds and upper bounds, we are often implicitly asking for the tightest bounds possible, okay? For the tightest running time possible.

Now let's go back to the one we are going to be using most of the time in this course, which is the big O notation. Just a reminder: we say that f(n) is O(g(n)) if there exist two constants, C > 0 and n0 >= 0, such that f(n) <= C * g(n) for every n >= n0.

Now let's use this definition and try to establish, for example, an upper bound on a function. I will take as an example f(n) = 1000 + 17n + 2n^2.
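To see this concretely, here is a quick numeric sketch (my own illustration, not part of the lecture): the ratio f(n)/n^2 settles toward the leading coefficient 2 as n grows, which is exactly why the lower-order terms stop mattering.

```python
# The lecture's example function: f(n) = 1000 + 17n + 2n^2.
def f(n):
    return 1000 + 17 * n + 2 * n ** 2

# As n grows, f(n) / n^2 approaches the leading coefficient 2,
# so the 1000 and the 17n terms become negligible.
for n in (1, 10, 100, 1000, 10000):
    print(n, f(n) / n ** 2)
```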

Â 2:34

And I want to show, as I said before when we were talking about the growth of functions, that we can ignore the lower-order terms and the multiplicative constant factors. I can ignore the 2, the 17, the 1000, and so on, and just say that this is going to grow on the order of n^2. Where does that come from? Or am I just doing some sort of trick? If we actually go and understand the definition of big O, we will see that in fact f(n) is O(g(n)), where g(n) is n^2. How do I get that f(n) is O(g(n)) in this case?

Â 3:13

We have to go back to the definition. The definition says that to claim that f(n) is O(g(n)), I have to find two constants, C > 0 and n0 >= 0, such that f(n) is bounded from above by C * n^2 for every n >= n0. Now, we can guess these numbers, but we can also go about it a bit more systematically. Remember that I'm interested in showing that f(n) is smaller than or equal to something. All right? So the way I would do it is this: f(n) is 1000 + 17n + 2n^2.

Â 4:20

I can say that 1000 is smaller than or equal to 1000n^2. If you take a number and multiply it by n^2, the result is at least as large, as long as n is at least 1, right? 1000 is smaller than or equal to 1000n^2, except if n equals 0. Right? If n equals 0, 1000 is not smaller than or equal to 1000 times n^2. If n is 1, 1000 equals 1000n^2. If n is 2, this is where the right side starts growing faster: 1000 is much smaller than 1000 times 4, and so on. So for this to hold, I have to start putting conditions: n has to be greater than or equal to 1. Okay? To make this jump, that 1000 <= 1000n^2, I have to have the assumption that n >= 1.

Â 5:18

So f(n) is smaller than or equal to 1000n^2, plus 17n; and 17n in turn is smaller than or equal to 17n^2. Okay? Again, 17n <= 17n^2 because n^2 is at least as large as n under our assumption, okay? In this case, even if n is 0, 17n is smaller than or equal to 17 times n^2. Okay? So this step doesn't force us to modify the condition; it still holds. But I cannot lower the condition back to 0, because that would violate the previous step. Plus, 2n^2 is smaller than or equal to 2n^2, okay?
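The three term-by-term inequalities can be checked mechanically; this small script (my own check, not from the lecture) confirms each step of the chain for a range of n >= 1, and shows the first step failing at n = 0:

```python
# Term-by-term steps used to bound f(n) = 1000 + 17n + 2n^2 from above,
# each valid for n >= 1:
#   1000    <= 1000 * n^2
#   17 * n  <= 17 * n^2
#   2 * n^2 <= 2 * n^2
for n in range(1, 1001):
    assert 1000 <= 1000 * n ** 2
    assert 17 * n <= 17 * n ** 2
    assert 2 * n ** 2 <= 2 * n ** 2

# At n = 0 the first inequality breaks, which is why n >= 1 is required.
assert not (1000 <= 1000 * 0 ** 2)
```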

Now, you might be asking: but 1000 is also smaller than or equal to 1000n. Or you might be asking: 1000 is also smaller than or equal to 1000n^5, which is true under this condition. Why did I choose n^2? Again, keep in mind that I'm trying to establish that this is O(n^2). So I would like to see n^2 in every term, because once I see n^2 everywhere, the expression becomes easy to manipulate: now I can add the coefficients and say 1000 + 17 + 2 is 1019, giving 1019n^2. Right? So now I have a function that is guaranteed to be smaller than or equal to 1019n^2 for every n greater than or equal to 1.

Â 6:53

Again, I can basically say that 1000 + 17n + 2n^2 is smaller than or equal to 1019n^2 for every n greater than or equal to 1, okay? Now, when I look at something like this next to the definition, you can start seeing the correspondence: the left-hand side is f(n), and the right-hand side is C times g(n).

Â 7:52

The last thing I need to do is find C and n0, but they actually jump out at you once you look at it like this: call 1019 C, and call 1 n0. We have to make sure that, if I call them C and n0, they satisfy the properties of the definition. The definition says that C has to be greater than 0, and 1019 is greater than 0. For example, if in doing this exercise I had ended up with minus 3 here, I couldn't call minus 3 C, because C has to be greater than 0.

Â 8:33

I call 1 n0. n0 has to be greater than or equal to 0, and 1 is greater than or equal to 0. So I am done. Okay? I am done. I have shown that I can find a C and an n0 such that this function is smaller than or equal to C times n^2 for every n greater than or equal to n0: C is 1019, and n0 is 1.
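As a finite sanity check of these witnesses (a spot check over sample values, not a proof), one can verify C = 1019 and n0 = 1 directly:

```python
# The lecture's example function: f(n) = 1000 + 17n + 2n^2.
def f(n):
    return 1000 + 17 * n + 2 * n ** 2

C, n0 = 1019, 1

# The definition requires f(n) <= C * n^2 for every n >= n0;
# we spot-check a finite range of values.
assert all(f(n) <= C * n ** 2 for n in range(n0, 100000))

# Below n0 the bound fails: f(0) = 1000 but C * 0^2 = 0.
assert not f(0) <= C * 0 ** 2
```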

So this is what I meant in the beginning when I said that we will be doing rigorous mathematical reasoning rather than formal proofs. I'm not saying that this is a full proof that the claim is true; this is not the technique for proving formally that f(n) is O(g(n)), and it might not work in many cases. But it's easier for me to reason about why f(n) is smaller than C times n^2 by using this kind of technique, okay? So here I just exhibited a specific value of C and a specific value of n0 such that I can bound f(n) from above by C times n^2 for every value of n greater than or equal to 1. And this is how we would basically do this kind of exercise.

Now, if this were Omega, for example if I said f(n) is Omega(n^2), we could do a similar exercise. But then we shouldn't be focusing on showing it is smaller than or equal to something else; we should be focusing on showing it is greater than or equal to something else, okay? And the last thing I want to say is this: if you want to show a function f(n) is Theta(g(n)), and you have shown that f(n) is O(g(n)) and that f(n) is Omega(g(n)), then in fact you are done.

Because it's easy to see that if f(n) is big O of g(n), and f(n) is big Omega of g(n),

10:44

you can conclude from these two that f(n) is Theta(g(n)), okay?

In fact, you can also go the other way. If you prove that f(n) is Theta(g(n)), you have in fact proved that f(n) is O(g(n)) and, at the same time, that f(n) is Omega(g(n)).
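For the lecture's example this works out concretely: 2n^2 <= f(n) for every n >= 1 gives the Omega side with witness c1 = 2, and the earlier upper bound gives the O side with witness c2 = 1019, so f(n) is Theta(n^2). A small sketch (the constants here are my choice, not stated in the lecture):

```python
# The lecture's example function: f(n) = 1000 + 17n + 2n^2.
def f(n):
    return 1000 + 17 * n + 2 * n ** 2

# Omega side: c1 * n^2 <= f(n)      for all n >= n0 (c1 = 2).
# O side:     f(n)     <= c2 * n^2  for all n >= n0 (c2 = 1019).
# Holding simultaneously, these are the definition of f(n) = Theta(n^2).
c1, c2, n0 = 2, 1019, 1
assert all(c1 * n ** 2 <= f(n) <= c2 * n ** 2 for n in range(n0, 10000))
```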