As I mentioned in the first few lectures,

the big O notation is really not adequate for this.

If you say big O(log N), it doesn't give you an accurate measure of the quantity.

It's an upper bound, and it's within a constant factor, and

there's no way to get a precise estimate of what the quantity is.

If we take a definition like this, say H sub N for

the harmonic numbers, that one's accurate, but it's not concise.

It's going to take time to compute the exact value that you want.

Now, that's not too bad, but still, the spirit of what we're talking

about is to try to get accurate and concise expressions like this one:

Natural log N + gamma + big O of 1 over N.

So that one, for large values of N, will give a numerical result

that's very close to the quantity that we're interested in studying.

And we'll see lots and lots of examples, but that's the basic goal.
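To make that concrete, here's a small Python sketch (my own illustration, not part of the lecture) comparing the exact harmonic number H sub N with the estimate natural log N + gamma. If the error really is big O of 1 over N, it should shrink roughly in proportion to 1/N as N grows.

```python
import math

# Euler-Mascheroni constant (hardcoded; Python's math module
# does not provide it as a named constant).
EULER_GAMMA = 0.5772156649015329

def harmonic(n):
    """Exact harmonic number H_n = 1 + 1/2 + ... + 1/n."""
    return sum(1.0 / k for k in range(1, n + 1))

def harmonic_approx(n):
    """Asymptotic estimate ln n + gamma; the error is O(1/n)."""
    return math.log(n) + EULER_GAMMA

# The error column should shrink by roughly a factor of 10
# each time N grows by a factor of 10.
for n in (10, 100, 1000, 10000):
    exact = harmonic(n)
    approx = harmonic_approx(n)
    print(f"{n:>6}  H_N = {exact:.6f}  estimate = {approx:.6f}  "
          f"error = {exact - approx:.6f}")
```

For large N the two numbers agree to several decimal places, which is exactly the "concise and accurate" property the lecture is after.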

We want concise and

accurate estimates of the quantities we're interested in studying.

Now, we won't go crazy defining what concise means.

What I mean by it is that I've got standard functions, and I've got constants that

are maybe known, and I want to be able to compute this value for large N.

It's as simple as that.

I want to write a program or use a calculator to compute the value.

And actually, with that kind of definition,

it's easy to understand the motivation of scientists in the 18th and

19th centuries, who were learning more about mathematical models of the world and

coming up with functions that describe whatever they were interested in studying.

They wanted to be able to do calculations in order to be able to compare

their hypotheses with what goes on in the natural world.

And without asymptotics, it would be hard to do so,

because without computers you definitely needed to be able to compute your answer.

And that's always a good perspective to have

in the back of our minds when we're thinking about asymptotics.

So this is just a reminder; I talked about this notation earlier on.

So the big O notation is for upper bounds.

If you say g of N = big O of f of N,

that means the absolute value of the ratio g(N)/f(N) is bounded from above as N

goes to infinity.

And we're going to use that notation for error terms,

but not for our leading terms.

Also, we use the little o notation, which says that g of N is little o of f(N)

if their ratio tends to 0 as N approaches infinity.

So that's g(N) asymptotically smaller than f(N).

And again, we use that for error terms.

In fact, often we use the so-called tilde notation,

which just says that the ratio of g(N) to f(N) approaches 1 as N approaches infinity.

That's the weakest nontrivial little-o approximation.
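These ratio definitions are easy to check numerically. Here's a short Python sketch (again my own illustration, reusing the harmonic numbers from before) showing the tilde relation H sub N ~ natural log N, where the ratio tends to 1, and a little-o relation, log N = o(N), where the ratio tends to 0.

```python
import math

def harmonic(n):
    """Exact harmonic number H_n = 1 + 1/2 + ... + 1/n."""
    return sum(1.0 / k for k in range(1, n + 1))

# Tilde notation: H_N ~ ln N means H_N / ln N approaches 1.
print("tilde: H_N / ln N")
for n in (10, 1000, 100000):
    print(f"{n:>7}  {harmonic(n) / math.log(n):.6f}")

# Little o: ln N = o(N) means (ln N) / N approaches 0.
print("little o: (ln N) / N")
for n in (10, 1000, 100000):
    print(f"{n:>7}  {math.log(n) / n:.6f}")
```

As N grows, the first column of ratios drifts toward 1 and the second toward 0, which is exactly what the two definitions demand.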