0:25

By now you know that the sum of 1 over n squared converges.

Since it is a P series with P equal to two.

But to what does it converge?

We've claimed in the past that this converges to pi squared over 6.

That is a deep result.

We can't get that easily, so let's approximate.

The true value of pi squared over 6 is, in decimal form, 1.6449 et cetera.

How many terms would we have to sum up in this series

to get within a certain amount of that true value?

Well, let's fire up the computer, and

see what happens if we add up let's say, the first 10 terms.

Then we get an answer that is, well, it's within a neighborhood.

It's not exactly really close.

So let's add up the first 20 terms and see how close we get.

Now we're doing a fair bit better.

If we take a little bit more time and effort, compute the first 100 terms,

then it seems as though we're definitely within 1% of the true answer.

And if we added up the first 1,000 terms, well now we're getting something

that is really, fairly close to pi squared over six.

But how close?
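
Those computations are easy to reproduce; here is a minimal sketch in Python (the function name is my own):

```python
import math

def partial_sum(N):
    """Sum the first N terms of the series 1/n^2."""
    return sum(1.0 / n**2 for n in range(1, N + 1))

true_value = math.pi**2 / 6  # = 1.6449...

for N in (10, 20, 100, 1000):
    print(N, true_value - partial_sum(N))
```

Summing 10 terms leaves an error near 0.095; summing 1,000 terms brings the error just under 0.001.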

1:52

Well in general, you're never going to be able to get to the truth.

The true answer involves a limit.

And that just takes a lot of work.

In general, what you can do is come up with an approximation.

Let's say, by summing up terms up to and including a sub capital N.

Now, what's left over is the error, which we'll denote E sub capital N.

You're not gonna be able to compute that error exactly,

since then you would know the truth, but you can control or bound that error.

Let's see how that works in the context of an alternating series.

Let's consider a series that satisfies the criteria of the alternating series test.

That is, the sum of -1 to the n a sub n where the a sub ns are positive,

decreasing, and limiting to 0.

Then remember how this convergence happens.

As you take the partial sums, you're always jumping over the true answer:

to the right, and then back to the left, because of the alternating nature.

Then in this case, it's easy to get an upper

bound on the error E sub N, in absolute value.

It is precisely a sub N+1, the next term in this series,

because you're always overshooting.

When you have an alternating series, this result is simple and useful.
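
As a quick sanity check (my own example, not one from the lecture), take the alternating geometric series with a sub n equal to 1 over 2 to the n, whose true sum is 2/3, and compare each partial-sum error against a sub N+1:

```python
def a(n):
    return 1.0 / 2**n  # positive, decreasing, limiting to 0

true_sum = 2.0 / 3.0  # sum over n >= 0 of (-1)^n / 2^n

for N in range(10):
    S_N = sum((-1)**n * a(n) for n in range(N + 1))
    # the alternating series bound: |E_N| <= a_(N+1)
    assert abs(true_sum - S_N) <= a(N + 1)
```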

Let's consider the approximation of one over square root of e,

with the goal of getting within one one-thousandth.

Well, if we use our familiar expansion for

e to the X where X equals negative one-half,

then we see this is really an alternating series with the a sub n term

being equal to one over n factorial times two to the n.

Now, if our goal is to add up a finite number of these terms and

get the error less than one one-thousandth,

then the alternating series bound says that we need to find a capital N so

that a sub capital N plus one is less than one one-thousandth.

Well, what is a sub capital N plus one? It is one over the quantity capital N plus

one factorial, times two to the capital N plus one.
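
Finding the smallest such capital N is a short loop; a sketch (variable names are mine):

```python
import math

def a(n):
    # terms in the expansion of e^(-1/2): a_n = 1 / (n! * 2^n)
    return 1.0 / (math.factorial(n) * 2**n)

N = 0
while a(N + 1) >= 1e-3:  # need a_(N+1) < 1/1000
    N += 1
print(N)  # -> 4

S_N = sum((-1)**n * a(n) for n in range(N + 1))
# the resulting partial sum is within a thousandth of 1/sqrt(e)
assert abs(math.exp(-0.5) - S_N) < 1e-3
```

The factorial in the denominator makes these terms shrink very quickly.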

4:59

But, let's consider what it would take to approximate

log of 2 using the alternating harmonic series.

In this case, a sub n is 1 over n.

In order to get the error less than one one thousandth, what do we need?

Well again, by the alternating series bound,

we need a sub N plus one less than one one thousandth.

That's the same thing as saying N is greater than or equal to 1000,

and that is a lot of terms

to get within the same error amount that we used for an exponential.
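
The contrast with the exponential is easy to see numerically; a quick sketch:

```python
import math

# alternating harmonic series: log 2 = sum over n >= 1 of (-1)^(n+1) / n
N = 1000
S_N = sum((-1)**(n + 1) / n for n in range(1, N + 1))

error = abs(math.log(2) - S_N)
assert error <= 1.0 / (N + 1)  # the alternating series bound
```

A thousand terms for three decimal places, versus only a handful for the exponential: slowly decaying terms mean slow convergence.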

5:43

What happens if you don't have an alternating series?

Well, you need a different error bound.

There is one associated to the integral test.

Let's say that you have continued your series a sub n to a function

a of x, and have shown convergence by means of integrating this function.

Then one can see that the tail, the E sub N term

has a natural lower bound in terms of the integral of a of x.

Specifically, if one integrates a of x as x goes from N+1

to infinity, then that is a strict lower bound for E sub N.

6:58

With this in mind, let's see what it would take to get close to

pi squared over 6 when we sum up terms of 1 over n squared.

Now we know the value of pi squared / 6, and

let's say that we wanna get within 0.001.

Well, we know that this P series converges by the integral test.

Using a continuous function, a of x equals 1 over x squared.

By the integral test bound, E sub capital N is less

than the integral from capital N to infinity of this a of x.

We've done the integral of 1 over x squared enough times so

that you'll believe me when I say that this integral comes to one over capital N.

Now, if we want that to be less than one one thousandth,

that's really saying that N has to be larger than 1,000.

And if you'll recall when we did some of our computations,

that is exactly what we saw

when we summed up the first 1,000 terms.

The integral test gives very precise bounds.
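
We can check both integral bounds at once: for a of x equal to 1 over x squared, the tail E sub N is squeezed strictly between 1/(N+1) and 1/N. A sketch:

```python
import math

true_value = math.pi**2 / 6

for N in (10, 100, 1000):
    E_N = true_value - sum(1.0 / n**2 for n in range(1, N + 1))
    # the integral of 1/x^2 from N+1 to infinity is 1/(N+1); from N, it is 1/N
    assert 1.0 / (N + 1) < E_N < 1.0 / N
```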

8:14

If you don't have an integral test and

you don't have an alternating series, what can you do?

Well, there is one last error bound that involves only Taylor expansion, but

we are going to pay for that generality in terms of complexity, for

the following result is deep and difficult to grasp.

For that reason, we'll keep it simple by looking at what happens at f of x for

x close to 0.

Assume that f is smooth, then Taylor expand f,

about X equals zero, keeping only terms up to and including order n.

8:58

Now of course, f is not equal to this Taylor polynomial,

it's just an approximation.

So there's some error, but here the error term E sub N

is a function of x and not a constant.

So what can we say about that error function?

Well the first thing that we can say is that E sub N of x is in big O

of x to the N plus 1.

That's not too surprising.

Everything else is in higher order terms.

On the other hand, this is kind of a weak result, in that with big O,

you only find out what happens up to a constant, and

in the limit as x goes to zero.

What we'd really like is a more explicit bound

that we can use to get numerical results.

Well, there is a strong form of this theorem that says

that the error is bounded in absolute value.

by some constant C times x to the N+1,

over (N+1) factorial.

Where this constant C serves as an upper bound for the N plus first derivative of f,

at all values of t between zero and x inclusive.

This is a much stronger version of the error bound, and

it tells you that it's really the n plus first term

in the Taylor expansion that is giving you control over the error.

In fact, if you wanna get really strong, we can replace the constant big C exactly by

the n plus first derivative of f at some t that is between zero and

x, and this is a very remarkable result in that you're not bounding the error,

you're saying exactly what the error equals,

as a function of x. What I'm not telling you is what

t you have to choose in order to evaluate

that n plus first derivative.

Now, I'll let you work out what this would be if you replace 0 with a.

11:21

Let's see how this bound works in an example.

Let's approximate the square root of e within 10 to the negative

10, using the familiar expansion for e to the x and

evaluating at x equals one half. Then what do we get?

We have some E sub N.

Where, by the Taylor theorem, E sub N is less than

some constant C over (N+1) factorial times x to the N plus one.

In this case, x equals one half and C is some constant

that bounds the n plus first derivative of e to the x for

all values of x between zero and one half.

Now fortunately,

derivatives of e to the x are easy to compute, that's just e to the x.

So, what is a good upper bound for e to the x?

Well, since e to the x is increasing,

then a good upper bound would be the right-hand end point,

e to the 1/2.

Well, that number is maybe not so easy to work with.

So let's just say 2 because I know that 2 is a reasonable

upper bound for e to the 1/2.

Therefore, we get that E sub N is less than 1 over the quantity (N+1) factorial times 2 to the N.

That's maybe not the best bound we can come up with, but it will get the job done.

Because if we tabulate (N+1) factorial times two to the N for

various values of N,

we see without too much effort that having N bigger than or equal to ten will work.
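
Here's a sketch of that computation, finding the smallest capital N for which the bound 1 over (N+1) factorial times 2 to the N drops below 10 to the negative 10:

```python
import math

def bound(N):
    # Taylor bound with C = 2, x = 1/2: 2 * (1/2)^(N+1) / (N+1)! = 1 / ((N+1)! * 2^N)
    return 1.0 / (math.factorial(N + 1) * 2**N)

N = 0
while bound(N) >= 1e-10:
    N += 1
print(N)  # -> 10

# check against the actual error of the degree-N Taylor polynomial for e^x at x = 1/2
S_N = sum(0.5**n / math.factorial(n) for n in range(N + 1))
assert abs(math.sqrt(math.e) - S_N) < 1e-10
```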

13:12

Well that went so well.

Let's do it again.

This time, to estimate arcsin of one tenth within ten to the negative ten.

Now, I won't go through the details of the Taylor expansion for arcsin of x.

The terms are a little bit complicated, but

not too bad if you assume that they're given.

What matters is the Taylor error bound that E sub N is less than a constant C

over (N+1) factorial times x to the N+1, where x equals one-tenth.

Now, this constant C is the critical piece of information.

It's an upper bound for the n plus first derivative

of arcsin(x) for all x between 0 and 1/10.

Now, who remembers the formula for the n plus first derivative of arcsin(x)?

Anybody?

I don't remember it either.

And this is the difficult part of using the Taylor bound.

You don't necessarily know a good bound for the N plus first derivative.

How are we going to solve this?

Well, if the Taylor theorem is not gonna work and it's not an alternating series,

and I don't think I wanna integrate this function, then what do we do?

Well, we're just going to have to think.

But, if we think, well this is not so bad.

Look at the terms in this series.

We have one tenth, and then something times one tenth cubed,

plus something times one tenth to the fifth, etc.

It seems as though every step where we go from n to n+2

we're picking up an extra one tenth squared.

Okay, so that's 1/100, but if we look at the coefficients, the 2n+1 and

the product of odds over the product of evens,

then we're picking up another factor of 10 in the denominator.

And I claim that a sub n+2, the next term in the series, is less

than the previous term a sub n divided by 1,000.

What this means really is that you're picking up three decimal

places of accuracy, with each subsequent term.

And that means that if we want to get within 10 to the -10th,

it's going to suffice to choose N bigger than or equal to 7.

So the first four terms we have represented on this

slide suffice to approximate

arcsin of one-tenth within ten to the negative ten.
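
We can confirm this numerically using the known closed form for the arcsin coefficients, (2k)! over 4 to the k times (k!) squared times (2k+1); a sketch (the helper name is mine):

```python
import math

def term(k, x=0.1):
    # k-th term of the arcsin series: (2k)! / (4^k * (k!)^2 * (2k+1)) * x^(2k+1)
    c = math.factorial(2 * k) / (4**k * math.factorial(k)**2 * (2 * k + 1))
    return c * x**(2 * k + 1)

# the first four terms carry powers x^1, x^3, x^5, x^7, i.e. degree N = 7
approx = sum(term(k) for k in range(4))
assert abs(math.asin(0.1) - approx) < 1e-10
```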

Never forget to think, even if a Taylor bound doesn't work.

In general, bounding errors is just hard.

There's no getting around it.

If you're fortunate enough to have an alternating series, then it's not so bad.

If you've got something that works with an integral test, you're great.

If not, you're either going to have to resort to the Taylor theorem or

use your head.