So just by following our nose in this way, we get exactly what we want.

We get, on the left-hand side, what we really care about, namely,

the total value of the solution output by our heuristic.

And this is the total value with the original values,

mind you, not with the transformed ones, with the ones we really care about.

We get a lower bound on that total value, that's exactly what we wanted.

In terms of the best case scenario, the total value of the optimal solution,

and the only blemish is that we pick up an error of at most m for

each item i in the optimal solution s star.

So let's just bound this crudely.

The optimal solution s star can only contain n items at most.

So we pick up a minus m, at worst, for each of these at most n items.

So if we subtract an error term of m times n that

gives us a lower bound on the value of our solution s.
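As a sanity check on this error bound, here is a tiny sketch in Python. The item values and the choice of m are made up purely for illustration; the point is that rounding each value down to a multiple of m loses less than m per item, so the total loss over at most n items is at most m times n.

```python
# Hypothetical example values; m is the scaling parameter from the lecture.
values = [117, 53, 280, 94]   # original item values
m = 10                        # how aggressively we scale (round) the values

# Round each value down to the nearest multiple of m, as the heuristic does.
rounded = [(v // m) * m for v in values]

# The per-item error is what rounding threw away: always in [0, m).
errors = [v - r for v, r in zip(values, rounded)]
assert all(0 <= e < m for e in errors)

# Summing over the (at most) n items, the total error is at most m * n.
n = len(values)
assert sum(errors) <= m * n
```

Here the errors are 7, 3, 0, and 4, for a total of 14, comfortably below m times n = 40; the bound holds no matter which subset of items the optimal solution picks.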

So now that the dust has cleared, we see that we have a performance guarantee for

our solution s in terms of the value of the optimal solution s star; basically,

we're off by this error term of m times n.

Remember, m is the parameter that governs how aggressively we scale the item values,

and it controls the running time/accuracy trade-off.

We were expecting this all along; we argued that the bigger you take m,

the more information you lose,

so the less accuracy you'd expect. And indeed, the amount by which we're off

from the optimal solution grows linearly in this parameter m.

The question we're striving to answer, remember,

is how big an m we can get away with and

still be sure that our solution is within a 1 minus epsilon factor of the optimal one.

To achieve this, we just need to make sure that the worst-case error in our

solution, m times n, is never bigger than an epsilon fraction

of the optimal solution's value; that is, we just set m small enough that

m times n is at most epsilon times the optimal value.
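This rule can be sketched in a couple of lines of Python. One standard way to make it usable (an assumption here, since the lecture hasn't pinned down the lower bound yet) is to note that if every item fits in the knapsack on its own, the optimal value is at least the largest single item value, so aiming for m times n at most epsilon times that value suffices.

```python
def choose_m(values, epsilon):
    """Pick m so that the total rounding error m * n is at most
    epsilon times (a lower bound on) the optimal value.

    Assumes every item fits on its own, so OPT >= max(values).
    """
    n = len(values)
    v_max = max(values)           # lower bound on the optimal value
    return epsilon * v_max / n    # then m * n = epsilon * v_max <= epsilon * OPT

# Hypothetical usage with made-up values and epsilon = 0.1:
m = choose_m([117, 53, 280, 94], 0.1)
```

Any m at or below this threshold keeps the heuristic within a 1 minus epsilon factor of optimal; larger m speeds up the dynamic program but weakens the guarantee.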