0:03

Now, if we put the last two sections together, it gives us a way to start thinking about how to classify problems according to their difficulty. Let's look at how that works. What we're always looking for is a problem where the algorithm matches the lower bound, and one way to get there is to use reduction. To prove that two problems X and Y have the same complexity, or the same resource requirements, first we show that X linear-time reduces to Y, and then we show that Y linear-time reduces to X. That way, even if we don't know what the complexity is at all, we know that X and Y have the same computational requirements. This is what we just did for sorting and convex hull. In that case we happen to know that both take time proportional to N log N, but reducing each one to the other shows that they're equivalent with respect to computational requirements. In one direction, reducing sorting to convex hull gave us a lower bound; in the other, reducing convex hull to sorting gave us a useful algorithm. Together, they help classify both problems.

Now, there's a bit of a caveat to this. It's a little bit of an idealized situation, but it's something that can lead to trouble in the real world. We have these two problems, and we have the two propositions that they linear-time reduce to one another. In a big software engineering effort, there are a lot of people making decisions. Well, we found out the two problems have the same complexity. Maybe some system designer has a big project with a lot of pieces, so they need both sort and convex hull. One programmer, Alice, is charged with the job of implementing sort, and understands that, well, you can do that using convex hull: "I learned this cool trick." The other, Bob, knows the Graham scan and says, "Okay, I'm going to implement convex hull using sort." In a big system, that's going to lead to a problem: an infinite reduction loop, which is certainly going to cause trouble. Whose fault is it? Well, that would be the boss's; Alice and Bob each just did what they were supposed to. Somebody could argue that Alice could have used a library routine for the sort, but you get the point: in a complex situation, this definitely could have come up. Just to show the power of this kind of technique, and how it relates to research that's still ongoing in computer science, let's look at some simple examples from arithmetic. Here's the grade-school integer multiplication.
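To make the next bit concrete, here is a minimal Python sketch of the grade-school method on explicit bit lists (the bit-list representation and all the names here are my own illustration, not the lecture's code):

```python
# Grade-school multiplication on N-bit integers, one partial product per
# bit of the second operand. Numbers are lists of bits, least significant
# bit first; this is a teaching sketch, not a production routine.

def grade_school_multiply(a, b):
    """Multiply two bit lists (LSB first) and return the product's bits."""
    n = len(a) + len(b)                  # the product fits in n bits
    result = [0] * n
    for i, bit in enumerate(b):          # N rows...
        if bit:
            carry = 0
            for j, a_bit in enumerate(a):  # ...each costing ~N bit operations
                s = result[i + j] + a_bit + carry
                result[i + j] = s & 1
                carry = s >> 1
            k = i + len(a)
            while carry:                 # propagate the final carry
                s = result[k] + carry
                result[k] = s & 1
                carry = s >> 1
                k += 1
    return result

def to_bits(x, width):
    return [(x >> i) & 1 for i in range(width)]

def from_bits(bits):
    return sum(bit << i for i, bit in enumerate(bits))
```

The two nested loops do on the order of N² bit operations for N-bit inputs, which is exactly the quadratic cost discussed next.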

Let's do it with bits. I have two N-bit integers and I want to compute the product, and this is the way you learned to do it in grade school: for every bit in the bottom, you multiply it by the top, and then you add all the partial products together. That takes time proportional to N², because you have N bits and N rows. Nowadays, people do integer multiplication on huge integers, because mathematical properties of integers play an important role in cryptography and other applications, so people are interested in algorithms that are faster than quadratic for something like integer multiplication. Now, one thing you can do with reduction: people have figured out that you can take integer division, squaring, square root, and all sorts of other integer operations, and reduce them to integer multiplication, and vice versa. So you can show that all of these things have the same complexity, even though we don't know what that complexity is, just by simple reductions from one to the other. You can imagine how to reduce divide to multiply, and so forth; these reductions have been worked out in all different directions. So now they're all the same difficulty as the brute-force algorithm, and how hard is the brute-force algorithm? Well, people have studied that for a long time.

5:24

One of the earliest divide-and-conquer algorithms, due to Karatsuba and Ofman, showed that integer multiplication can be done in time proportional to N^1.585: divide the integers in half, then find a clever way to combine the sub-products so as to reduce the exponent. People have kept working on this through the decades, and there was a big breakthrough in 2007 that gets much closer to N log N. Still, there's no known lower bound for this problem, so it could conceivably be linear, and people are still working on it. These different algorithms get more complicated, and they may only be useful for very large N, or for different ranges of N. In practice, the GNU Multiple Precision library that's used in many symbolic-mathematics packages chooses among five different algorithms for integer multiplication, depending on the size of the operands. And again, in applications such as cryptography, N can be truly huge, and people want to do this computation. So we don't know what the difficulty of integer multiplication is; we just know that all the integer operations are tied to this one problem.

It's a similar story for matrix multiplication, another famous problem that people are still trying to understand. Again, the secondary-school algorithm: to compute the entry in row i and column j, you take the dot product of the i-th row of one argument and the j-th column of the other, and that dot product fills that entry. You do that for every entry, so it takes time proportional to N³, because you do N multiplications for each of the N² entries in the result matrix. And again, there are all different kinds of matrix operations that can be proven equivalent in difficulty to matrix multiplication through reduction, so the complexity of matrix multiplication is also the complexity of all these other things, like solving systems of linear equations, computing the determinant, and so on. But how difficult is it to multiply two matrices? Reduction gives us a big class of problems, which makes it even more interesting to know the difficulty of this one problem, and research on that difficulty has been ongoing. The running time is N³ for the brute-force algorithm, and who knows when that was discovered. Once computers got involved, Strassen's famous divide-and-conquer algorithm, which breaks the problem down the way the fast integer-multiplication algorithms do, got the exponent down to 2.808. People have been working through the years on successive improvements; the last one went from 2.3737 to 2.3727, which doesn't seem like much, but maybe one of these research results will be a breakthrough that gives us an efficient new algorithm for all of these problems involving matrices. So again, it's very useful to have classified all these problems, because it increases the leverage of the research even more.
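The cubic secondary-school matrix multiplication just described can be sketched like this (a minimal illustration on plain lists of lists; the names are my own):

```python
# Schoolbook matrix multiplication: entry (i, j) of the product is the
# dot product of row i of A with column j of B.

def matrix_multiply(A, B):
    n, m, p = len(A), len(B), len(B[0])
    assert all(len(row) == m for row in A), "inner dimensions must agree"
    C = [[0] * p for _ in range(n)]
    for i in range(n):            # N choices of row...
        for j in range(p):        # ...times N choices of column...
            for k in range(m):    # ...times N multiply-adds per entry: ~N^3
                C[i][j] += A[i][k] * B[k][j]
    return C
```

The three nested loops make the N³ cost visible: N multiply-adds for each of the N² output entries.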

So, when we started this lecture, we started with the idea that it would be good to classify problems. It's tough to actually classify them, because there's new research involved even for things as simple as integer or matrix multiplication, but at least what we can do is define complexity classes. Even though we don't know what the complexity is, we know there are a lot of important problems that have that same complexity. There's a really important class that we're going to talk about in the last lecture, called the NP-complete problems, and all kinds of important problems fall into it. As time wears on, researchers have found ways to put many, many problems into equivalence classes, so at least we know that those problems have the same difficulty, even if we don't know what it is. There's actually what's called the complexity zoo: a really large number of complexity classes. A lot of these are definitely theoretical devices that are only important for filling in our understanding of some important part of the theory, but many of them, like matrix multiplication and integer multiplication, are very important to practical algorithms. So there's certainly a huge amount of research still going on into trying to understand the computational difficulty of problems.
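As one concrete illustration of that research, here is a sketch of the Karatsuba divide-and-conquer trick mentioned earlier, written on ordinary Python integers (a simplified sketch with my own names and cutoff; real library implementations are far more refined):

```python
# Karatsuba multiplication: split each operand in half and get by with
# three recursive multiplications instead of four, giving an exponent of
# log2(3) ~ 1.585 instead of 2.

def karatsuba(x, y):
    if x < 16 or y < 16:                        # small cases: multiply directly
        return x * y
    half = max(x.bit_length(), y.bit_length()) // 2
    x_hi, x_lo = x >> half, x & ((1 << half) - 1)   # x = x_hi * 2^half + x_lo
    y_hi, y_lo = y >> half, y & ((1 << half) - 1)
    a = karatsuba(x_hi, y_hi)                   # high * high
    b = karatsuba(x_lo, y_lo)                   # low * low
    c = karatsuba(x_hi + x_lo, y_hi + y_lo)     # the clever third product:
    # (x_hi + x_lo)(y_hi + y_lo) - a - b = x_hi*y_lo + x_lo*y_hi
    return (a << (2 * half)) + ((c - a - b) << half) + b
```

The whole saving is in that third recursive call: the cross terms come out of one multiplication plus some additions and shifts, which is what reduces the exponent below 2.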

So, in summary, we use reductions to design algorithms, to establish lower bounds, and to help us classify problems. In practice, we've been using them already.

11:25

We designed algorithms that made use of priority queues in effective ways: that's a reduction, and so are lots of algorithms that use sorting. Really, that's the idea of reusable software modules: we identify fundamental problems and develop implementations that can be used by clients, and every client using an implementation is doing a reduction. The other thing is that reductions help us determine the difficulty of our problem and guide our search toward an efficient solution for it. In particular, when we get to extremely difficult problems, we'll know whether we should look for an exact algorithm or should settle for something that doesn't quite solve the problem; we'll talk a lot more about that in the last lecture. And in all of these studies, in the complexity zoo and in any algorithm that we develop for a complicated problem, what we're going to want is reduction, to link the new problem to problems that we know how to solve.
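The priority-queue point above can be made concrete: sorting reduces to a priority queue, since a client can sort by inserting everything and then repeatedly removing the minimum. A minimal sketch using Python's heapq module as the priority-queue implementation (the function name is my own):

```python
# Sorting reduced to a priority queue: N inserts followed by N delete-mins.
import heapq

def pq_sort(items):
    pq = []
    for item in items:
        heapq.heappush(pq, item)                        # N inserts...
    return [heapq.heappop(pq) for _ in range(len(pq))]  # ...N delete-mins
```

Because each heap operation takes logarithmic time, the client gets an N log N sort essentially for free, which is the sense in which every client of a good implementation is doing a reduction.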
