In the Design and Analysis of Algorithms, both this course and its predecessor,

we've covered a lot of famous algorithms. We've covered a lot of fundamental data

structures. We've covered lots of killer

applications. And despite the fact that I've tried to

use our time together in the optimal way, giving you the most bang for the buck,

there are fundamental topics we have not had a chance to cover. In these final few videos, I want to talk

about things we didn't talk about. I have a few goals.

The first goal is to give you a little bit of literacy and a few more techniques

so at least you've heard of things like bipartite matching and the maximum flow problem, and you're aware that there are good algorithms for those sorts of

problems. Secondly, I want to get you excited to do

a little bit more study either through further courses or on your own.

And indeed generally the topics that I'll mention in these final few videos are

covered in your more comprehensive textbooks on algorithms. Finally, I hope to convey something that I've mentioned in passing in the past: algorithms is not an ossified field.

Rather, it's a vibrant area of research. There are still lots of things about fundamental problems and models that we don't yet understand. Let's get the ball rolling with a very

fun, classical problem known as stable matching.

We're going to think of stable matching as a graph problem, and there are going to be two sets of nodes, two sets of vertices, U and V.

For historical reasons, we will call these sets the men and the women,

respectively. A non-essential assumption that we'll make for simplicity is that these two sets of nodes have equal cardinality.

Call that common size N. So for example, maybe N equals 3 and we

can call the first set of nodes A, B and C.

And the second set of nodes D, E and F. Now what's really important about the

stable matching problem is that every node has preferences; there are nodes that it prefers more than others. Specifically, each node in U has a ranked list of the nodes in V, and each node in V has a ranked list of the nodes in U.

So in this example maybe A, B, and C are all on the same page.

They all agree that D is really the best node, it's better than E, but E is

definitely better than F. That is, maybe every single node on the left-hand side has the ranked list D, followed by E, followed by F.

By contrast, maybe there are heterogeneous preferences amongst the vertices in capital V. Maybe D thinks A is better than B, which is better than C, while E prefers B to C to A.

And finally, F prefers C to A to B. So what might these nodes and these

ranked lists represent? Well, imagine for example that one set is colleges and the

other set is applicants, recent high school graduates.

Each college has a ranking of the applicants based on things like test scores, grades, and how good a fit the student is for the college.

And each student similarly has a ranking over the colleges, of which ones it prefers to go to over others.

As another example, you might think about medical residents, that is, people who just finished medical school looking for a residency, with the other side being hospitals. Again, hospitals are going to have a preference over which recent graduates they accept as residents; potential residents, of course, have preferences over which hospital they get assigned to. So given this data, two sets of nodes and these ranked lists, we're interested in computing a stable matching.
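To make this concrete, here is one way the running example from this lecture might be encoded. The dictionary-of-lists representation is just a choice made here for illustration, not anything dictated by the problem.

```python
# The example instance from the lecture: each node maps to its ranked list,
# best choice first. (This representation is an assumption for illustration.)
u_prefs = {  # nodes A, B, C all agree: D is best, then E, then F
    "A": ["D", "E", "F"],
    "B": ["D", "E", "F"],
    "C": ["D", "E", "F"],
}
v_prefs = {  # heterogeneous preferences on the other side
    "D": ["A", "B", "C"],
    "E": ["B", "C", "A"],
    "F": ["C", "A", "B"],
}
# A perfect matching can then be represented as a dict pairing each node
# of U with exactly one node of V.
matching = {"A": "D", "B": "E", "C": "F"}
```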

So what's a stable matching? Well, first of all, it better give you a perfect

matching. And a perfect matching means that each

node of the left, each node of U is matched to exactly one node of V and vice

versa. In addition to being a perfect matching,

it should also be stable in the sense that there should be no Blocking pair.

What that means is that if you look at any pair of nodes, little u from capital U and little v from capital V, and little u and little v are not matched, there had better be a good reason for them not being matched.

Specifically, little u had better strictly prefer the node it is matched to, call that v-prime, to this alternative v; or, if that's not the case, it had better be that little v strictly prefers the node it's matched to, call it u-prime, to the alternative, little u.
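This definition translates directly into a check. The sketch below assumes the dictionary representation of preferences and matchings described earlier; the function name `is_stable` and the helper names are hypothetical, chosen here for illustration.

```python
def is_stable(matching, u_prefs, v_prefs):
    """Return True if the perfect matching (a dict U -> V) has no blocking pair.

    A pair (u, v) blocks when u strictly prefers v to its assigned partner
    AND v strictly prefers u to its assigned partner.
    """
    # Invert the matching so we can look up each v's partner.
    partner_of_v = {v: u for u, v in matching.items()}
    for u, prefs_u in u_prefs.items():
        v_prime = matching[u]          # u's assigned partner
        for v in prefs_u:
            if v == v_prime:
                break                  # everyone past this point is worse for u
            # u strictly prefers v to v_prime; now check v's side.
            u_prime = partner_of_v[v]
            ranks_v = v_prefs[v]
            if ranks_v.index(u) < ranks_v.index(u_prime):
                return False           # (u, v) is a blocking pair
    return True
```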

So what's the motivation for this definition? Well think about any perfect

matching that fails this stability property.

You're really asking for trouble with that kind of matching.

Specifically, u, with its wandering eye, will spy v, and it prefers v over the node v-prime to which it was assigned. Similarly, v is going to return that gaze: v prefers u, by the assumption that this stability property fails, over the node u-prime to which it was matched. So u and v are going to have an incentive to essentially elope and disrupt this matching.

Imagine, for example, that u is a student and v is a college, and you have a matching that fails the stability property. Then this pair, this student and this college, has an incentive to do a sort of back-room deal after the fact. The student u might want to withdraw its commitment from its assigned college, v-prime, to try and get into v instead. And the college v actually wants to withdraw its offer from its student, u-prime, and give it to u instead.

So that's an unstable matching. But as long as there's no blocking pair of this form, and for every pair which is not matched together there's a good reason for it, then we deem it a stable matching.

Let me tell you about an extremely elegant and justly famous algorithm for

computing a stable matching, the Gale-Shapley Proposal Algorithm.

Lloyd Shapley was one of the recipients of the 2012 Nobel Prize in economics, in

large part for this algorithm. Had David Gale still been alive, he

surely would have been a co-recipient of that Nobel Prize as well.
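As a preview of the proposal algorithm, here is a short sketch of the classic proposal dynamic as it's usually described (names like `gale_shapley` are my own, and the dictionary representation of preferences is an assumption): unmatched nodes of U propose down their ranked lists, and each node of V tentatively holds onto the best proposal it has received so far.

```python
from collections import deque

def gale_shapley(u_prefs, v_prefs):
    """Proposal algorithm sketch: nodes of U propose in order of preference;
    each node of V tentatively keeps its best proposal so far.
    Returns a matching as a dict U -> V."""
    # Precompute each v's ranking of the u's for O(1) comparisons.
    rank = {v: {u: i for i, u in enumerate(prefs)} for v, prefs in v_prefs.items()}
    next_proposal = {u: 0 for u in u_prefs}   # index of u's next choice to try
    current = {}                              # v -> tentatively matched u
    free = deque(u_prefs)                     # all of U starts unmatched
    while free:
        u = free.popleft()
        v = u_prefs[u][next_proposal[u]]      # u's best not-yet-tried option
        next_proposal[u] += 1
        if v not in current:
            current[v] = u                    # v accepts its first proposal
        elif rank[v][u] < rank[v][current[v]]:
            free.append(current[v])           # v trades up; old partner is free again
            current[v] = u
        else:
            free.append(u)                    # v rejects; u will try its next choice
    return {u: v for v, u in current.items()}
```

A classical fact about this dynamic is that it always terminates with a stable matching when both sides have complete ranked lists of equal size, as in the setup above.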