Okay, so welcome back. This is Discrete Optimization: Mixed

Integer programming part 4, and this is going to be quite exciting.

So, what we are going to see today is some really beautiful mathematics with very

interesting computational results. Okay.

So we are going to talk about polyhedral cuts.

Okay. Very different from the cuts that you

have seen before; we'll talk about warehouse location and node packing.

Okay. So this is, this is the motivation; so we

are going to talk about the convex hull. Right?

So this is the linear relaxation that we saw last time of a very

simple mixed integer program. Okay?

Now, in practice what we are really interested in are these integer points

that you see. Okay?

And one of the things that we are going to look at is the convex hull of these

particular points, that you see on the screen at this point, okay?

So this is 2D, okay? This is a larger 2D example.

Once again, this is the linear relaxation of a MIP.

These are all the integer solutions that you see there.

That's the real thing we are interested in, right?

And this is the convex hull, the convex envelope.

That means you know, you take all these points, and this is essentially the

smallest convex set which includes all these points.

Okay? So this is 3D, so this is a 3D linear

relaxation. You see all the integer points inside,

okay? Once again, you could take the convex

hull, that's you know, this polyhedron that you see there.

And that's the convex hull of all the integer points.

Okay? Now, why am I talking about this?

Okay? Assume that you have this beautiful

convex hull. You have this rigid polyhedron

there. Okay?

What could you do? Okay?

Think about it, what could you do? Well, one of the things you can do is

say, [SOUND] I'm going to use an LP here, optimize my objective function.

And by now you know that if you optimize this thing, you end up at a vertex,

right? What is that vertex?

It's going to be an integer solution, and that's going to be the optimal solution

to your MIP. So if I had this convex hull, okay, I

could use linear programming and I would get the optimal solution immediately.

Isn't that beautiful, right? So, in a sense, this is what we are

going to talk about. We're going to talk about these

polyhedral cuts, which are essentially these facets of the convex hull, okay?

If we get all of them, we would just use linear programming and we would be very

happy, okay? Now, these cuts, okay.

They're valid, okay. They don't remove any solution.

I took the convex envelope of all the integer points.

I'm not removing any of them. They are all included, right?

And then they are also as strong as possible, right?

And so, if I have all of them, I could just use linear programming and I would

find the optimal solution you know, using the linear program.

Okay? So, so, why are these you know,

polyhedral cuts very interesting? Because they are exploiting the problem

structure. In that sense, they are completely

orthogonal to the syntactic cuts that we saw in the last lecture, which were based

on the tableau. Okay?

So, basically what we're going to do here is look at specific constraints or

specific properties of the problem, and then generate these polyhedral cuts.

Okay? Now they share of course the same spirit

as the syntactic cuts. Right?

So they are valid, they don't remove any solutions.

Okay? They're always valid, they're just

strengthening the formulation of the

They are also cutting; we want to generate those that are cutting the

optimal solution of the linear relaxation.

Right? So such that you know, we actually you

know, get closer and closer to the integer solution.

Okay? And of course we don't need to generate

all of them, we can generate enough such that we can solve the problem

effectively. Okay?

Now, in a particular context, an application may use many of

them. It's like you know, global constraints in

constraint programming. You may look at very different classes of

cuts and generate them. Okay?

They will exploit different sub-structures of the application.

Okay, so what is a facet of this convex hull?

That is a critical thing that we're left to actually define.

And so, what you're going to see is that the mathematics behind it is easy.

Of course, you know, the whole creativity is going to be finding these

facets and then proving that they are facets. Okay?

It's a simple technique. So, to find a facet, what you need to do

is essentially find n affinely independent solutions, so points

which satisfy the facet at equality. They have to be on that facet, these

points. Okay?

And so affine independence is a simple thing to show.

What you need to show is that, if you have a set of n points, they're going to be

affinely independent if, when you extend each of them with one more coordinate and put a 1

in that coordinate for every one of them, then you can show that they are

linearly independent. And of course linear independence, you

know what this is. It's simply saying that, if you

have a combination of all these points using alpha 1 to alpha n, okay, for

these n points, and if that combination is equal to 0, then all these

alphas have to be 0. So, in a sense, you know, showing that

an inequality is a facet is

basically going to be trying to find these points and showing that they are

affinely independent. Okay?
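This test can be sketched in a few lines; this is a hypothetical helper of my own, not code from the course: append a coordinate equal to 1 to each point and check that the extended vectors are linearly independent, i.e., that the matrix has full row rank.

```python
import numpy as np

def affinely_independent(points):
    """Check affine independence: append a 1-coordinate to each
    point and test linear independence via the matrix rank."""
    P = np.asarray(points, dtype=float)
    extended = np.hstack([P, np.ones((P.shape[0], 1))])
    return np.linalg.matrix_rank(extended) == P.shape[0]

# Three points on a line in 2D are NOT affinely independent...
print(affinely_independent([(0, 0), (1, 1), (2, 2)]))  # False
# ...but the corners of a triangle are.
print(affinely_independent([(0, 0), (1, 0), (0, 1)]))  # True
```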

So, let's look at the particular example that we saw before.

Okay, so this is warehouse location. Remember, we had to open or close these

warehouses, okay? And once they are open, they can serve

customers. The goal was to minimize the cost of

opening this warehouse and then serving the customers.

Okay? So we had this beautiful MIP model.

We have a very strong relaxation here, which was basically using this very

simple constraint, right? Simple inequalities, two variables,

stating that you know, you can only serve a customer c with warehouse w, if

warehouse W is open. You know, warehouse W is open means xw is

equal to 1. And then obviously,

what you need to have is, you know, ywc smaller

or equal to the value of xw. That's what you see over here.

Okay? So in a sense, what you can wonder is whether

these inequalities are facets of the convex

Okay? So these simple inequalities are the

facets. And so, what you need to do is you need

to look at these constraints, okay, at equalities, instead of in, an ine,

inequalities, and find n points okay, such that you know, they are, find n

points which are finely independent. Okay?

So I'm going to cheat a little bit here, I'm going to assume that we have only one

warehouse. Okay?

And then you can generate these points here which satisfy these constraints at

equality. Okay?

And so, what we are looking here is we're looking at warehouse w, and we are

looking at customer one. Right?

And we are looking at this very simple constraint, and what you see at the end

points is that we're going to assume that you know, xw is zero.

Okay? If xw is zero, then

yw1 has to be zero, and that's the first point that you see there.

And all the other customers, we put them to zero as well.

Okay, so we satisfy that constraint at equality.

Now, when this guy is 1, okay. So when xw is 1, okay.

So then, to satisfy the constraint at equality, we have to serve the

customers with that particular warehouse.

And that's the second point that you see. And then the other points that you see

will simply be saying, ooh, but I can take these other customers, and say

that they're using that warehouse as well. It doesn't impact that

constraint, but we can find different points, knowing they're going to be

affinely independent. Of course you can generalize that if you

have many warehouses by you know, finding the right values for these different

warehouses. Okay?

So, putting a one for every one of them in every one of these points.

Okay? Well, to get essentially n points.

Okay? So, in a sense, this is the kind of ways

you prove that something is a facet. You look for these points.

Okay? And you are trying to show that they

satisfy the constraint at equality and they are affinely independent

of each other. Okay?
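To make this concrete, here is a small sketch of my own, following the lecture's recipe with a single warehouse w and m hypothetical customers: the variables are (xw, yw1, ..., ywm), we build m + 1 points lying on the hyperplane yw1 = xw, and then verify affine independence with the rank test.

```python
import numpy as np

def facet_points(m):
    """Points on the face yw1 = xw for one warehouse and m customers.
    Variable order: (xw, yw1, ..., ywm); we need m + 1 of them."""
    pts = [np.zeros(m + 1)]              # xw = 0 forces yw1 = 0
    base = np.zeros(m + 1)
    base[0] = base[1] = 1                # xw = 1, so yw1 = 1
    pts.append(base.copy())
    for c in range(2, m + 1):            # serve one extra customer each time
        p = base.copy()
        p[c] = 1
        pts.append(p)
    return np.array(pts)

P = facet_points(4)
extended = np.hstack([P, np.ones((P.shape[0], 1))])   # append 1-coordinate
print(np.linalg.matrix_rank(extended))                # 5: affinely independent
```

Every point also satisfies all the original inequalities ywc <= xw, so they are feasible integer solutions sitting on the face.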

So, let me look at another example now, because this was kind of too easy.

Right? So, the formulation that we had, had

already some of the facets. So, we're going to look at another

problem where it's not so obvious. And this problem is called node packing.

Okay? And the key idea is once again, you have

a graph you know, with a variety of vertices and then you have edges between

these vertices. And what you want to do is to find the

largest number of nodes such that no two of them are connected by an edge.

Okay? So we want to pack as many, that's why

it's called node packing, right? You want to pack as many vertices as you

can, but they can't be connected by an edge.

Okay? So this is a packing you know, this is a

beautiful packing with one vertex. Okay?

And of course as soon as I take you know, vertex 1, I can't take anything else

because they are all connected to it. Okay?

So this is now a packing which has two

nodes. Okay?

Okay? So these two guys are not connected with

each other, and therefore this is a packing.

Okay? Now, a question that you may ask is that,

is there a better packing? Is there a packing where I can actually

pack three nodes? Okay?

So look at this graph and try to think: can you see a packing with three

nodes? Okay?

So, we're going to come back to that. Okay?

But what we're going to first do is to look at node packing and see if we can

express it as a MIP. Okay?

And once again, it's always the same story.

Right? What are going to be the decision

variables, and what are going to be the constraints?

Okay? So in this particular case, the decision

variables are easy. We're going to decide, if a particular

node is actually inside the packing or not.

Okay? So whether a particular vertex is inside the

packing, and then we have constraints you know, linking two nodes.

And what we know is that if we take this node, any node which is adjacent to it,

which means that there is an edge between them, cannot be inside a packing.

So, we'll put this linear inequality to tell you know, to tell the MIP that you

know, this node and that node, because they are linked by an edge, cannot be

inside a packing at the same time. Okay?

And this is the formula that you see here.

Okay? So, you see that the you know, x1 to x6

are basically variables which are associated with every one of the

vertices. Okay?

They're basically telling you if that particular vertex is in the packing.

And then you see this very simple inequalities that are basically telling

you, ooh you know, node x1 and node x2 cannot be packed you know, at the same

time because they are linked by an edge. So, essentially for every one of the

edges you have these simple inequalities. And obviously every one of these

variables has to be zero or one, and the linear relaxation here is basically

going to relax the integrality by saying that they are between zero and one.

Okay? So, this is the linear relaxation of

the problem. Okay?

So when you look at this, you can think ooh, but what is the linear relaxation

going to do? And what I told you last time is that the

linear relaxation is going to be evil, it's never going to do what you want.

Right? It's always going to try, in

this particular case where you maximize, to make the packing as large as possible here.

Okay? So, it's always going to try to do

better. Okay?

And in this particular case. Okay?

The linear relaxation. Okay?

Is basically going to assign one half to every one of the decision variables.

And then the value of the objective function is going to be six times one half,

which is three. So, it's basically telling you that the

maximum you will ever be able to do is to pack three nodes on this simple example.
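As a sanity check, we can reproduce this with an LP solver. The lecture's exact graph isn't written out in the transcript, so as a stand-in I use a hypothetical 6-node wheel (vertex 1 joined to every vertex of the 5-cycle 2-3-4-5-6), which behaves the same way: the relaxation with only the edge inequalities puts one half on every variable, for an objective of three.

```python
import numpy as np
from scipy.optimize import linprog

# Wheel graph: hub vertex (index 0) plus the 5-cycle 1-2-3-4-5.
edges = [(0, i) for i in range(1, 6)] + [(1, 2), (2, 3), (3, 4), (4, 5), (5, 1)]

# max sum(x)  s.t.  x_u + x_v <= 1 per edge, 0 <= x <= 1
# (linprog minimizes, so we negate the objective)
A = np.zeros((len(edges), 6))
for row, (u, v) in enumerate(edges):
    A[row, u] = A[row, v] = 1
res = linprog(c=-np.ones(6), A_ub=A, b_ub=np.ones(len(edges)), bounds=(0, 1))

print(round(-res.fun, 6))   # 3.0 -- every variable sits at one half
print(np.round(res.x, 6))
```

For this graph the all-one-half point is the unique LP optimum, so the relaxation is fractional "in the worst fractional way", exactly as the lecture describes.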

Okay? But of course this is terrible.

Right? So this is fractional, in

the worst fractional way. Right?

So can we do better than that? Okay?

So now we have to think. You know, you look at this problem

and you're asking yourself, is there any kind of property

of the solution that would strengthen my relaxation?

Okay? So, we want to find you know, a property

of the solution. So essentially, a constraint which is

going to be satisfied by all of the integer

Okay? And this particular case, there is a very

useful concept that we find everywhere in optimization, which is cliques, the

concept of cliques. Okay?

So when you look at this thing here. Okay?

So, you see this you know, you see basically two cliques.

Okay? And so, when you see a clique like that

with four vertices, and all the vertices in a clique are connected to all the

other ones, how many vertices can you actually you know, select in a packet?

Well, at most one, because if you select one, it's connected to

everything else. So you basically cannot select any of

the other vertices. Okay.

So as soon as you find a clique, you know exactly that you can at most, that you

can select at most one of the vertices in that clique.

Okay? So what are we going to do, is take the

formulation and add all the clique constraints that were present in the

instances that we looked at. Okay?

So this is and so we can do actually better than that, we're going to look at

the maximal clique. And what is the maximal clique?

It's the largest clique that we can find. Okay?

So, it's a clique that essentially cannot be extended further.
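A clique that cannot be extended further can be found greedily; this is a hypothetical helper of my own, not code from the course: start from one vertex and keep adding any vertex adjacent to everything already in the clique, until nothing more can be added.

```python
def grow_maximal_clique(adj, start):
    """Greedily extend {start} to a maximal clique.
    adj maps each vertex to the set of its neighbours."""
    clique = {start}
    changed = True
    while changed:                       # repeat until no vertex can be added
        changed = False
        for v in adj:
            if v not in clique and clique <= adj[v]:
                clique.add(v)
                changed = True
    return clique

# Hypothetical graph: {1,2,3,4} form a clique, vertex 5 hangs off vertex 4.
adj = {
    1: {2, 3, 4}, 2: {1, 3, 4}, 3: {1, 2, 4},
    4: {1, 2, 3, 5}, 5: {4},
}
print(sorted(grow_maximal_clique(adj, 1)))  # [1, 2, 3, 4]
```

The result depends on the starting vertex (starting from 5 gives the maximal clique {4, 5}), which is fine here: any maximal clique yields a valid cut.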

Okay? So, when you see this clique

here, you can say oh, but there is a clique with three vertices.

Okay? But that click can extended, so we won't

state that popular clique. We're going to take the clique with four

vertices instead, because it's larger and therefore it's more constraining.

Okay? So we're going to take this maximal

clique for every one of these, well, for every you know, collection of vertices.

Okay? So, as soon as you have a maximal clique.

Okay? Let's say for node one to five, you can

state the constraint which says at most one of these nodes can be selected

in the solution. Okay?

So, in this particular case, we said that the sum of x1 to x5 has to be smaller or

equal to one. Okay?

So now, note what we can show actually: we can show that a maximal

clique is a facet. Okay?

And if you want to actually know what the proof is doing, the kind

of reasoning for finding all these affinely independent points that you will have to

do is to say ooh, but if I have a clique like this, and I take another vertex.

Okay? We know that, since this clique is

maximal, there is at least one edge with one of

these vertices that is not there.

So, it's missing at least one edge with one of these vertices.

And that's how you will be able to find these affinely independent points.

Okay? So, this is essentially the MIP now

with all these constraints.

Okay? So, we remove all the other ones, because

they were basically subsumed by the clique constraints.

Okay? They were weaker than the clique

constraints, so the fact that x1 and x2 could not be assigned the same value here

is subsumed by the fact that we know that there is a clique between x1, x2, and x3.

And therefore you know, it subsumed the other one.

So we basically have all the clique constraints here.

Okay? And that's the new linear

relaxation that we are going to use. Okay?

Now, once again, you know, this linear relaxation is really clever.

What is it going to do? Well, one of the things that it's

going to notice of course is that x1 is all over the place.

So if we give a value to x1 essentially, it's going to take you know,

some value for every one of these constraints.

So, what it's going to do is basically ignore x1, put x1 to 0 and then once

again you will have a completely fractional value for all of the other

constraints. Okay?

So when you do this, you know, what is the linear relaxation giving

you? It's giving you five times one half.

Okay? Which is what?

Which is 2.5. So we know already that the best packing

we will ever be able to do is two. Okay?

So that's essentially telling you it's actually a strong relaxation in this

particular case. Okay?
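We can check this bound computationally too. As before, the exact graph isn't spelled out in the transcript, so this sketch of my own uses a hypothetical 6-node wheel whose maximal cliques are the five triangles around the rim: the LP with clique cuts gives 2.5, while brute force over all integer solutions confirms the best packing is 2.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# Stand-in graph: wheel with hub vertex 0 joined to the 5-cycle 1-2-3-4-5.
# Its maximal cliques are the five triangles {0, i, i+1} around the rim.
cliques = [(0, 1, 2), (0, 2, 3), (0, 3, 4), (0, 4, 5), (0, 5, 1)]

A = np.zeros((len(cliques), 6))
for row, clq in enumerate(cliques):
    A[row, list(clq)] = 1

# LP relaxation with clique cuts: max sum(x), sum over each clique <= 1.
res = linprog(c=-np.ones(6), A_ub=A, b_ub=np.ones(len(cliques)), bounds=(0, 1))
print(round(-res.fun, 6))          # 2.5 -- down from 3 with edge cuts only

# Brute-force integer optimum over all 0/1 vectors: the best packing is 2.
best = max(
    sum(x) for x in itertools.product([0, 1], repeat=6)
    if all(sum(x[v] for v in clq) <= 1 for clq in cliques)
)
print(best)                        # 2
```

The gap between 2.5 and 2 is exactly the point made next: the clique cuts strengthen the relaxation, but they do not give the whole convex hull.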

but one of the things you have to remember here, okay, is that we don't

have the convex hull of the node packing you know, the node packing, you know,

solution points. Right?

Because this solution is still fractional.

So what that means is that, these clique constraints are not completely

characterizing the problem; you will need other kinds of cuts.

Okay? And there are other beautiful cuts that you

can generate, as well for doing this. But you know, this was just illustrating

some of the cuts. You improve the linear relaxation you

know, nicely, but they are not necessarily capturing all the facets.

Okay? So, let me now talk a bit about

another example that we saw before. Okay?

So, this was graph coloring. Okay?

So, we saw that you know, what we were looking for is how many colors do

we actually need to color this map of Europe.

Okay? Well, a small subset of Europe.

Okay? And so one of the things that we have is

having this linear you know, this MIP representation, where we would assign you

know, a zero-one decision variable to decide whether a particular

vertex 'v' was assigned color C. Okay?

So, so when you look at this relaxation, what we had was the fact that you know,

the objective function had to be greater than all the possible colors.

That you know, every country had to be assigned to exactly one

color. And that you know, adjacent countries could

not be assigned to the same color. That's what we had.

Okay? So, this linear relaxation was

actually pretty terrible; the only thing that we had was: we need at

least two colors. Okay?

Which is kind of obvious. Right?

So, if two countries are adjacent, you need two colors.

Okay? So can we do better?

Can we actually find you know, some new constraints that can actually strengthen

this linear relaxation? Okay?

And so what we saw you know, what we just saw is the concept of clique.

Right? So, can you actually you know, use that

concept in this particular case as well? Okay?

So, let's look at this graph coloring problem in terms of the

graph. Okay?

So you see basically you know, you have

a vertex for every one of the countries and then an edge as soon as they

are adjacent. And what you see now is that there are a

bunch of cliques. So this is a clique of size three, this

is a clique of size four. Okay?

So you see that there are basically plenty of cliques there.

And once again, once you have that, you, you can do some kind of reasoning about

the number of colors that you need in a clique.

Okay? So if you have a clique of size three.

Okay? How many colors do you need?

Well, all these vertices are adjacent to each other, so you need at least three

colors. If you have four, you know how many

colors do you need: at least four. Okay?

So in a sense, what you can do is add these clique constraints, and these are

some of the constraints that you can use. Okay?

So here this is adding a three clique,

a three-clique constraint.

that if you look at the value of these vertices that are in a clique, and if you

sum the value that they have, they have to be greater or equal to three.

Why three? Because we have color zero, we have color

one, we have color two; 0 + 1 + 2 is the smallest sum we can have with three

colors. And that's three.

Okay? If we have a four clique, we can do

exactly the same thing, but now it's zero plus one plus two plus three, which is 6.

So we're basically taking all these

Okay? And we are basically saying that the sum

of these colors that they have okay, has to be at least six.
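We can sanity-check this cut by brute force; a quick sketch of my own, not from the course: in a k-clique every vertex must get a distinct color index, the smallest possible indices are 0, 1, ..., k-1, so the sum is at least k(k-1)/2.

```python
from itertools import permutations

def clique_cut_rhs(k):
    """Right-hand side of the coloring clique cut: 0 + 1 + ... + (k-1)."""
    return k * (k - 1) // 2

# Enumerate every way to give a 4-clique distinct colors from {0..5}
# and check that the cut sum(colors) >= 6 is never violated.
violations = [
    assign
    for assign in permutations(range(6), 4)   # distinct colors per vertex
    if sum(assign) < clique_cut_rhs(4)
]
print(clique_cut_rhs(3), clique_cut_rhs(4))   # 3 6
print(len(violations))                        # 0
```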

Okay? So we can put these constraints in there,

once again clique constraints, and we can see what the relaxation is going to do.

In this particular case, what you can see is that when you add these three-clique

constraints, to find the optimal solution you need nine nodes, and

the proof of optimality requires 41 nodes. When you use the four cliques.

Okay? So you get many fewer nodes: you get four

for finding the optimal solution and nine nodes for proving optimality.

Okay? So, what is nice is that you

basically can add these constraints: they strengthen

the relaxation, they shorten the proof of optimality, and they speed up finding the

optimal solution. Okay?

So, this is basically giving you the concept of polyhedral cut.

Okay? So the way to think about it is that we

are looking at constraints, okay, that are actually going to tighten the

relaxation by using the structure of your problem.

Okay? You can show that they are facets of the

convex hull; that's really the best thing you can do, because these cuts are as

strong as possible in a sense. If you had all of them, you would just

use linear programming. Okay?

So what we'll do next time is look at more interesting ways of

generating these polyhedral cuts, and we'll look at them you know, also

for the traveling salesman problem. Thank you.