
Ladies and gentlemen good morning or good afternoon or good evening.

We are moving to the third part of this class, which is mathematical programming.

And this lecture is going to be the first part of that third part.

And that's going to be about linear programming.

Okay. So, linear programming what we are going

to do in the lecture is first find out what is a linear program, and then talk

about convexity and geometry. Okay, so let me give you the context

here. So, linear programming was invented by

George Dantzig in 1947, and it's one of the most fundamental tools in, in

combinatorial optimization. It has been a

growing area for, for decades. And actually, people are still working

very actively in this, and as I will tell you, you know, there are still some

very interesting open issues in that area.

But the, the really important thing for us is that this is a really fundamental

tool for doing a lot of different things in optimization.

Okay. The other thing which is very nice in

linear programming is that you have, you have these two views.

You have the geometric view. You have the algebraic view.

And there is a beautiful connection between them.

So, we'll switch typically from, you know, the algebraic view to the

geometric, to the geometry view, and so on and so forth.

And today, mostly we'll talk about the geometry.

But the next lecture is going to be about linking the two things together, okay.

So, fundamental tool, beautiful connection between, you know, visual, you

know, geometrical intuition, and then the algebraic, you know,

expression of, of the problem. This is what a linear program is, okay?

So, what you see at the top is basically minimizing a linear objective function.

Okay? Subject to a set of linear inequalities,

okay? Now, one of the things that you have to

see here is that the variables will have to be non-negative, all of them.

So, in this expression, okay, so the x i are the variables, okay?

Real numbers. And the ci's, the ai's, and the bi's are

the constants. Okay.

So, in this particular expression, obviously you have n variables.

You have n constraints. Okay.

The variables, as I said, are non-negative.

They have to take, you know, a non-negative, you know, value.

Okay. And all the constraints here are

inequalities. Okay.

So, that's the basic form of a linear program.

Okay. So, when we think about linear

programming, we think about this. Okay, minimizing, you know, a set of a

linear objective function subject to a set of linear inequalities.
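The slide itself isn't in the transcript, but the standard form being described can be written out as follows (a sketch; I write m for the number of constraints, and the direction of the inequalities is an assumption consistent with minimization):

```latex
\min \sum_{j=1}^{n} c_j x_j
\quad\text{subject to}\quad
\sum_{j=1}^{n} a_{ij} x_j \;\geq\; b_i \quad (i = 1,\ldots,m),
\qquad x_j \geq 0 \quad (j = 1,\ldots,n).
```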

Okay, so, here is the first question. Can I maximize?

Yes you can. Okay, so the way you would do that is

simply by replacing the maximization of your linear objective by the negation,

the, the minus of that particular objective, and you will minimize that.

Okay? So, if you minimize, you know, minus your

objective function. Okay?

So you will, in a sense, maximize.

So, if you want to maximize, just minimize and negate the entire, all the

coefficients of your objective function. Now, can a variable xi take both positive

and negative values? Can you actually represent that?

You can, okay? And here is how you do it.

You take variable xi, and everywhere inside your linear program, you replace

it by a difference of two variables. xi plus and xi minus.

Okay. So, essentially you take this term here

and you replace xi everywhere by that particular term, okay?

Now, why does this work? Okay.

So, both of these variables have to take non-negative values.

Okay, that's, you know, what linear programming requires.

All the variables have to take non-negative values.

But when you look at this difference, this difference can take both positive

and negative values. You want a positive value, give it to xi

plus. Okay.

And the other term can be zero. You want a negative value?

Give a positive value to xi minus, a zero to xi plus, and you get a negative value.

Okay? So, in a sense, it's easy, right?

So, you have a variable, it can take both positive and negative values.

What you do is you replace it by this term everywhere inside the linear

program. Okay?
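The two tricks just described, maximizing by negating the objective and replacing a free variable xi by xi plus minus xi minus, can be sketched in code. The LP here is stored as plain Python lists, and all the names and example numbers are illustrative, not from the lecture:

```python
# Sketch of the two rewriting tricks, on a tiny LP stored as plain data.
# An LP here: minimize c.x subject to row.x >= b for each row, x >= 0.

def maximize_via_minimize(c):
    """To maximize c.x, minimize (-c).x and negate the optimal value."""
    return [-cj for cj in c]

def split_free_variable(c, rows, i):
    """Replace the free variable x_i by x_i_plus - x_i_minus everywhere:
    each coefficient of x_i becomes a pair (coef, -coef)."""
    new_c = c[:i] + [c[i], -c[i]] + c[i + 1:]
    new_rows = [row[:i] + [row[i], -row[i]] + row[i + 1:] for row in rows]
    return new_c, new_rows

# Example: objective 3*x0 - 2*x1, one constraint x0 + 4*x1 >= 5,
# and suppose x1 may be negative.
c = [3, -2]
rows = [[1, 4]]
new_c, new_rows = split_free_variable(c, rows, 1)
print(new_c)     # [3, -2, 2]
print(new_rows)  # [[1, 4, -4]]
```

After the split, the new LP has three non-negative variables, and any sign of the original x1 is representable as the difference of the last two.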

Now, what if you have an equality constraint?

Very simple as well. You take that equation, you replace it

by two inequalities. Okay.

One smaller-or-equal, one greater-or-equal, and the greater-or-equal you can, you know,

just transform into a smaller-or-equal as well by negating both sides.

And you have, essentially, an equation at that point.

Okay. So, very simple.
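In symbols, the rewrite just described looks like this (a sketch in my notation):

```latex
a^\top x = b
\;\Longleftrightarrow\;
\bigl( a^\top x \leq b \ \text{ and } \ a^\top x \geq b \bigr),
\qquad
a^\top x \geq b
\;\Longleftrightarrow\;
-\,a^\top x \leq -b .
```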

Okay. Now, what if my variable can take integer

values? That you can't do in a linear program.

Okay. So, this is something that we'll talk

about later in the class. Okay.

It's going to be what we call a mixed integer program.

But that is not a linear program. And you'll see, there is a fundamental

difference between these two. Okay?

Now, what about, you know, if I have a non-linear constraint?

Okay? Well, what we are talking about here is a

linear program. It's not a non-linear program.

It's called a linear program for a reason.

We only want to have linear, you know, linear constraints and a linear objective

function. And, and that gives me the opportunity to

tell you a story about George Dantzig, of course.

You know, when George Dantzig invented, you know,

linear programming, he was a graduate student.

And, you know, when he wanted to present these results, he went to this

conference, and there were plenty of, you know, big shots in optimization there.

And so, he presented these results. Okay, and then there was this big shot

in optimization, this big German guy with the big, deep voice, right.

And so, he stood up and said, "Mr. Dantzig, you very well know that the

world is non-linear." Okay?

And so, Dantzig was, like, shaken up a little bit: how am I going to answer that

question? And then, you know, at that particular

conference, [UNKNOWN], you know, the genius in Computer Science and Applied

Mathematics and Mathematics in general, was there, and he stood up and said,

"Mister, you know, George Dantzig stated his axioms.

Okay. He told you when you can apply, you know,

the method. If you have a problem that satisfies them, use

it, and if your problem doesn't, don't use it."

And I think, you know, that was really classy, and I'm sure, you know, George

Dantzig appreciated that a lot. Okay, so, essentially, no non-linear

constraints, no variable that can take integer values, that's outside the scope

of linear programming. Okay?

The rest here, you can express essentially, you know, the way you want.

Okay? Now, let me talk about geometry.

Right? So, we saw, we just saw the algebraic

form of what a linear program is. So, I want to talk a little bit about what this

means from a geometrical standpoint. So, I'm going to start with convex sets,

okay. And so, I gave you a couple of sets here,

okay, set of points like, you know, the points inside a circle, inside a square.

Inside this, you know, this kind of square where, you know,

the corners are rounded, right, and then the last shape is the star.

Okay. So, which one of these are convex set?

Okay. So, the intuition is very simple.

Take two points, draw a line between them, okay.

If all the points between these two, if all the points in the line between them

are inside the, the set, okay, that means that the set is convex.

Okay. This is convex.

Okay, we are happy, okay. So, this is the star I'm showing.

Let's take two points. Let's draw a line between them.

What do you get? You know, there are some points here

which are outside the set. Therefore, this is not a convex set.

Okay, not happy. Okay, now this is 2D.

2D is really boring, so let's move to 3D. Okay, and this is 3D.

Okay. So, you see this shape, okay.

It's a convex set, okay. You take the two points.

You draw a line between them. And look.

You know, however you look at it, okay. All the points in between, all the

points on this line are inside the convex set.

That's what the convex set is all about, okay.

So, let's move to the next one. Okay, here is the next one, you know,

okay? So you see this thing.

It looks like a convex set, right? But look at this line.

I've selected two points, I drew the line between them, and here we go.

Whoops. You know, you see there, you see, you can

see clearly, you know, the line goes outside the, the set.

This is not a convex set. Okay?

So, beautiful thing here. Okay?

You can look, you know, you take two points, you draw a line between them,

it's outside a set, that's not a convex set.

Okay? So, this is once again, you know, the

geometric intuition. What I want to show you now is the

algebraic expression. Okay?

So, if you have a set of points v1 to vn, okay.

So, we will define the convex combinations of these points.

And the way you define these convex combinations, and that's essentially the

points which are on this line, right, so that works, of course, in arbitrary

dimension. Okay.

So, what you do is you multiply every one of these by these

Lambdas. Lambda 1, Lambda 2, Lambda n, if you have

n, you know, points. And that's going to be your convex

combination of all these points v1 to vn.

Provided, of course, that the sum of these Lambdas is one and they are all

greater than or equal to zero. Okay.

So, if I have an expression like this, okay, and I make sure that the

Lambdas sum to 1, and all of them are greater than or equal to

zero, then I have a convex combination of all the points v1 to vn.

Okay. So, this is how I can define a convex

set. Okay?

So, why? Because essentially, now a convex set can

be defined in the following fashion. So you say, okay, a set

essentially is convex if you take points of the set and

all the convex combinations of the points that you have in the set are inside the

set. Okay?

So, basically, take a bunch of points, okay, the set is going to be

convex if whatever, you know, convex combination

you take, those points are inside the set.

Okay? So, that's a convex set.
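As a small numerical sketch of this definition (the sets and points here are my own examples, not the lecture's figures):

```python
# A convex combination of points v1..vn is sum(lambda_i * v_i) with the
# lambdas non-negative and summing to 1. We check the definition on a disk
# (convex) and on a punctured plane (not convex).

def convex_combination(points, lambdas):
    assert all(l >= 0 for l in lambdas) and abs(sum(lambdas) - 1.0) < 1e-9
    dim = len(points[0])
    return tuple(sum(l * p[d] for l, p in zip(lambdas, points))
                 for d in range(dim))

in_disk = lambda p: p[0] ** 2 + p[1] ** 2 <= 1.0       # a convex set

# Two points in the unit disk: their midpoint stays inside.
a, b = (1.0, 0.0), (-1.0, 0.0)
mid = convex_combination([a, b], [0.5, 0.5])
print(in_disk(mid))   # True

# A non-convex set: the plane minus a hole around the origin.
punctured = lambda p: p[0] ** 2 + p[1] ** 2 >= 0.5
print(punctured(a), punctured(b), punctured(mid))  # True True False
```

The last line is exactly the "draw a line, it leaves the set" test: both endpoints are in the punctured set, but their convex combination falls in the hole.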

Now, so this is an interesting result that we're going to use all the time.

Okay, so essentially, you know, the intersection of a set, of convex sets,

okay, is itself convex. Okay?

And here is the proof. Okay?

So, very, very simple proof. So, let's, you know, denote by

this notation here the intersection of all the sets Si, okay, from 1 to n.

Okay? So, we take this intersection of all

these sets. Okay?


And so, what we are going to do is we are going to consider, okay, a bunch of

points inside this intersection. Okay?

So, let's say v1 to vn are the points inside that intersection.

Okay? So it follows, obviously, because

they are inside the intersection, that basically means that they are inside

every one of the sets. Okay?

So the points inside the intersection, by definition, are inside every one of

these sets, okay? Now, what do we know about the sets, we

know that the sets are convex, okay? So, essentially what that means is that

all the convex combinations of these points are also in the set Si, and that's

valid for S1, S2, S3, for all the sets, right.

So now what do I know? I know that every convex combination of

these points are in every one of the sets.

And therefore, they are in the intersection of this set.

Okay? And that means what? That means that essentially, all the

convex combinations of the points in that, inside the intersection are inside

the intersection. And therefore, the intersection is

convex, okay? Now why this is interesting?

Okay, so look, when we look at linear programs we have these inequalities,

right? And these inequalities, okay, so they are

basically representing half spaces, and these half spaces are convex.

You can prove that. Take the definition that I've shown you

before, and try to prove that this is convex.

And the proof is very simple, right? Three lines.

Okay? But you can prove, essentially, that

every one of these half spaces are convex.
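The three-line proof the lecture alludes to might go as follows (a sketch, using the greater-or-equal form of the constraints; the same argument works for the other direction): if two points satisfy the inequality, so does any convex combination of them.

```latex
a^\top x \geq b,\quad a^\top y \geq b,\quad 0 \leq \lambda \leq 1
\;\Longrightarrow\;
a^\top\bigl(\lambda x + (1-\lambda)\,y\bigr)
 = \lambda\, a^\top x + (1-\lambda)\, a^\top y
 \;\geq\; \lambda b + (1-\lambda)\,b \;=\; b .
```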

And so, when you look at the linear programs, and the constraints of the

linear programs, what you have is essentially a bunch of these half spaces

that are defining the feasible region of the linear program.

And so, all of them, you know, are convex, and the intersection of them is

also convex. Okay.

And so, this is, this is called a polyhedron.

Okay. The intersection of all these half spaces.

And if it is finite, that means it doesn't go to infinity in any of the

dimensions, it's called a polytope. Okay?
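In the notation used so far (a sketch; the symbols are mine, not from the slides), the feasible region is

```latex
P \;=\; \bigl\{\, x \in \mathbb{R}^n \;:\; a_i^\top x \geq b_i \ (i = 1,\ldots,m),\ \ x \geq 0 \,\bigr\},
```

a polyhedron; when P is bounded, it is a polytope.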

So, big idea here is, you know, we are dealing with convex sets.

Every one of these inequalities is basically defining

a convex set which is called a half space.

You have many of them, so to have a solution you have to satisfy all of

them. That means you have to be in the

intersection of all of them, and I told you that the intersection of a set of

convex sets was convex. And therefore, we know that the intersection of these

things is going to be convex. Okay?

So, let's look at these things a little bit geometrically, right?

So, this is, this is a hyperplane, okay? So, this hyperplane defines,

you know, a half space. And let's assume that, you know, we are

looking at the solution on this side. That's part of our feasible region.

Then, I can take another one. Right?

So I take another half space. Okay?

And I, you know, refine my feasible region, and the little guy over there,

that's called a vertex. Okay?

And you're going to see today, I'm going to really be obsessed with these

vertices. Okay?

But that's another one. I can take another one, and then the

colors are getting uglier and uglier, and uglier, but I'm getting those

intersection of the sets. Actually, I don't really know if they are,

you know, ugly, because I'm colorblind, I don't see anything here, but anyway I was

told that they are really ugly. So we are not going to represent them,

we're going to represent the feasible region like this.

So these are essentially all of my hyperplanes.

They are defining half spaces everywhere, okay, think of them as

inequalities. And then, this is the feasible region of

the linear program. Okay?

You see the algebraic version of every one of the inequalities that I have been

mentioning here. Okay?

So this is in 2D, right, so it's boring, so we're going to see 3D as well in a

moment, right. But before that what I want to do is

to tell you a little bit more definitions here, so a face, okay, we'll talk about

faces and facets. So, a face is basically the intersection of

finitely many hyperplanes, okay. And so, if it is in dimension n minus 1,

we'll call it, you know, we'll talk about facets.

And this, you know, you'll see later on why these facets are important, not right

now, not really for linear program, but later on they will become very important.

And then, of course, when you are in dimension 0, you will have what, you

know, what we call vertices, okay. And you're, you're going to see I'm

completely obsessed, today I'm going to be completely obsessed with these

vertices, okay? So, this is in 3D.

Okay, so this is the first hyperplane we see here.

Okay, a single hyperplane. And we're going to increase the number of

hyperplanes there, but you see the first one over there, right, in three

dimensional. Okay, and now look at this.

This is two hyperplanes, okay, so you see.

You know, the top, and the, you know, the, mostly vertical and mostly

horizontal hyperplanes. You see now that these two hyperplanes

intersect, the intersection of these two hyperplanes is a line.

Okay? You see the line clearly over here.

Okay? And notice, this is a really beautiful

shot. Okay?

So, you see these three hyperplanes now. Okay?

So, two, when two of them meet here at these lines that you can see, okay, here

or there. And, of course, when the, the, all three

of them meet, you get this unique point that you see at the back of the picture

of them meets, you get this unique point that you see at the back of the picture

there. Okay, so that's essentially what you get

when you have a finite intersection of these hyperplanes.

Okay, in 3D, when you have two, you have a line.

When you have three, you have a vertex, well you have a point, okay.

Okay, so what you see on the screen now is a huge number, not a huge number, a

nice number of hyperplanes. And you get a very nice polytope that you

see over there, okay. So, you can see all the facets, you can

see all the vertices. You can see, you know, this entire

polytope. And that's essentially the kind of

objects that, you know, we are going to manipulate inside, you know, linear

programming. Okay?

So, we have this polytope, okay, nice polytope, and we add a new hyperplane.

Okay? So you can see the new hyperplane.

It intersects with that polytope. Okay?

So, that's what we want to look at, here. Okay?

So, we have the polytope, we have a new hyperplane, okay, that's a new

constraint in your problem, right? And so, what's going to happen?

And this is what happens. Look at this new polytope, now.

We have a new facet, okay? And a bunch of new vertices here that you

can see. You know, this is from the top, this is

from the side, okay? So, what this

hyperplane did was cutting through the polytope, and then adding a new facet,

okay? And then, adding a couple of new

vertices as well, okay? Once again, what you see there is

basically what all these you know hyperplanes are creating.

And they are creating this beautiful polytope with a bunch of facets and a

number of vertices here. Okay.

Now, this is a very important result. Okay.

So, every point, okay, inside a polytope, the polytope that I've just

shown you, which is defined by these hyperplanes and half spaces, is a convex

combination of the vertices. Okay.

So, that's a very interesting result. The basic thing is that every point

inside this polytope, right, can be expressed as a combination of the

vertices. I have this point here, okay it's a

combination of these two vertices, I have this point, it's a combination of these

three vertices here. Okay.

So, that's a very interesting result. And we're going to use it for, you know,

actually proving some very interesting results in, in the theory of linear

programs. Okay.
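In 2D, the result just stated can be checked directly: a point inside a triangle is a convex combination of its three vertices, with the Lambdas given by barycentric coordinates. A sketch (the triangle and the point are made-up examples):

```python
# Write a point inside a triangle as a convex combination of its vertices
# (the 2D case of the result above), solving the 2x2 system by Cramer's rule.

def barycentric(p, v1, v2, v3):
    """Solve p = l1*v1 + l2*v2 + l3*v3 with l1 + l2 + l3 = 1."""
    (x, y), (x1, y1), (x2, y2), (x3, y3) = p, v1, v2, v3
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    l1 = ((y2 - y3) * (x - x3) + (x3 - x2) * (y - y3)) / det
    l2 = ((y3 - y1) * (x - x3) + (x1 - x3) * (y - y3)) / det
    return l1, l2, 1.0 - l1 - l2

v1, v2, v3 = (0.0, 0.0), (4.0, 0.0), (0.0, 4.0)
l1, l2, l3 = barycentric((1.0, 1.0), v1, v2, v3)
print(l1, l2, l3)  # 0.5 0.25 0.25
# All Lambdas are >= 0 and sum to 1, so (1, 1) is a convex
# combination of the three vertices.
```

For a point outside the triangle, at least one Lambda comes out negative, so no convex combination exists, which matches the picture.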

Now why, why am I so obsessed with these vertices?

And this is the reason why. If you have a nice linear program as I,

as I have shown you so far, okay.

What you know, okay, is that at least one of the points where the objective value,

this guy here, right, is minimal, okay, is going to be at a vertex.

It doesn't mean that, you know, there are no other points where the solution is

optimal. Okay?

But there will be at least one of them which is located at a vertex, okay?

So, in a sense, that's why we are obsessed with these vertices, right?

So, because we know that one, at least one of the optimal solutions, okay, is

going to be at one of these vertices. Okay and I'm going to show you the proof.

Okay, but essentially, so, first let's look at it in a, in a

geometric way, right? So, this is a hyperplane, right?

So, I'm basically saying, you know, I have the, I, I take the objective

function, and I assume that it's equal to a positive value B.

Okay, that defines an hyperplane and you see this hyperplane here, right?

So, what I'm trying to do in a sense is minimize is minimize that b, right?

So, in a sense, I'm trying to take this hyperplane, okay, and bring it down, okay?

And so, this is what we are going to do, okay, so when we optimize,

we get to a particular point where this

value for b is going to be minimal.

And so, this is the point that you see over there.

And that point is on a vertex. Okay.

And we're going to call it b star. And so, you'll see, optimal solutions are

always going to have a little star in these lectures.

Okay. So, in a sense, once again we took, we

took that, that hyperplane and we tried to minimize the value of b because that's

the value of the objective function. And so, we move that hyperplane, and when

it's minimal you can be sure that at least one of these points is a vertex.


So, once again, you know, we take this thing, we bring it down.

It's beautiful. Okay.

And we have the optimal solution there. Now this is boring.

Okay. So let's do it in 3D.

And now, ladies and gentlemen, the 3D version right?

So, what you see there is more beautiful polytopes.

And you see this hyperplane there, which is defining the objective function, okay?

And what you are trying to do in this one is maximize, okay?

So, we're going to take this hyperplane and do exactly what we did in 2D,

right? So, we set the objective equal to a constant,

and we are going to try to drive it towards one of the vertices of this

polytope. And this is the result, right.

So, you see this hyperplane now and you see it intersecting finally with this

polytope, okay. And this is probably the best angle here,

right. So, you see that?

It's intersecting with that vertex, okay.

Everywhere else it doesn't touch the polytope, okay.

And so this is, ooh, this is also very nice, okay?

So, you can see that now, this objective, it has been maximized, and it's basically

touching a vertex of this polytope. So, I told you, you know, that at least

one of the optimal solutions is located on a vertex, and this is the proof, okay?

So, it's a beautiful proof, very simple proof, okay?

And that's why I want to go over it, you know, it's something that you can

understand inside the lecture. Okay?

So, let's talk about x star, which is the point with the minimum objective value.

That's where, you know, that's where the linear program is

optimal. Okay?

This is a point somewhere, okay, and we assume that, you know,

basically, this is the optimal value.

This is one of the optimal solution. Okay?

Now, we don't know where this point is.

Right? So, we want to show that it's on a

vertex. Right?

So, we, we take that point, okay, and we know that every point inside a polytope

can be expressed as a convex combination of the vertices.

Okay? So, let's assume that these vertices are

v1 to vt. We can re-express x star, okay, this

optimal point, as a convex combination of these vertices.

And we have these, you know, Lambda 1, Lambda 2, Lambda t, and we

know that the sum of these Lambdas has to be equal to 1.

And we're going to use that information later on.

Okay? So now, the objective value at optimality

is c times, you know, x star of the optimal solution.

And therefore, you know, we can re-express it in terms of the vertices and the

Lambdas by, by simply multiplying the, you know, right-hand side by c.

Okay? So, we basically have at that point that

cx star is equal to Lambda 1 times, you know, c times v1, plus, you know,

the same thing for the other vertices. Okay.

And so, at this point, you know, we just have done, you know, some very simple,

you know algebraic manipulation, you know, knowing that every point inside

the, every point inside the polytope is the convex combination of the, the

vertices. So, what I'm going to do now is assume

that this point, you know, this x star is not on a vertex, okay.

And I'm going to assume it's, it's not a vertex, so what I'm assuming is

essentially that cx star is always strictly smaller than c times vi.

Okay, with vi being any one of the vertices; it's going to be smaller than for all of the

vertices. Okay?

And what I'm going to do is derive a contradiction of course.

Okay? And that will prove that it actually

has to be at least on one of these vertices.

Okay? And how do we get the contradiction?

Well, let's start, let's start with cx star.

So, we start with cx star. Okay?

And then, we simply rewrite, you know, the formula that was shown before, but

now at this point you see c v1. What do we do?

We, we made the hypothesis that cx star was smaller than c Vi.

So, we can replace every one of these c vi, okay, by cx star, and we know

that cx star is going to be greater than that, because I'm replacing every one of

these components by something which is smaller.

Okay? Now, I factor out the cx star, okay, and I

get the sum of the Lambdas times cx star. I know, because this is a convex

combination, that the sum of the Lambdas is equal to 1, so I get cx star.

So, what I get is the formula, well, well, it's basically the contradiction

here. I have that cx star is greater than cx

star which is really difficult to do. Right.

And therefore, you know, one of the hypotheses that I took, you know, was

wrong, and this is the hypothesis that the value of the objective was, you know,

smaller than at all the vertices. So, I know that essentially, you know,

the optimal solution has to be on a vertex, okay, very simple proof right.
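Written out, the chain of (in)equalities in the proof is (a sketch following the lecture's notation):

```latex
c\,x^* \;=\; \sum_{i=1}^{t} \lambda_i \,(c\,v_i)
\;>\; \sum_{i=1}^{t} \lambda_i \,(c\,x^*)
\;=\; (c\,x^*) \sum_{i=1}^{t} \lambda_i
\;=\; c\,x^*,
```

where the strict inequality uses the hypothesis that c x* < c v_i at every vertex; c x* > c x* is the contradiction.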

So, at this point still, so we know we have a lot of information, right?

So, we know that the, the solution of a linear program is located at one of these

vertices. Okay, so now, I have a geometric

algorithm to do that, right? I can solve the linear program

geometrically. I just draw this polytope, okay, and then

I enumerate all the vertices, and for every one of them, I compute the value of the

objective function. And that's going to be giving me the

optimal solution. Okay, so that's the geometric view, okay.
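The geometric algorithm just described, enumerating the vertices and evaluating the objective at each, can be sketched in 2D. This brute-force version is only an illustration (it assumes a bounded optimum exists and is not the simplex method of the next lecture); all names and the example LP are mine:

```python
# Solve a tiny 2D LP by vertex enumeration: candidate vertices are the
# intersections of pairs of constraint lines (including the x >= 0 axes);
# keep the feasible ones and evaluate the objective at each.
from itertools import combinations

def solve_2d_lp(c, A, b):
    """Minimize c.x subject to A[i].x >= b[i] and x >= 0."""
    # Treat x >= 0 as two extra constraints so every vertex is a line pair.
    rows = A + [[1.0, 0.0], [0.0, 1.0]]
    rhs = b + [0.0, 0.0]
    best = None
    for (a1, b1), (a2, b2) in combinations(zip(rows, rhs), 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < 1e-12:
            continue                      # parallel lines, no vertex
        x = (b1 * a2[1] - a1[1] * b2) / det
        y = (a1[0] * b2 - b1 * a2[0]) / det
        feasible = x >= -1e-9 and y >= -1e-9 and all(
            r[0] * x + r[1] * y >= q - 1e-9 for r, q in zip(A, b))
        if feasible:
            val = c[0] * x + c[1] * y
            if best is None or val < best[0]:
                best = (val, (x, y))
    return best

# minimize x + y subject to x + 2y >= 4 and 3x + y >= 3, x, y >= 0
val, (x, y) = solve_2d_lp([1.0, 1.0], [[1.0, 2.0], [3.0, 1.0]], [4.0, 3.0])
print(round(val, 6), round(x, 6), round(y, 6))  # 2.2 0.4 1.8
```

The number of candidate vertices grows combinatorially with the number of constraints, which is exactly why the algebraic machinery of the next lecture is needed.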

So, what we're going to do next time is to show how this algebraic, this

geometrical view, can be, you know, can be translated to an algebraic view which

we will be able to solve in a little bit, you know, in a little bit easier fashion.

Okay, thank you.