Hello everybody, welcome back to the linear programming unit. Today we're going to go through the proofs from the duality lecture. Remember, last time we showed that to each linear program we can associate a dual program, which is basically attempting to find a non-negative combination of our constraints that puts a bound on the objective function. In particular, we have the duality theorem, which says that a linear program and its dual always have the same numerical answer. Today we're going to prove that. Before we get into the proof, let's build a little intuition. One way to think about this is that the region defined by linear constraints is sort of a well: it's bounded by linear walls, and you want to minimize a linear objective, say the height of a point inside this well. Now, if you know some physics, one way to do this is to take a ball that's pulled down by gravity and drop it into the well; gravity will pull it down to the lowest point. When the ball reaches this point, it is at rest, and that means the forces acting upon it must balance: the force of gravity pulling it down is balanced by the normal forces from the walls pushing it back up. When you write this down, a non-negative combination of the normal vectors, each pointing orthogonally out of a wall, has to equal the downward-pointing vector from gravity. And if you work out what this means in terms of the equations, the downward-pointing vector is the direction given by your objective, and the normal vectors are given by the coefficient vectors of the equations defining the walls. So if you work out exactly what this says, it is exactly linear programming duality: you actually have a solution to the dual program that matches your primal.
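To make the force-balance picture concrete, here is a short sketch of the derivation. The notation is my own choice (the lecture doesn't fix symbols): the primal is "minimize c·x subject to Ax ≥ b," and a_i denotes the ith row of A, the outward normal of the ith wall.

```latex
% Ball at rest at the optimum x^*: gravity points along -c, and each touching
% wall pushes back along its normal a_i with some magnitude y_i \ge 0.
% Force balance says these cancel:
\sum_i y_i a_i - c = 0
\quad\Longleftrightarrow\quad
\sum_i y_i a_i = c, \qquad y_i \ge 0.
```

These multipliers y_i are exactly a feasible set of dual variables, which is why the physical picture predicts that a matching dual solution exists.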
And if you note which walls you use, namely only the walls that the ball is actually touching, that gives you a taste of why complementary slackness might be true: you only get to use the walls the ball is touching, which are exactly the constraints that are tight. In any case, let's look at an actual proof. The first thing we're going to do is, instead of looking at an optimization problem, look at a solvability problem. We'd like to say when there is a solution to Ax ≥ b and x·v ≤ t. We claim there is a solution to this system unless some non-negative combination of these constraints yields a contradiction, namely the inequality 0 ≥ 1. Of course, if you can derive that from a combination of your constraints, it's clear your constraints cannot all be simultaneously satisfied. But it turns out this is an if and only if, and if we can prove it, that will actually be enough to handle the original problem: basically, the only way the system can fail to be solvable is if, by combining the constraints not including the last one, we can conclude that x·v needs to be strictly bigger than t. Okay, so let's see how that works. Suppose we have a bunch of constraints, and we look at all possible combinations of them: c1E1 plus c2E2 plus dot dot dot plus cmEm, where the ci are non-negative constants. The thing to note is that the set of inequalities that can be derived in this way, call it C, is actually a convex set. Now what happens if the inequality 0 ≥ 1 is not in this set C? Well, you've got a point that's not in a convex set, and that means there has to be a separating hyperplane. Now, what this separating hyperplane is is a little bit weird: it's a hyperplane in the space of linear inequalities. But it turns out that from this hyperplane, you can extract a solution to the original system.
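Here is a minimal sketch of what such a certificate of infeasibility looks like. The system and the multipliers are my own toy example, not the lecture's: each constraint a·x + b·y ≥ c is stored as a triple, and a non-negative combination of the two constraints cancels the variables but leaves a positive right-hand side, which is exactly a "0 ≥ 1"-style contradiction.

```python
# Hypothetical infeasible system (not from the lecture): each row (a, b, c)
# encodes the constraint a*x + b*y >= c.
constraints = [
    (1.0, 1.0, 2.0),     # x + y >= 2
    (-1.0, -1.0, -1.0),  # -x - y >= -1, i.e. x + y <= 1
]

# Non-negative multipliers c_i, chosen here so the variables cancel.
multipliers = [1.0, 1.0]

# Form the combination sum_i c_i * E_i, coefficient by coefficient.
a = sum(m * con[0] for m, con in zip(multipliers, constraints))
b = sum(m * con[1] for m, con in zip(multipliers, constraints))
rhs = sum(m * con[2] for m, con in zip(multipliers, constraints))

# The left-hand side vanished but the right-hand side is positive:
# the derived inequality reads 0*x + 0*y >= 1, certifying infeasibility.
print(a, b, rhs)  # 0.0 0.0 1.0
```

The point of the proof is the converse direction: whenever no such multipliers exist, a separating hyperplane hands you an actual solution.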
So this is a little bit abstract; let's look at something slightly more concrete. We have a system of linear inequalities, all of the form (something) x plus (something) y is at least 1, and we want to know whether there is a solution to this system. Well, we're going to plot these inequalities: each inequality ax + by ≥ 1 becomes the point (a, b) in the plane. Now, the contradiction 0 ≥ 1 is not among these inequalities; it would be the point at the origin. We also consider non-negative linear combinations of the inequalities, and it turns out that the combinations you can derive are exactly the gray region here. It's still the case that 0 ≥ 1 is not in this region, but the region is convex, so there has to be a separating hyperplane. Here the separating hyperplane is a + b > 0. So what does this mean? It means that all of our inequalities were of the form ax + by ≥ 1 with a + b strictly positive. But then, if we take x and y equal to the same big number, ax + by equals (a + b) times that number; since a + b is positive and the number is big, that's at least 1. And so we actually have a solution: x = y = N for a sufficiently large number N. So what we were able to do here is say: 0 ≥ 1 is not a non-negative combination of our constraints, so there is a separating hyperplane, and by looking at the form of this hyperplane, we were able to find an actual solution to our system. And this is how it works in general. Okay, so that's the proof of duality; we should also look at complementary slackness. Remember what this said: we have our linear program and its dual, and if you have a solution to the dual where yi is strictly bigger than zero, this can happen only if the ith constraint in the primal is tight. Proving this is actually not that hard.
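The two-dimensional argument can be checked numerically. The coefficients below are a hypothetical instance (the lecture's plot isn't reproduced here): every constraint pair (a, b) lies on the positive side of the separating hyperplane a + b > 0, and that alone is enough to build a solution of the form x = y = N.

```python
# Hypothetical 2D system: each pair (a, b) encodes a*x + b*y >= 1.
system = [(2.0, -1.0), (0.5, 1.0), (-1.0, 3.0)]

# Separation: every constraint satisfies a + b > 0.
assert all(a + b > 0 for a, b in system)

# Along the diagonal x = y = N, each constraint evaluates to (a + b) * N,
# so taking N at least 1 / min(a + b) makes every left-hand side >= 1.
margin = min(a + b for a, b in system)
N = 1.0 / margin
x = y = N

print(all(a * x + b * y >= 1 for a, b in system))  # True
```

Notice that the construction only used the hyperplane's form, never the individual constraints, which mirrors how the general proof extracts a solution.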
You take an optimal solution x to the primal and a matching solution y to the dual. The best you can do in the primal is x·v = t. But by duality, you get a matching lower bound from the dual: there is a combination of the linear inequalities Ei, namely the sum of yiEi, that yields the inequality x·v ≥ t. Now each of these inequalities Ei is a true inequality for that value of x, and the final inequality that we get is actually tight. So we have a sum of inequalities that adds up to a tight inequality, and the only way that can happen is if every inequality we actually used is tight. That means for each i, either the inequality Ei is tight or yi = 0, and that proves complementary slackness. Okay, that's all we have for this lecture. Come back next time and we will start talking about formulations and different ways of looking at linear programming problems.
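As a sanity check, here is a tiny hypothetical LP (my own example, not one from the lecture) where we can verify both strong duality and complementary slackness by hand: the primal and dual objective values coincide, and every dual variable that is strictly positive corresponds to a tight primal constraint.

```python
# Primal: minimize x1 + x2  subject to  x1 + 2*x2 >= 3  and  2*x1 + x2 >= 3.
A = [[1.0, 2.0], [2.0, 1.0]]
b = [3.0, 3.0]
c = [1.0, 1.0]

x = [1.0, 1.0]          # primal optimum, objective value 2
y = [1.0 / 3, 1.0 / 3]  # dual optimum for max b.y s.t. A^T y <= c, y >= 0

# Strong duality: the primal and dual objective values agree.
primal_value = sum(ci * xi for ci, xi in zip(c, x))
dual_value = sum(bi * yi for bi, yi in zip(b, y))
assert abs(primal_value - dual_value) < 1e-9

# Complementary slackness: whenever y_i > 0, constraint i must be tight
# (zero slack), since the summed inequality is tight overall.
for yi, row, bi in zip(y, A, b):
    slack = sum(aij * xj for aij, xj in zip(row, x)) - bi
    if yi > 1e-9:
        assert abs(slack) < 1e-9

print("complementary slackness verified")
```

Here both dual variables are positive and, consistently, both primal constraints hold with equality at the optimum.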