Right, so let's update the Lagrange multipliers again.
So that goes up to eight,
and that's going to change the values of the positions in this yellow violated region.
And we'll evaluate.
And okay, now we've got a better neighbor, we're not at a local minimum anymore,
and we're going to move down to that neighbor that we were at before.
And when we evaluate it, it's a local minimum.
Right, so we're going to go through this process again.
The second constraint is violated.
We update its Lagrange multiplier, and that updates our Lagrangian function.
And that's also going to change all of the values in the red squares.
And then we look at our local neighborhood and we find this new position is better.
So we'll move there, and it's a local minimum again!
All right, we'll update the Lagrange multiplier once more.
And that'll update everything, and we'll look at our neighbors.
And finally we find a place which is not, [LAUGH] not these positions we've been
bouncing between, not these two.
So we've finally moved out of that region.
And we find that this also has a better local neighbor, we move there, and
now we're at a local minimum.
And this time the difference is that all the constraints are satisfied.
And in fact, we found the global minimum.
So you can see it took us a long time to get out of this area.
But eventually we got out, because the Lagrange multipliers made these
areas unattractive,
even though they looked attractive in the original objective function.
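To make the process we just walked through concrete, here's a minimal sketch of the DLM loop in Python. Everything here is an assumption standing in for the example on the slides: objective and neighbours describe the problem, each constraint returns a non-negative violation degree (zero when satisfied), and alpha is the multiplier step size.

```python
def dlm_search(x, constraints, objective, neighbours, alpha=1.0, max_iters=10000):
    lam = [0.0] * len(constraints)  # one Lagrange multiplier per constraint

    def lagrangian(y):
        # Penalised objective: f(y) + sum_i lambda_i * violation_i(y)
        return objective(y) + sum(l * c(y) for l, c in zip(lam, constraints))

    for _ in range(max_iters):
        # Greedy step: move to the best neighbour under the current Lagrangian.
        best = min(neighbours(x), key=lagrangian, default=x)
        if lagrangian(best) < lagrangian(x):
            x = best  # downhill move, as in the walkthrough above
        elif all(c(x) == 0 for c in constraints):
            return x  # local minimum with no violations: constrained local minimum
        else:
            # Local minimum that still violates constraints: bump the multipliers
            # of the violated constraints, making this area less attractive.
            for i, c in enumerate(constraints):
                lam[i] += alpha * c(x)
    return x
```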
So let's have a thought about what we've done.
So you can see that different initial points can matter strongly in this
Discrete Lagrange Multiplier method, right?
It's just like other local search methods: where we start changes where
we end up.
And we can only find constrained local minima;
we've got no guarantee that we find the global minimum.
So we need to think about combining this with other methods like restarts, so that
if we find a constrained local minimum, then we can go and look for another one.
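As a sketch of that combination (the hooks random_start, run_dlm, and feasible are hypothetical names, not part of DLM itself):

```python
# Run the DLM search from several starting points and keep the best
# solution that satisfies all constraints.
def dlm_with_restarts(random_start, run_dlm, objective, feasible, n_restarts=20):
    best = None
    for _ in range(n_restarts):
        x = run_dlm(random_start())          # one full DLM run per restart
        if feasible(x) and (best is None or objective(x) < objective(best)):
            best = x                          # keep the best feasible result
    return best
```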
And we haven't talked about cycling and plateaus.
So if we're on a plateau which violates a constraint,
we're going to keep making those positions less and less attractive,
and so eventually we'll move out of them.
But if we're on a plateau which has no constraint violations,
then that plateau is a local minimum, and
we just have all the same problems with plateaus that we've seen before.
But we can combine this with tabu lists as one way of handling plateaus.
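For instance, a minimal tabu mechanism might look like this; the tenure of 10 and the helper names are arbitrary illustrative choices:

```python
from collections import deque

# Keep the last few visited positions tabu so plateau moves cannot cycle.
tabu = deque(maxlen=10)                  # tabu tenure of 10 (arbitrary)

def pick_move(x, neighbours, lagrangian):
    candidates = [y for y in neighbours(x) if y not in tabu]
    if not candidates:
        return x                         # everything nearby is tabu: stay put
    tabu.append(x)                       # current position becomes tabu
    return min(candidates, key=lagrangian)
```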
So using Lagrange multipliers raises lots of questions.
How fast should we update the Lagrange multipliers?
That's this value of alpha.
How often should we update Lagrange multipliers?
So we have shown an example where we basically only update them at local
minima.
But you can also update them more frequently;
you could even update them every step of the local search.
So that's a possibility.
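In code the knob is just where this update is called from; the rule itself, mirroring the sketch earlier with alpha as the step size, is one line per constraint:

```python
# Grow each multiplier in proportion to its constraint's current violation.
# Call this only at local minima (as in the example above), or after every
# local-search step for the more aggressive variant.
def update_multipliers(lam, constraints, x, alpha=1.0):
    for i, c in enumerate(constraints):
        lam[i] += alpha * c(x)           # c(x) == 0 for satisfied constraints
```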
And another question is, how do you split constraints?
If you split constraints to get more multipliers,
then you have this disadvantage: you've got to keep track of more things.
You've got more constraints to evaluate, and
more multipliers to keep track of.
But the advantage is you have a less rugged search space.
So if I have a single multiplier for alldifferent,
then if I keep on violating alldifferent, it seems like it's very important,
and its multiplier will get bigger and bigger.
But it might be that I was violating alldifferent because of some disequalities
early on in the search, and later on, I'm violating different disequalities.
And so by having a multiplier for each disequality,
I've got finer-grained tracking of which constraints are difficult to satisfy.
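As an illustration of that trade-off, here are the two granularities for alldifferent; the violation functions below are sketches, returning how many clashes a position has:

```python
from itertools import combinations

# Coarse: one constraint (hence one multiplier) for the whole alldifferent,
# counting every clashing pair together.
def alldifferent_violation(xs):
    return sum(1 for a, b in combinations(xs, 2) if a == b)

# Fine-grained: one constraint per pairwise disequality x[i] != x[j],
# so each difficult pair gets its own multiplier.
def split_alldifferent(indices):
    return [
        (lambda xs, i=i, j=j: 1 if xs[i] == xs[j] else 0)
        for i, j in combinations(indices, 2)
    ]
```

For n variables the split version gives n(n-1)/2 multipliers instead of one, which is exactly the bookkeeping cost mentioned above.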
Another thing is: should we ever reduce multipliers?
Typically we end up doing this, at least for arithmetic stability.
And another reason is that constraints which were really important and
hard to satisfy early in the search may have become easy, and
we don't really have to worry about them anymore.
And actually, their large multipliers just make it harder to find the constrained
local minimum that we're looking for.
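One simple way to do that reduction is a periodic multiplicative decay; the factor here is an illustrative assumption:

```python
# Periodically shrink all multipliers. Constraints that stay satisfied see
# their penalties fade away; still-violated ones get re-inflated by the
# normal updates. This also helps keep the numbers arithmetically stable.
def decay_multipliers(lam, factor=0.9):
    for i in range(len(lam)):
        lam[i] *= factor
```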
Right, in summary,
this Discrete Lagrange Multiplier method is a principled way to adjust penalties.
And that's really important, because we've already seen that this
penalty-based approach is really the only general way that a local search method
can handle arbitrary constraints.
And so this DLM search will find a constrained local minimum,
given an infinite amount of time.
It might wander around, but eventually it will get there.
It's still not a complete answer, right?
We need to know how fast to adjust the multipliers.
We need to think about how many multipliers we need, that is,
how we assign multipliers to our constraints.
When should we adjust the multipliers?
And when should we possibly make them smaller?
And we can combine it with other methods to increase exploration and avoid cycling.