0:00

Okay. Discrete optimization.

Welcome back, guys.

I'm going to talk about something phenomenal today.

I'm going to talk about Large Neighborhood Search.

It's a technique which is amazing, truly amazing, we use it all of the time.

Okay? So what is it?

So basically what I'm going to show you is that this is

a hybridization of local search and

constraint programming or local search and MIP.

Okay.

It's basically combining these two techniques, although they have different kinds of functionalities.

But when you put them together

you get something which is way larger than the sum of the parts.

Okay?

So let me give you the first instantiation using CP; that's where it originated from.

It's also where it's mostly used these days.

So you combine CP and local search.

And you start with the first step, which is finding a feasible solution using CP.

You know by now that CP is very good at finding feasible solutions;

it goes through the search to find one. Okay?

Then the second step that you do is you select a neighborhood.

You take the solution that you got from CP and you select a neighborhood

and, and what you're going to see is

that this neighborhood is going to be large, okay?

Not a tiny neighborhood where you, you know, swap things.

And then what you're going to do, third, is use CP to actually optimize over that neighborhood, finding the best neighbor in that neighborhood.

Okay.

And then the last part is basically repeating them forever.

Okay, as long as you have time, you keep doing that to improve the quality of the solutions that you have.

Okay, so this is large neighborhood search: it's a combination of constraint programming and local search.

But the local move is a big move: you define a big search space, and then you use constraint programming to explore it, okay?
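The loop just described can be sketched in a few lines of solver-agnostic Python. This is only an illustration: the callback names are mine, and in practice a CP or MIP solver would sit behind `best_neighbor`.

```python
def lns(find_feasible, select_neighborhood, best_neighbor, objective, iters):
    solution = find_feasible()                    # step 1: e.g. CP finds a feasible solution
    for _ in range(iters):                        # step 4: repeat while time remains
        nbh = select_neighborhood(solution)       # step 2: select a large neighborhood
        candidate = best_neighbor(solution, nbh)  # step 3: e.g. CP finds its best neighbor
        if objective(candidate) < objective(solution):
            solution = candidate                  # keep only improvements
    return solution

# Tiny demo: minimize an integer; each "neighborhood" is a window below it.
best = lns(lambda: 10,
           lambda s: range(max(0, s - 3), s + 1),
           lambda s, nbh: min(nbh),
           lambda s: s,
           iters=10)
```

Each pass can only improve the incumbent, so the loop is an anytime algorithm: you stop whenever the time budget runs out.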

What is the neighborhood, you're going to tell me?

What is this neighborhood?

Here is one example, for instance; it is only one example, but it is useful for a variety of applications.

So what you're going to do is look at the solution and fix a set of variables.

You believe that these variables are nice.

Okay?

So you keep them.

And the neighborhood will be finding values for the remaining variables, the ones that you are not fixing.

Okay?

So in a sense, you find a solution and fix some of the variables.

Okay?

Leave the others free, and find better values for those variables such that you improve the quality of the solution that you have found so far.

And then repeat.

Find another set of variables that you fix.

Okay?

And reoptimize that particular neighborhood.

Okay? And you iterate these two steps.

Okay?
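As a minimal sketch of this fix-and-reoptimize idea, here is a made-up toy problem (pick 0/1 variables whose weighted sum hits a target), where brute force over the freed variables stands in for the CP solve:

```python
import random

weights, target = [3, 7, 2, 9, 5, 4], 15     # toy data, made up for illustration

def cost(x):
    # How far the selected weights are from the target (0 is optimal).
    return abs(sum(w * b for w, b in zip(weights, x)) - target)

def reoptimize(x, free):
    # Stand-in for the CP solve: brute-force only the freed variables,
    # keeping every other variable fixed at its current value.
    best, best_cost = list(x), cost(x)
    for mask in range(1 << len(free)):
        y = list(x)
        for i, v in enumerate(free):
            y[v] = (mask >> i) & 1
        if cost(y) < best_cost:
            best, best_cost = y, cost(y)
    return best

random.seed(7)
solution = [0] * len(weights)                # feasible starting solution
for _ in range(100):                         # iterate the two steps
    free = random.sample(range(len(weights)), 3)   # free 3 variables, fix the rest
    solution = reoptimize(solution, free)
```

Because only the freed variables move, each inner solve is a tiny combinatorial problem, which is exactly where an exact method like CP shines.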

Now, how do you choose a subset of variables?

Typically, this is problem specific.

You will look at the structure.

Okay?

And, you know, and you will extract something

from the structure of the problem that tells

you, ooh, I want to keep that particular,

particular part fixed and then explore the rest.

I'll show you an example

later on that is very intuitive.

Typically these are properties like, you know, spatial notions or temporal notions that are going to be very useful for finding these neighborhoods.

Okay?

So you typically exploit the problem structure, although in some particular cases you can have a completely random neighborhood that behaves very well.

Okay? It really depends.

2:53

So why do we use LNS? For two reasons.

First, CP is really good at finding feasible solutions and at optimizing very small combinatorial spaces.

Okay?

So in a sense, you exploit these strengths of CP for finding a high-quality solution, and you exploit local search for scalability.

Okay?

So in a sense, you exploit two facets of these two techniques and put them together.

Of course, you can generalize that to MIP.

Okay? You can find

a feasible solution using MIP and you can exploit the neighborhood using MIP.

That works as well.

Okay?

Essentially, you can plug in any method instead of CP or MIP inside a large neighborhood search; it is basically technology independent.

It's the basic idea of finding a feasible solution, defining the

neighborhood, finding a good neighbor in

that neighborhood, and repeating the process.

Okay?

So let me illustrate that with a very interesting example.

Okay? So this is from industrial engineering.

It's a problem with a robot.

Okay?

So a robot has to visit a set of

locations and do work at every one of these locations.

Okay?

It has a service time that it has to spend at every location.

That's the time that it takes to actually perform the task at that location.

It also has a time window for when

it can apply this service at that particular location.

There is also a distance between the various locations.

So: time windows, service times, and distances for traveling between the locations. And that distance is actually asymmetric, which complicates the problem tremendously.

And so what you want to do is find a path for

the robot to visit all the locations, that's called a Hamiltonian path.

Okay?

And we have to make sure that we satisfy all the time window constraints on when we can actually perform the service at the various locations.

And we also minimize the total travel distance.

These two requirements actually have a lot of tension between them.

And that's what made the problem very difficult.

So this is how you can model it using constraint-based scheduling.

So I don't expect you to understand all the details

but let me just give you a high level overview here.

You model every task that the robot has to perform as an activity.

And its duration is the service time, the time that the task takes.

We model the robot itself as a unary resource.

It's a vehicle here, okay?

And we have a transition time matrix here, which basically tells us how much time it takes to move from one location to another one.

And so the optimization is really minimizing the sum of the transition times over all the locations, subject to the time window constraints and the fact that the robot can only be at one particular place at a time, okay?

So the key aspect that I want to show you here is that this is a very, very simple model, okay?

So it doesn't require a lot of sophistication.

It's a very natural modeling of the problem.
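The model itself needs a scheduling solver to run, but its constraints are easy to state in plain Python. Here is a sketch, with made-up numbers, that computes the earliest-start schedule for a fixed visit order, checks the time windows, and sums the asymmetric transition times:

```python
# All data is invented for illustration.
service = [2, 3, 2, 4]                            # service time at each location
windows = [(0, 10), (4, 12), (8, 20), (14, 30)]   # (release, deadline) per location
travel = [[0, 3, 6, 9],                           # travel[i][j] != travel[j][i] in general
          [2, 0, 4, 7],
          [5, 3, 0, 5],
          [8, 6, 4, 0]]

def evaluate(order):
    """Return (total_travel, feasible) for visiting the locations in `order`."""
    t = windows[order[0]][0]                      # start service at the first release time
    total = 0
    for prev, cur in zip(order, order[1:]):
        total += travel[prev][cur]
        # Earliest time service can start at `cur`: finish at `prev`, travel over,
        # and wait for the window to open if we arrive early.
        t = max(t + service[prev] + travel[prev][cur], windows[cur][0])
        if t > windows[cur][1]:                   # missed the deadline
            return total, False
    return total, True
```

For instance, with these numbers `evaluate([0, 1, 2, 3])` is feasible with total travel 12, while `evaluate([3, 2, 1, 0])` violates the window at location 2.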

Now, how do we apply large neighborhood search on this?

Well, let's look at what the schedule looks like.

So every one of these lines here is basically the time window of a particular location, of a task that we have to achieve, and the blue part is essentially when the robot is there, performing the particular service.

So.

This is one particular solution; the only thing that you don't see here is the spatial information.

And so what I'm going to show you is basically how large neighborhood search works.

So look at this: what we're going to do is say, I'm going to fix everything outside this box.

So this part of the schedule and that part of the schedule.

Assume that this is essentially the path of the robot.

And what I will do is basically say, ooh, I want to reoptimize that subsequence: find the best subsequence from this particular task to that particular task, and reoptimize that.

Okay?

Then I get a better solution.

And then I move that window and do that

at another part of the schedule and keep repeating that.

That's what large neighborhood search is about.

Take a sub part of the problem and

reoptimize that, keep the rest completely fixed.

Okay?

So this is very structure based.

You are basically looking at which tasks are consecutive inside a schedule.

You can do something completely random.

Pick a random set of tasks, fix the rest, and reoptimize the tasks that you selected randomly.

That's also large neighborhood search.

Okay?

So you can imagine a variety of neighborhoods and you

can reoptimize every one and you can alternate between them.

That's what large neighborhood search is about.
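Here is a self-contained sketch of that sliding-window neighborhood on a made-up asymmetric distance matrix. To keep it short it drops the time windows and only minimizes travel; brute-forcing the permutations inside the window stands in for the CP solve:

```python
import itertools
import random

random.seed(1)
n = 8                                             # toy instance, invented data
travel = [[0 if i == j else random.randint(1, 9) for j in range(n)] for i in range(n)]

def path_length(order):
    return sum(travel[a][b] for a, b in zip(order, order[1:]))

def reoptimize_window(order, start, size):
    # Fix everything outside [start, start + size); find the best reordering
    # of the tasks inside the window (the "CP solve" of this sketch).
    best, best_len = order, path_length(order)
    for perm in itertools.permutations(order[start:start + size]):
        cand = order[:start] + list(perm) + order[start + size:]
        if path_length(cand) < best_len:
            best, best_len = cand, path_length(cand)
    return best

order = list(range(n))                            # initial feasible path
for _ in range(30):                               # slide a window of 4 tasks around
    order = reoptimize_window(order, random.randrange(n - 3), 4)
```

Each window solve never worsens the path, and alternating window positions, or mixing in random subsets of tasks, is exactly the iteration described here.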

Okay.

So let me give you a sense of how beneficial this can be for some particular problems.

So, as I told you, this is an industrial engineering problem, a real problem.

Now, it was solved initially with a branch-and-cut algorithm, a very sophisticated algorithm, by a team at the Zuse Institute in Berlin, one of the top groups in mixed integer programming. They wrote 50 pages of algorithms and theorems and proved all kinds of polyhedral results for solving this, and these are the best solutions they found.

And, for instance, you know, that 48 there is not an optimal solution; they can't prove optimality.

These problems are so hard that they couldn't prove optimality at that time.

Okay?

So what we did was apply the very simple model that we have seen, and put large neighborhood search on top of it.

And we were looking at how quickly we could find solutions that were beating these numbers.

And essentially

in five minutes of CPU time and this is like a decade ago right?

So we could essentially improve almost all the problems in five minutes and find better solutions using the very simple model and large neighborhood search.

And to actually improve the larger instances, we needed about, you know, ten minutes.

Okay?

And so this is the kind of value of large neighborhood search: it's a technique that is extremely good for finding high-quality solutions quickly, and this is evidence of that, a simple case study, okay?

So, use it.

It's really nice for a lot of problems if you want to scale, if you want to find high-quality solutions quickly, okay?

So, use it as I said.

This is a very interesting technique and you know, I'll see you next time.

Thank you very much.