So, welcome to module two of course three.

Jimmy, what's happening in the story?

Well, Zhuge Liang was still with Shennong.

He helped Shennong to nurture a Gen Mu so that Shennong

would be able to reap the best Yao Shao in the shortest time.

And after that, Zhuge Liang met up with Hou Yi.

For Hou Yi, the biggest problem was there were too many suns in the sky.

Ten of them actually. Ten.

So Hou Yi had to shoot down a few of them.

And Zhuge Liang also helped Hou Yi

to solve the drought problem of his village as well.

Because of the abnormal climate change,

many of the hibernating monsters were awakened,

and then they were roaming around and attacking villages.

So Zhuge Liang and Hou Yi joined hands to arrange armies to protect the villages.

Now, in terms of the technical stuff,

I think we are still talking about constraint programming as the solving technology.

We are, we are. But this time there's a difference:

last week, we looked at satisfaction problems.

In this module, we're looking at optimization problems.

Right.

Now, optimization by CP is incredibly naive.

Basically, look for a solution,

and once you find it, add a constraint saying,

"Find me a better solution."

Well, also, we're essentially solving optimization using satisfaction.

Exactly! And there are two kinds of ways of doing that.

One is retry optimization,

which is sort of like: when we find a solution,

go back to the start,

and restart the search with this new constraint.

Or we can use branch and bound, where we're going to

do one search of the whole big search tree,

but we're going to keep track of the best solution we've found so far,

so we don't end up doing too much work.

Either of them could be better in different circumstances.
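The two approaches can be sketched on a toy brute-force satisfaction solver (a hypothetical illustration with made-up function names; real CP solvers propagate and backtrack instead of enumerating):

```python
from itertools import product

def solve(domains, constraints):
    """Toy satisfaction solver: return the first assignment (in
    lexicographic order) satisfying every constraint, or None."""
    for assignment in product(*domains):
        if all(c(assignment) for c in constraints):
            return assignment
    return None

def retry_optimize(domains, constraints, objective):
    """Retry optimization: each time we find a solution, restart the whole
    search with an extra constraint demanding a strictly better objective."""
    best = None
    while True:
        sol = solve(domains, constraints)
        if sol is None:
            return best  # no better solution exists, so best is optimal
        best = sol
        bound = objective(sol)
        constraints = constraints + [lambda a, b=bound: objective(a) < b]

def branch_and_bound(domains, constraints, objective):
    """Branch and bound: one pass over the search space, keeping the best
    solution found so far and skipping anything that cannot beat it."""
    best, best_obj = None, float("inf")
    for assignment in product(*domains):
        if objective(assignment) < best_obj and all(c(assignment) for c in constraints):
            best, best_obj = assignment, objective(assignment)
    return best
```

Both find an optimal solution; retry pays for repeated searches from scratch, while branch and bound keeps one search alive and only tightens its bound.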

I remember in module number one,

we were talking about search strategy.

It turns out that search strategy is even more important in solving optimization problems.

Absolutely. Because basically, when we're doing an optimization problem,

we want that first solution to be good, right?

The sooner we find good solutions,

the more we can reduce the amount of search we need to do in the rest

of the search. Basically, do less.

And so, if we find a good solution first,

we actually end up doing less work.

And so, it's even more important to have

good search strategies when we're doing optimization problems.
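That effect shows up even in a toy minimization. The hypothetical sketch below minimizes x + y + z with a branch-and-bound loop and counts how often the incumbent (best-so-far) solution has to be updated, depending purely on which values the search strategy tries first:

```python
from itertools import product

def incumbent_updates(value_order):
    """Count incumbent updates made by a branch-and-bound loop minimizing
    x + y + z, as a function of the value ordering tried by the search
    (a toy illustration, not a real solver)."""
    best = float("inf")
    updates = 0
    for x, y, z in product(value_order, repeat=3):
        obj = x + y + z
        if obj < best:  # an improving solution: update the incumbent
            best = obj
            updates += 1
    return updates
```

Trying small values first finds the optimum immediately and the bound prunes everything else; trying large values first keeps finding weak solutions and improving on them, which in a real solver means all the search in between.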

We also looked at the heavy-tailed behavior of

search when we are trying to solve difficult problems.

Right. So heavy-tailed behavior, remember, is where sometimes I can find a solution very,

very quickly, I can solve the problem very,

very easily, but then a very similar problem,

an almost identical problem with just slightly different data,

can take a really, really long time.

How can we deal with this heavy-tailed phenomenon?

So, we saw that we can use restart here.

So, if we find that we're taking a long time,

we can guess that we're in one of these bad cases and just

restart the search and hope that we then fall into one of the easy cases.

And we see actually that if we use restart,

we can do a lot better because we avoid getting

stuck in these bad cases where we've made bad decisions early on in the search.

Okay.

We can't get out of them.

So I suppose when we're doing restart,

in the next run of the search,

we had better make use of

a different search strategy, or maybe even a randomized search strategy.

Absolutely. So if we're doing the restart,

we definitely don't want to be making exactly the same decisions we did last time.

So we want to use some kind of randomization

or some kind of search which depends on data that we've collected,

so that we try a different thing each time.

And there we saw a new kind of way of doing that.

Right. That's the failure-driven kind of search strategy, right?

Exactly. And these failure-driven approaches where we basically

concentrate on the variables which are causing us the most difficulty,

causing the most failure, turned out to be very,

very powerful, because they focus the search on the hard part of the problem.

And then once that's solved,

then maybe we can solve the rest without difficulty.

And it also tends to make

the search space very small.

So these failure-driven searches are

a very important addition to our tool bag of different searches.

So one of the examples is [inaudible] degree.

Absolutely. And that was the basic one that we used throughout this.

But there are other ones and better ones too.

And it combines and works very well with restart.

Absolutely.
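A minimal sketch of the failure-driven idea, in the style of weighted-degree heuristics (the scoring details here are illustrative, not any solver's exact algorithm):

```python
def wdeg_choose(domains, weights, cons_vars):
    """Pick the next variable to branch on, failure-driven style: each
    constraint carries a weight, which the solver bumps every time that
    constraint causes a failure, and we prefer the unfixed variable whose
    constraints are heaviest relative to its remaining domain size."""
    def score(v):
        # total weight of all constraints mentioning v, per remaining value
        w = sum(weights[c] for c, vs in cons_vars.items() if v in vs)
        return w / len(domains[v])
    unfixed = [v for v, dom in domains.items() if len(dom) > 1]
    return max(unfixed, key=score)
```

Variables touched by frequently failing constraints get chosen first, so the search concentrates on the hard part of the problem; and because the weights are data collected during the run, each restart naturally tries something different.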

Then finally, we looked inside some of our propagators.

Of course, one of the great advantages of constraint programming solvers is that they

have global propagators that encode some common, recurring sub-structure for you.

You don't have to think about them. And then they use

complicated algorithms inside to do their inference.

Again, I suppose the trade-off is between the strength

of the propagation and the speed at which we can do the propagation.

Exactly.

And so, we looked at alldifferent, where we could do the most propagation possible:

its domain propagator uses a complicated matching algorithm, right?

But then we also looked at cumulative,

which is for doing scheduling kinds of problems.

And there, it's a very,

very powerful propagator, but it actually doesn't do the most propagation possible.

In fact, we never do that kind of thing there.

So when we get to big, hard problems,

we're just trying to get a good trade-off:

how much propagation, for how much speed.

So actually, in dealing with global constraints and global propagators,

much of the time, we do not enforce the strongest propagation possible.

Yes. Rarely, in fact.

Right.

alldifferent is one of the cases where we do, but that's not one of the most common cases.
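The strength difference is easy to see on a toy instance. Below is a hypothetical comparison of what the naive not-equal decomposition of alldifferent prunes versus the strongest possible (domain-consistent) pruning; the latter is computed here by brute-force enumeration, which only works at toy sizes, whereas real solvers get the same result with a matching algorithm:

```python
from itertools import product

def decomposition_prune(domains):
    """Pairwise x != y pruning: a value is dropped only once some other
    variable is fixed to it (all the naive decomposition can achieve)."""
    doms = {v: set(d) for v, d in domains.items()}
    changed = True
    while changed:
        changed = False
        for v, d in doms.items():
            if len(d) == 1:  # v is fixed: remove its value elsewhere
                val = next(iter(d))
                for u in doms:
                    if u != v and val in doms[u]:
                        doms[u].discard(val)
                        changed = True
    return doms

def domain_consistent_prune(domains):
    """Strongest possible pruning: keep a value only if it occurs in some
    complete all-different assignment (brute force, toy sizes only)."""
    names = list(domains)
    support = {v: set() for v in names}
    for assign in product(*(domains[v] for v in names)):
        if len(set(assign)) == len(assign):  # all values distinct
            for v, val in zip(names, assign):
                support[v].add(val)
    return support
```

With x, y in {1, 2} and z in {1, 2, 3}, the decomposition prunes nothing, but domain consistency removes 1 and 2 from z, because x and y must use those two values between them.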

How about the workshop?

Yes. So what's happening in the workshop, Jimmy?

Well, Shennong prepared the herbs, and now

he has to turn the herbs into potions.

So he has to hire a whole bunch of alchemists.

And his job is really to decide the number of alchemists who have to work every day.

And of course, he would like to minimize

the number of alchemists that he will have to hire.

Right. Again, this is a different kind of workshop than we've seen in previous modules.

We're going to give you the model this time.

And we're going to actually give you a model with three copies of every constraint.

Because there are three different viewpoints on the decisions we can make.

And you're going to have to do two things simultaneously.

You're going to have to pick which of

the representations you should use for every constraint,

and what's the best search strategy to

put those constraints to work and find solutions as quickly as possible.

That is the multiple modeling that we learned in course number one.

Exactly. Multiple modeling in action and multiple modeling interacting with search.

And so, you'll be able to check which is the right way to do

the search and which is the right way to represent decisions.

So I suppose in this workshop here,

our learners will be able to experiment with some of

these advanced search strategies including [inaudible] degree and even restart.

Absolutely.

Different restart policies.
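One standard policy is the Luby sequence, where the failure limit before the i-th restart is some base limit scaled by the i-th Luby number (a widely used schedule in SAT and CP solvers; geometric growth is a common alternative). A sketch:

```python
def luby(i):
    """The i-th term (1-based) of the Luby sequence: 1, 1, 2, 1, 1, 2, 4, ...
    Restart number i is typically given a failure budget of scale * luby(i)."""
    k = 1
    while (1 << k) - 1 < i:               # smallest k with i <= 2^k - 1
        k += 1
    if (1 << k) - 1 == i:                 # i ends a complete block: emit 2^(k-1)
        return 1 << (k - 1)
    return luby(i - (1 << (k - 1)) + 1)   # otherwise recurse within the block
```

The sequence keeps retrying small limits (to catch the easy cases quickly) while occasionally allowing much longer runs, which is exactly the hedge you want against heavy-tailed behavior.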

Exactly. We expect you to play around with

all the new kinds of search that we taught you in this module.