So technically, what are we seeing this week?

So this week is all about functions.

And we started off looking at the assignment problem,

which is about building an injective function.

So the assignment problem, we have one set of objects, the domain, and

another set of objects, the co-domain.

And we're basically trying to assign one of these co-domain objects to each of

the objects in the domain, and that's an injective function.

And it really is the basis of where the all different

constraint comes from, because that's exactly what it encodes.
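
As an illustrative sketch, a pure assignment problem can be written in MiniZinc using alldifferent to enforce the injective function; the names and the profit data here are assumptions, not from the lecture:

```minizinc
include "alldifferent.mzn";

int: n = 4;                            % illustrative size
set of int: TASK = 1..n;               % the domain
set of int: WORKER = 1..n;             % the co-domain
array[TASK, WORKER] of int: profit;    % assumed benefit data

% worker_of[t] is the worker assigned to task t: an injective function
array[TASK] of var WORKER: worker_of;
constraint alldifferent(worker_of);

solve maximize sum(t in TASK)(profit[t, worker_of[t]]);
```

The alldifferent constraint is exactly what makes the function injective: no two tasks may share a worker.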

>> But assignment problems are easy to solve.

>> Yes, so if you just have a pure assignment problem, then

actually there are very specific techniques, mostly matching techniques,

which will solve those problems very, very quickly.

The problem is, in the real world, nothing is likely to be a pure assignment problem.

And so there'll be side-constraints, and

then we'll need to use more general technologies

like the discrete optimization technology which we're talking about in this course.

>> So you mean a side constraint will essentially break the easiness of an

assignment problem.

>> Yes, absolutely, the same problem can become arbitrarily difficult if we just

add a few of what look like simple side constraints.

>> We can also look at general functions.

>> Yes, so we move on to general functions and

now we can think of those functions in two different ways.

So we can think of them as a mapping from the domain to the co-domain,

just as the normal sort of functional viewpoint.

>> Yeah. >> But

we can also think of them as a partitioning,

where we're basically taking the objects of the domain and

partitioning them by the value that they're given in the co-domain.

So it's really breaking up the domain into sets, each labeled by one of the co-domain values.
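
A minimal sketch of the two viewpoints in MiniZinc (the names DOM, COD, f, and part are invented for illustration): the function array induces a partition of the domain, and a channeling constraint links the two views:

```minizinc
set of int: DOM = 1..6;    % domain objects
set of int: COD = 1..3;    % co-domain values, i.e. partition labels

% functional view: f maps each domain object to a co-domain value
array[DOM] of var COD: f;

% partition view: part[c] is the set of domain objects labeled c
array[COD] of var set of DOM: part;

% channel the two views: d is in part[c] exactly when f[d] = c
constraint forall(d in DOM, c in COD)(d in part[c] <-> f[d] = c);
```
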

>> Okay, so this module is all about functions.

>> Yep. >> We've also learned

some modeling techniques as well, right?

One of them is common sub-expressions.

>> Yes, so common sub-expressions are very important. If we have the same

thing that we're using in multiple places in our model, we should often

give it a name, give it an intermediate variable, and reuse that variable so

that we're telling our solver not to basically compute the same thing twice.
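
As a small illustrative sketch (the variables here are invented), naming a repeated expression once with an intermediate variable instead of writing it twice:

```minizinc
array[1..10] of var 0..100: x;

% common sub-expression named once as an intermediate variable
var int: total = sum(i in 1..10)(x[i]);

% both constraints reuse total, so the sum is only computed once
constraint total >= 20;
constraint total <= 80;
```
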

>> So common sub-expressions are not good things, so we should eliminate them.

>> Absolutely.

Now, MiniZinc will try to do this for you, but it's much

better if you do it yourself; then you're guaranteed to know they're eliminated.

Sometimes MiniZinc won't be able to determine that something is really

the same thing.

We also saw a new global constraint called global cardinality.

>> So global cardinality is a great extension of all different,

very much attached to the set-partitioning view of functions, which allows us to count

how many values in our domain are being assigned to each partition.
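
A minimal sketch of MiniZinc's global_cardinality constraint (the sizes and names are illustrative assumptions):

```minizinc
include "global_cardinality.mzn";

array[1..8] of var 1..3: f;        % function from 8 objects to 3 labels
array[1..3] of var 0..8: counts;   % how many objects get each label

% counts[c] = number of positions i with f[i] = c
constraint global_cardinality(f, [1, 2, 3], counts);
```

Here the counts array records the size of each partition class, which is exactly the counting view described above.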

>> So we're going to see more of global cardinality and

its relatives in chapters to come.

>> Absolutely, that’s an important global constraint.

>> We also saw symmetries, a special kind of symmetries called value symmetries.

>> Yes, so value symmetries are very common in discrete optimisation problems

where two values are interchangeable.

We saw it in the clustering problem, where the names of the two clusters really

didn't matter, and in order to defeat these symmetries,

we introduced the value_precede_chain global constraint.

Which is really designed exactly for

this: to get rid of these symmetries and leave only one possible answer.
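
As an illustrative sketch, symmetry breaking with MiniZinc's value_precede_chain (the cluster setup is assumed, not from the lecture): with interchangeable labels, it forces label c to appear only after label c-1 has appeared, so only one representative of each symmetric family of solutions remains.

```minizinc
include "value_precede_chain.mzn";

% cluster[i] is the cluster label of object i; labels 1..3 are interchangeable
array[1..6] of var 1..3: cluster;

% break the value symmetry: 2 cannot occur before the first 1,
% and 3 cannot occur before the first 2
constraint value_precede_chain([1, 2, 3], cluster);
```
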