We'll talk a little later in the course about some attempts people have made to link factorization to other forms of content, like keywords, to try to come up with more explainable matrix-based techniques for recommendation.

So, takeaways as we wrap up this intro: this is a clever and useful approach. If we can build one of these matrix factorization or reduced-dimensionality approaches, we can address our problem of synonymy, of different things expressing the same taste, deal with our problem of overfitting, and get some computational advantages, at least at run-time.
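To make that run-time advantage concrete, here is a minimal sketch (not from the lecture; the factor values are made up for illustration) of why scoring is cheap once a model is trained: each user and item is reduced to a short latent-factor vector, so predicting a score is just a k-dimensional dot product, no matter how large the original ratings matrix was.

```python
import numpy as np

# Hypothetical learned latent-factor model with k = 3 dimensions
# (in practice k might be 20-200, but still far smaller than the item count).
user_factors = np.array([0.8, -0.2, 0.5])        # one user's taste vector
item_factors = np.array([[0.9, 0.1, 0.3],        # factor vectors for 3 items
                         [-0.4, 0.7, 0.2],
                         [0.5, -0.6, 0.9]])

# At run-time, scoring every candidate item is a single small
# matrix-vector product over the latent dimensions.
scores = item_factors @ user_factors
top_item = int(np.argmax(scores))
```

Two items that express the same taste end up with similar factor vectors, which is how the synonymy problem gets absorbed into the representation itself.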

Now SVD itself is growing in use, and indeed some of the other approaches that we're going to talk about have risen to the point where they rival the item-item algorithm as among the most popular algorithms in use today.

You'll hear more about those in upcoming lectures.

As we move forward, you're going to see a bunch of the details that come together with this, including how we prepare the matrix, gradient descent approaches, and probabilistic factorization.
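As a preview of the gradient descent idea covered later, here is a minimal sketch, assuming a toy set of (user, item, rating) triples invented for illustration: stochastic gradient descent fits user and item factor matrices by repeatedly nudging them to reduce the error on each observed rating, with a small regularization term to guard against overfitting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy observed ratings as (user, item, rating) triples -- illustrative only.
ratings = [(0, 0, 5.0), (0, 1, 1.0), (1, 0, 4.0),
           (1, 2, 2.0), (2, 1, 1.0), (2, 2, 5.0)]
n_users, n_items, k = 3, 3, 2
P = rng.normal(scale=0.1, size=(n_users, k))    # user factor matrix
Q = rng.normal(scale=0.1, size=(n_items, k))    # item factor matrix

lr, reg = 0.05, 0.02                            # learning rate, regularization
for epoch in range(200):
    for u, i, r in ratings:
        err = r - P[u] @ Q[i]                   # error on this one rating
        P[u] += lr * (err * Q[i] - reg * P[u])  # gradient step on user factors
        Q[i] += lr * (err * P[u] - reg * Q[i])  # gradient step on item factors

# Training error on the observed ratings after fitting.
rmse = np.sqrt(np.mean([(r - P[u] @ Q[i]) ** 2 for u, i, r in ratings]))
```

The key property is that only observed ratings drive the updates, so unlike classical SVD there is no need to fill in or impute the missing entries of the matrix first.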

And we have a whole bunch of guest lectures later in this course that look

at the next step as we hybridise matrix factorization with other techniques.

I hope you'll find it interesting.