Welcome to the fourth and final course in this specialization focused on using advanced techniques in TensorFlow. In this course you'll leverage the techniques you applied in previous courses to solve some exciting problems in generative deep learning. What is generative deep learning? I still remember the first GAN paper, written by my former student Ian Goodfellow, which generated highly pixelated images of people. Now, several years later, GANs are generating images of people that are really hard to tell apart from real photos. In addition to GANs, there are other flavors of generative deep learning algorithms as well, such as VAEs, or variational autoencoders, and also style transfer. So in this course, you'll learn how to build all of these different types of models using TensorFlow. I am really excited to have with us again Laurence to teach this course.

Thanks, Andrew. I'm really excited to be here. I think generative deep learning is one of my favorite areas in deep learning. It's such a huge and broad area, with, I think, so much still to be discovered. This course is really fun in that it gives an introduction to several of the major areas in generative deep learning. We'll start with style transfer. We've probably all seen those images where somebody takes a photograph and renders it in the style of Picasso or Matisse or other artists, and we'll decompose that, break it down, and see how it's done so that we can start creating our own. Then after that, as you mentioned, we'll look into autoencoders and variational autoencoders and see how they work. We'll have a lot of fun with that: in the exercise at the end of that section, you, the learner, will take a dataset of anime faces, train on it, and see how to generate and create your own anime faces.
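The style transfer Laurence describes, keeping a photo's content while taking on a painting's style, is usually expressed as a loss over CNN feature maps: a content term that matches features directly, and a style term that matches Gram matrices of features. Here is a minimal NumPy sketch of that loss; the function names, weights, and toy feature shapes are illustrative, not the course's TensorFlow code.

```python
import numpy as np

def gram_matrix(features):
    """Style is captured by feature correlations: the Gram matrix of a
    (height*width, channels) feature map, normalized by its spatial size."""
    h_w, c = features.shape
    return features.T @ features / h_w

def style_transfer_loss(content_feat, style_feat, generated_feat,
                        content_weight=1.0, style_weight=1e-2):
    """Weighted sum of a content term (match content features directly)
    and a style term (match Gram matrices of the style features)."""
    content_loss = np.mean((generated_feat - content_feat) ** 2)
    style_loss = np.mean(
        (gram_matrix(generated_feat) - gram_matrix(style_feat)) ** 2)
    return content_weight * content_loss + style_weight * style_loss

# Toy feature maps standing in for CNN activations (e.g. from a VGG layer).
rng = np.random.default_rng(0)
content = rng.normal(size=(64, 8))
style = rng.normal(size=(64, 8))
# Starting the "generated" image from the content: zero content loss,
# nonzero style loss that optimization would then drive down.
loss = style_transfer_loss(content, style, content.copy())
```

In practice the generated image's pixels are optimized directly against this loss, with the feature maps recomputed each step.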
They're a fun and interesting dataset because, first of all, they're drawn rather than photographed. Secondly, anime tends to have such a huge variety in eye color, hair color, and things like that, and we'll see, epoch by epoch, how the neural network learns to create these faces and make them look realistic.

Yeah. Laurence, I know you've been a fan of anime for a long time, ever since you made that trip to Japan some time back, and you came back and gave me this gift of a Japanese comic anime book that teaches machine learning. I took Japanese lessons for four years when I was in high school, but unfortunately my Japanese isn't fluent enough to read it. I thought it was a cool, fun way to write a machine learning book. So thank you for this gift.

I think, Andrew, your Japanese is probably much better than mine, but at least I could enjoy the book by looking at the diagrams and seeing things like loss functions and neural networks. I actually have my copy right here, as you can see. It's a great book. Yeah, I actually cleared out the bookstore; they only had two copies left, so I got one for myself and one for you that day. Then one of the things for you, the learner, as you're doing this course: as we're creating variational autoencoders, one of the exercises is actually to generate anime faces. It's a lot of fun as you see the anime faces evolve epoch by epoch while the network learns. It was really interesting as I was building it, because for the first several dozen epochs every face had red hair and blue eyes. But over time it began to change and learn, and one of the nice things about anime characters is the variety: there's lots of blue hair and green hair and so on, as well as different eye colors. And the things that generative adversarial networks can do are incredible.
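The variational autoencoder behind the anime-faces exercise has two distinctive pieces: it samples its latent codes with the reparameterization trick, so the sampling step stays differentiable, and it adds a KL term that pulls the encoder's distribution toward a standard normal prior. A minimal NumPy sketch of those two pieces, with illustrative names and shapes rather than the course's TensorFlow code:

```python
import numpy as np

def reparameterize(mean, log_var, rng):
    """Reparameterization trick: z = mean + sigma * eps, where eps is
    standard normal noise, keeping gradients flowing to mean and log_var."""
    eps = rng.normal(size=mean.shape)
    return mean + np.exp(0.5 * log_var) * eps

def kl_divergence(mean, log_var):
    """KL term of the VAE loss, per example: pulls the encoder's Gaussian
    toward a standard normal prior (zero when mean=0, variance=1)."""
    return -0.5 * np.sum(1 + log_var - mean**2 - np.exp(log_var), axis=-1)

rng = np.random.default_rng(42)
mean = np.zeros((4, 2))     # a batch of 4 two-dimensional latent means
log_var = np.zeros((4, 2))  # log-variance 0, i.e. sigma = 1
z = reparameterize(mean, log_var, rng)
kl = kl_divergence(mean, log_var)
```

The full VAE loss adds a reconstruction term (how well the decoder rebuilds each face from its sampled z) to this KL term.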
As you mentioned earlier on, that includes being able to generate very realistic faces. We're not going to go that deep in the course, but what we are going to do is look at the overall architecture. We're going to look at how the generator can generate images and how the discriminator can be used to try to learn the difference between fake images and real images, and then how a feedback loop around that makes the generated images become more and more realistic. The final exercise that you, the learner, are going to do is based on the Hands MNIST dataset. If you did the earlier specialization, you may remember that one of the exercises there was classifying hands in the Hands MNIST dataset. So now what you're going to do with your GAN is generate hands, and then pass them to a classifier to see if it can recognize them.

One of the most interesting developments to me in the rise of generative deep learning algorithms is how, in addition to generating really interesting, sometimes beautiful artwork or images, they're also being used to generate data for supervised learning algorithms. Take medical imaging: we can never seem to get enough medical images, X-rays, pathology slides, and many research groups are now using generative algorithms to synthesize X-rays that never existed to feed into a supervised learning algorithm, thus improving the performance of the convnets being used for medical diagnosis.

That's amazing. It is sometimes difficult and very expensive to get data, so a great use for these generative algorithms, as I mentioned, is to create realistic data.

One of the things that I know you've thought a lot about, Laurence, is that in this course we'll be handing learners very powerful tools, and we've seen in the media mentions of deepfakes and other potentially adverse uses of these technologies.
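The generator/discriminator feedback loop described above comes down to two complementary losses: the discriminator is trained to score real images high and fakes low, while the generator is trained through the discriminator, rewarded when its fakes are scored as real. A minimal NumPy sketch of those losses, with illustrative names rather than the course's TensorFlow implementation:

```python
import numpy as np

def sigmoid(x):
    """Squash a raw score into a (0, 1) probability of 'real'."""
    return 1 / (1 + np.exp(-x))

def discriminator_loss(real_scores, fake_scores):
    """Binary cross-entropy: the discriminator wants real images scored
    as real (high) and generated images scored as fake (low)."""
    eps = 1e-7  # avoids log(0)
    real = sigmoid(real_scores)
    fake = sigmoid(fake_scores)
    return -np.mean(np.log(real + eps) + np.log(1 - fake + eps))

def generator_loss(fake_scores):
    """The generator's feedback loop: its loss is low exactly when the
    discriminator is fooled into scoring the fakes as real."""
    eps = 1e-7
    return -np.mean(np.log(sigmoid(fake_scores) + eps))
```

Training alternates between the two: one step improving the discriminator on a batch of real and generated images, one step improving the generator against the updated discriminator, which is what pushes the generated images to become more and more realistic.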
So one of the things I hope is that everyone learning this will only use these tools in ways that are honest, transparent, ethical, and that move humanity forward.

Absolutely, I agree. I think one of my driving forces for this course is to have as many people as possible learn how these models work. When such powerful tools are only in the hands of a few, the harm that the people who would misuse them can do is possibly magnified. But when we make these tools and techniques, the knowledge of how to create them and debug them, and the ability to spot how they could be misused, available to as many people as possible, that hopefully is one way of mitigating bad uses of them. So this course will have lots of exciting examples. I hope all of you enjoy learning how to use TensorFlow to build generative deep learning algorithms.