Welcome to the specialization on advanced techniques with TensorFlow. You're probably familiar with what are called sequential machine learning models: neural networks or models where you provide some input, and that input is run step-by-step through a sequence of layers, like a feed-forward neural network, to get a prediction. It turns out that for more complex applications, you may need to build more complex neural networks as well. For example, models that use multiple inputs or multiple outputs, models with loops in them where the processing isn't just a linear sequence, or models with a custom loss function. In this specialization, you get hands-on experience building these kinds of complex models, as well as some of the more advanced applications, and you'll learn to do it all using TensorFlow. To teach the specialization, I'm delighted to introduce again Laurence Moroney, who is a developer advocate at Google, instructor of the other TensorFlow specializations, and also author of the wonderful book AI and Machine Learning for Coders. Hi Laurence.

Hey Andrew, how are you doing? Thanks so much, and thanks also for your contribution to the book; the foreword was very popular, people loved it, and I really appreciate it. You're exactly right in what you mentioned about the course and what's in it. To me it's particularly exciting because when I first started on my machine learning journey, I, like many developers, learned how to build things in sequence: to build layers in a neural network, or convolutions, in a sequence. But then I had the question, how do I do, for example, something like object detection? To train for object detection, I need to detect an object and its bounding box, so I'd have multiple outputs. But with a sequential API like that, I couldn't figure out how that would be done. That's why I'm really super excited about this specialization, because in many ways I'd like to describe it as taking a small step backwards in order to take a huge leap forward. That huge leap forward is to see exotic models exactly as you described. Often when I pick up arXiv papers and start reading about a particular model, I see there are multiple inputs, there are multiple outputs, there are loops, and there are other things going on. The nice thing about this course is that we're going to get hands-on. We're going to learn how to use something called the Functional API in TensorFlow to allow us to do exactly that. Then once we've done that, we'll go forward into doing things like distributed training, or looking at things like, as I mentioned, object detection, and things like autoencoders and variational autoencoders, all of which have these more exotic architectures. It's a really exciting time to be doing this, and I'm really looking forward to what learners will take away from it.

This specialization will teach you some of the more advanced features in TensorFlow and in the deep learning world today. In detail, in Course 1, you create custom models, layers, and loss functions. This is exciting because you'll no longer be limited to the model, layer, and loss function types that are built into TensorFlow. Then in Course 2, we're going to actually crack open the training loop. We're probably used to seeing things like model.fit, and then everything is just done for us. We'll actually crack that open, take a look under the hood, and see how things work within the training loop.
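As a taste of what those two ideas look like in practice, here is a minimal sketch, not taken from the course itself and assuming TensorFlow 2.x with tf.keras: a Functional API model with two output heads, a class prediction and a bounding box, trained with a hand-written loop using tf.GradientTape instead of model.fit. The layer sizes and dummy data are purely illustrative.

```python
import tensorflow as tf

# Build the model with the Functional API: one image input, two output heads.
inputs = tf.keras.Input(shape=(64, 64, 3), name="image")
x = tf.keras.layers.Conv2D(16, 3, activation="relu")(inputs)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
class_out = tf.keras.layers.Dense(10, activation="softmax", name="class")(x)
box_out = tf.keras.layers.Dense(4, name="bounding_box")(x)
model = tf.keras.Model(inputs=inputs, outputs=[class_out, box_out])

# One loss per output, combined into a single scalar for the optimizer.
class_loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
box_loss_fn = tf.keras.losses.MeanSquaredError()
optimizer = tf.keras.optimizers.Adam()

# The "cracked open" training step: roughly what model.fit does for you, by hand.
@tf.function
def train_step(images, class_labels, box_labels):
    with tf.GradientTape() as tape:
        class_pred, box_pred = model(images, training=True)
        loss = class_loss_fn(class_labels, class_pred) + box_loss_fn(box_labels, box_pred)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

# Dummy data, just to show the loop running.
images = tf.random.uniform((8, 64, 64, 3))
class_labels = tf.random.uniform((8,), maxval=10, dtype=tf.int32)
box_labels = tf.random.uniform((8, 4))
for epoch in range(2):
    print("epoch", epoch, "loss", float(train_step(images, class_labels, box_labels)))
```

The key point is that a Sequential model can only return a single output at the end of a single stack of layers, while the Functional API lets you wire up any graph of layers and then decide exactly how the separate losses are combined inside your own training step.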
From there we'll be able to explore how things like distributed training work, with things like distribution strategies for training across multiple GPUs or multiple cores in a TPU. We'll see, within the training loop, how things like the loss can be reduced across each of the cores as you're training. That really allows us to take it to the next level. Then as we get down to Course 3, we'll be doing some interesting stuff.

[Inaudible] and all of those things are absolutely important when you need to scale up your models, so it's incredible that you learn how to do this yourself using TensorFlow. Then in Course 3, you take the techniques you learned in Courses 1 and 2 and apply them to solving advanced computer vision problems: problems in image segmentation, object detection, model interpretation, all of these techniques that I've found very useful in many commercial applications. You get to practice building all of these things yourself as well.

I had way too much fun working on that course. For example, with object detection, I built a zombie detector, and that's going to be one of the things that learners will do. That then prepares you for Course 4, where we're going to look at some of the generative deep learning that's out there. We'll look at things like style transfer, including neural style transfer, as well as autoencoders and variational autoencoders. Then we'll wrap it up with a taster of GANs. There's an amazing specialization from DeepLearning.AI that really goes deep into GANs, and what people will be looking at in Course 4 on GANs will really whet their appetite to go on and do that. We'll be building some fun stuff with GANs also.

Laurence, this is an advanced TensorFlow specialization; what are the prerequisites?

That's a great question, Andrew, thank you. I really want to keep it as accessible as possible to developers. Obviously you're going to need to know a bit of Python, and you're going to need to know some TensorFlow. Because we do go back to basics at the beginning of the course when we're looking at the Functional API, you don't need to be a deep expert in TensorFlow, but I would strongly recommend that people have done the previous specialization, the TensorFlow Developer specialization from DeepLearning.AI, because that will give them a really solid basic understanding of things like the sequential models and convolutions that we'll be building on in detail during Course 3.

Thanks, Laurence. I think this body of knowledge will be very useful to many learners. Let's get started.
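For learners who want a preview of the distribution strategies mentioned above, here is one more minimal sketch, again not the course's own code, assuming TensorFlow 2.x on a machine where tf.distribute.MirroredStrategy can see zero or more GPUs. It shows only the basic pattern of creating the model inside strategy.scope() so that model.fit can split each batch across replicas and reduce the per-replica losses for you; the course goes further, including doing the same thing inside a custom training loop.

```python
import tensorflow as tf

# MirroredStrategy mirrors the model onto each visible GPU (or falls back to CPU),
# splits every batch across the replicas, and reduces losses and gradients for you.
strategy = tf.distribute.MirroredStrategy()
print("Number of replicas:", strategy.num_replicas_in_sync)

# Variables (the model and optimizer state) must be created inside the strategy's scope.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# model.fit now runs the training loop across all replicas, averaging the loss.
x = tf.random.uniform((256, 32))
y = tf.random.uniform((256, 1))
model.fit(x, y, epochs=2, batch_size=64)
```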