Callbacks are a useful piece of functionality in TensorFlow that lets you have control over the training process. There are two main flavors of callback: built-in callbacks, which are pre-built functions that let you do things like saving checkpoints or early stopping, and custom callbacks, where you can override the callback class to do whatever you want. In this video, I'm going to look at the built-in callbacks, and then later you'll learn how to do the custom ones. So, in summary, callbacks are designed to give you some type of functionality while you're training: every epoch, you can effectively have code that executes to perform a task. What that task is is up to you. There's a tf.keras.callbacks.Callback class that you'll subclass, so the pattern you've been looking at in this course for subclassing existing objects will also work for this. They're particularly useful in helping you understand the model state during training, saving you valuable time as you're optimizing your model. Here's the anatomy of a callback class. As with any class in Python, you define it to extend an existing class, and you have local variables initialized in the init function. For callbacks, you have the on_epoch_begin function that you can override, which, as its name suggests, gets called at the beginning of every epoch. Of course, similarly, there's the on_epoch_end function that gets called at the end of each epoch. You may have seen or used these already, and we did a little bit with them in the TensorFlow in Practice specialization, if you've studied that. But there are lots of others, including those that can be used when you're training, testing, or running predictions with the model. So, for example, when you run a prediction, you can have a callback that happens at the beginning or the end by overriding the on_predict_begin method or the on_predict_end method, respectively, and you can do similar for training and testing.
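That anatomy can be sketched like this; it's a minimal illustration, and the class name and the timing task are my own choices rather than anything from the video:

```python
import time

import tensorflow as tf


class EpochTimer(tf.keras.callbacks.Callback):
    """A tiny custom callback that reports how long each epoch takes."""

    def __init__(self):
        # Local variables are initialized in __init__, as with any Python class.
        super().__init__()
        self.epoch_start = None

    def on_epoch_begin(self, epoch, logs=None):
        # Called at the beginning of every epoch.
        self.epoch_start = time.time()

    def on_epoch_end(self, epoch, logs=None):
        # Called at the end of every epoch.
        print(f"Epoch {epoch} took {time.time() - self.epoch_start:.2f}s")
```

You'd then pass an instance of this class in the callbacks list of fit, evaluate, or predict.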
Also, when running batches, you can have on_predict_batch_begin or on_predict_batch_end so that you can execute code batch by batch while predicting, and of course, you can do similar for training and testing. This, of course, leads to the question: where would you use callbacks? Well, the model methods that involve training, evaluation, or prediction use them; you simply specify them using the callbacks parameter. So now let's take a look at some of the built-in callbacks. We'll start with TensorBoard, which, if you aren't familiar with it, provides a suite of visualization tools for TensorFlow. It lets you visualize your experiments and track metrics like loss and accuracy, as well as view the model graph. You can learn more at tensorflow.org/tensorboard. Using the TensorBoard callback is super simple. You simply define it and then start training. It's defined by creating an instance of the TensorBoard callback and specifying the desired log directory. TensorFlow then saves the details to that directory, and then, when TensorBoard is pointed at the logs directory, it does its thing, such as plotting accuracy and loss. It can even be used in Colab by loading it as an extension, as you can see here. Next, take a look at ModelCheckpoint, where the model's details can be saved out, epoch by epoch, for later inspection, or so we can monitor progress through them. The ModelCheckpoint class saves the model details for you, with a lot of parameters that you can use to fine-tune it. So let's look at some examples. Here's an example of using it in the model's fit method. Using the callbacks parameter, I specified that I want a ModelCheckpoint with the model file being called model.h5. Then, during the training process, you can see that the model is getting saved out epoch by epoch. If I don't want the entire model structure and only the weights, I can do so by setting the save_weights_only parameter to True.
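As a sketch of that setup, assuming a tiny model on random data purely for illustration (the model, data, and directory names are not from the video):

```python
import numpy as np
import tensorflow as tf

# A tiny illustrative model; any compiled Keras model works the same way.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

x = np.random.rand(64, 4).astype("float32")
y = np.random.rand(64, 1).astype("float32")

# TensorBoard callback: just point it at a log directory, then later run
#   tensorboard --logdir logs
tensorboard_cb = tf.keras.callbacks.TensorBoard(log_dir="logs")

# ModelCheckpoint: saves the model to model.h5 after every epoch.
# (Pass save_weights_only=True to save only the weights instead.)
checkpoint_cb = tf.keras.callbacks.ModelCheckpoint("model.h5")

model.fit(x, y, epochs=2, verbose=0,
          callbacks=[tensorboard_cb, checkpoint_cb])
```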
Or, if I only want to save when I reach optimal values, I can do so by setting save_best_only to True; then the model will be saved whenever the value I specify in the monitor parameter improves. As you can see here, in the first epoch the value started as infinite and ended at 0.65278, so it got saved; then in the second epoch it improved, so it got saved, and so on. If at some point your val_loss starts to increase, the model checkpoints, of course, will not be saved. In these examples, I've been showing the native Keras HDF5 format, but of course, you can also use SavedModel, which is the standard TensorFlow format. And as the name of the file is just specified using text, you can actually format values into the name, so you could have separate weights saved out per epoch in separate .h5 files. Simply by using the epoch value, or other metrics such as the validation loss value, you can format the file name. So you can see here the last two digits of the epoch are used, so the files are weights-01, weights-02, and so on. The next built-in callback you can use is EarlyStopping, which is useful for helping you stop training when it hits a metric that you want, where, for example, ten epochs might be enough, but you're training for 100. It can also be used the other way: if there's not enough of a noticeable improvement, early stopping can end training, saving you a lot of time. So, for example, let's look at a scenario that could be used to prevent overfitting. Here we want to watch the validation loss and ensure that it continues to go down. So we set the monitor property to val_loss, and then we'll set patience to three. The idea here is that once we hit the best value, we'll log that, and we'll wait for this number of epochs, i.e., three, to see if the value improves. So here you can see that at epoch 15, the validation loss was at its smallest, after which it began to increase. By epoch 18, three epochs later, it was still worse than at 15.
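Putting those options together, a sketch might look like this (the parameter values and file-name pattern are illustrative):

```python
import tensorflow as tf

# Save only when val_loss improves; the {epoch:02d} pattern formats the
# epoch number into the file name: weights-01.h5, weights-02.h5, ...
checkpoint_cb = tf.keras.callbacks.ModelCheckpoint(
    "weights-{epoch:02d}.h5",
    monitor="val_loss",
    save_best_only=True,
)

# Stop training once val_loss has not improved for 3 consecutive epochs.
early_stop_cb = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",
    patience=3,
)
```

Both would then be passed together in the callbacks list of model.fit.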
So training stops. If you don't want to lose the weight values from the best epoch, you can set restore_best_weights to True. So in our case, even though we stopped at 18, we'll have the weights restored to where they were at 15. There are other parameters you can play with, but the mode one is crucial to ensure that you're following your monitored values correctly: for a loss that you want to minimize, you would set the mode to min; other metrics might require you to maximize the value, so you can change that with this property. Another super useful callback is the CSVLogger which, as its name suggests, will log your training results out to a CSV file. So, for example, when using it like this, you'll have a file containing the epoch number, accuracy, loss, validation accuracy, and validation loss stored for you. So that's it for this quick look at the different built-in callbacks. In the next video, you'll see how to create custom callbacks, and after that, you can try the code out for yourself.
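Here's one last sketch combining those EarlyStopping options with a CSVLogger; the tiny model, random data, and file name are illustrative assumptions, not from the video:

```python
import numpy as np
import tensorflow as tf

# Roll back to the best epoch's weights when training stops early;
# mode="min" says a smaller val_loss is better (use "max" for accuracy).
early_stop_cb = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",
    patience=3,
    mode="min",
    restore_best_weights=True,
)

# CSVLogger writes epoch, loss, val_loss, etc. to a CSV file.
csv_cb = tf.keras.callbacks.CSVLogger("training_log.csv")

# A tiny illustrative run so the logger has something to record.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss="mse")

x = np.random.rand(64, 4).astype("float32")
y = np.random.rand(64, 1).astype("float32")
model.fit(x, y, validation_split=0.25, epochs=3, verbose=0,
          callbacks=[early_stop_cb, csv_cb])
```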