Let's talk a little bit about no-code tools and how they're making machine learning and deep learning accessible to more and more people, so that people can take ideas and turn them directly into products and solutions that can be widely used. A great example of a no-code tool that makes deep learning relatively easy is Teachable Machine. I'm showing it to you here: this is a Google front end that relies on the TensorFlow engine, which is a package that contains a lot of the logic for deep learning. Teachable Machine is called that because you can teach the machine, like any machine learning algorithm, and it's a great front end for playing around a little bit with deep learning. So we're going to use Teachable Machine here. As you see, you can train a computer in this context to recognize images, sounds, or poses. I'm going to get started and show you how it works. All right, so we're on Teachable Machine here, and we have a number of different models; it says more are coming soon. We have some options here to do different kinds of deep learning: image classification, audio classification, and pose classification, for instance, if you want to teach it to recognize poses. We'll start with images, so we'll try an image project; I'm going to click on that. Okay, so what we have here is an interface where we can train the model to classify, to recognize, different types of images. There can be a number of different categories of images; I'm just going to do two here, and I'll start with a simple example, which is teaching it to recognize cats versus dogs. So, as we talked about with machine learning, what you need as a starting point is a data set, and that data set has to have a lot of examples.
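One way to picture the data set we're about to build is as folders of images where the folder name is the class label. This is just an illustrative sketch of that idea in Python, not how Teachable Machine itself ingests data; the layout and function name are hypothetical:

```python
from pathlib import Path

def collect_labeled_images(root: str) -> list[tuple[str, str]]:
    """Pair each image path with the name of the folder it sits in.

    Assumes a hypothetical layout like root/cat/*.jpg and root/dog/*.jpg,
    where the folder name serves as the class label.
    """
    pairs = []
    for class_dir in sorted(Path(root).iterdir()):
        if class_dir.is_dir():
            for image_path in sorted(class_dir.glob("*.jpg")):
                pairs.append((str(image_path), class_dir.name))
    return pairs
```

Uploading dogs to category one and cats to category two in the Teachable Machine interface accomplishes the same labeling, just without any code.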
So it's going to have a lot of examples, in this case images of pets, and each pet will be labeled as being a cat or a dog. Once I feed that data into the deep learning algorithm, it should, from that point on, be able to identify in any future image what a cat is or what a dog is. The way I'm going to do that is to upload images of dogs into category one and images of cats into category two. That's implicitly saying: here are a bunch of images of cats and dogs to start with as a training data set. So I'm going to put my dogs into category one and upload a number of them here. All right, so now these pictures of dogs are uploading to Teachable Machine, and I'm going to do the same thing down here for the cats. The more examples I can provide, the better the model will perform. I'm going to use a relatively small number of images here, just for example's sake, but in general, for an engine like this, the more images and examples you can provide, the better it's going to perform. So now I've given Teachable Machine examples of two types of images, dogs and cats, and I'm going to train the model. I'm going to click on this just to show you what's going on in there: a variety of things. These are called hyperparameters; you don't need to do anything with them, but I'm showing you what's under the hood, so to speak. These are just hyperparameters that you can tune to make the model work in different ways. When I train the model now, it's going to prepare the training data and then run the model. It's going to take a while to think about this, and now it's running, working through the images. It says the model is trained now, and what I'm going to do is click on file here.
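To give a feel for what hyperparameters such as the number of epochs and the learning rate actually control, here is a deliberately tiny, stdlib-only sketch of gradient-descent training on one weight. This is a toy stand-in for what any trainer does under the hood, not Teachable Machine's actual code; all names here are illustrative:

```python
def train_one_weight(data, epochs=50, learning_rate=0.1):
    """Fit y ~ w * x by gradient descent on mean squared error.

    Each epoch nudges the weight in the direction that reduces the
    error; the learning rate controls the size of each nudge.
    """
    w = 0.0
    n = len(data)
    for _ in range(epochs):
        # Gradient of the mean squared error with respect to w.
        grad = sum(2 * (w * x - y) * x for x, y in data) / n
        w -= learning_rate * grad
    return w

# With data following y = 2x, enough epochs drive w close to 2.0;
# too few epochs or a tiny learning rate leave it far from converged.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = train_one_weight(data, epochs=50, learning_rate=0.1)
```

Changing those two knobs in this toy is the same kind of trade-off the advanced settings in a tool like Teachable Machine expose: more epochs or a better-chosen learning rate can improve the fit, at the cost of training time or stability.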
And now what I have is a trained model that can recognize cats or dogs. If I drop an image in here of a cat or a dog, it should be able to automatically recognize which one it is. And this is not, of course, one of the images from the training data; if I drop a brand new image in here of a cat or a dog, it should do a reasonable job of recognizing a cat versus a dog. Let me go back to a folder; I'm going to click over, because I have a test data set available as well. I'm going to click on that, and let's say this looks like a dog picture. If I choose that for upload, it's going to say this is class one, which again is a dog, with a relatively high degree of certainty. If I choose another image here, let's see, this is again a dog; I'll do that one as well, and then I'll do a cat maybe after that. On this one it's 100% sure it's a dog; there was no human in the picture that time, so maybe it was a little easier for it to identify. Now if I drop a cat in, it'll flip over and recognize that it's a cat. So the point is that with an interface like this, I had to do very little in the way of coding and very little in the way of really understanding the machine learning algorithm. It's really about getting the data into shape and then passing it into the interface in a way that it can train itself. And from that point on, you can just start to deploy it on new data. You can even export the model in a way that it can be used more widely. So there are a lot of applications like this that are making it easier and easier to use deep learning for people who have ideas but don't necessarily have the machine learning expertise.
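Those percentages next to each class are probabilities: image classifiers like this one typically end in a softmax layer that turns raw class scores into values that sum to one, and the highest one is reported as the prediction. A minimal sketch of that final step, with illustrative names (this is not code from the Teachable Machine export):

```python
import math

def softmax_confidence(scores, labels):
    """Turn raw class scores into probabilities and pick the top label.

    Subtracting the max score before exponentiating is a standard
    numerical-stability trick; it does not change the probabilities.
    """
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    return labels[best], probs[best]

# A score of 4.0 for "dog" vs 0.5 for "cat" becomes roughly
# 97% confidence in "dog" -- the kind of readout shown in the UI.
label, confidence = softmax_confidence([4.0, 0.5], ["dog", "cat"])
```

When you export the trained model for wider use, running it elsewhere amounts to feeding an image through the network and applying exactly this kind of readout to its output.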