There are a lot of AI and machine learning techniques today. And while supervised learning, that is, learning A to B mappings, is the most valuable one, at least economically, today, there are many other techniques that are worth knowing about. Let's take a look. The best known example of unsupervised learning is clustering. Here's an example. Let's say you run a grocery store that specializes in selling potato chips. You collect data on different customers, and keep track of how many packets of potato chips each customer buys, as well as the average price per packet that person paid. So you sell some low-end, cheaper potato chips, as well as some high-end, more expensive packets, and different people may buy different numbers of packets in a typical trip to your grocery store. Given data like this, a clustering algorithm will say that it looks like you have two clusters in your data. Some of your customers tend to buy relatively inexpensive potato chips, but buy a lot of packets. If your grocery store is near a college campus, for example, you may find a lot of college students that are buying cheaper packets of potato chips, but they sure buy a lot of them. And there's a second cluster in this data: a different group of shoppers that buy fewer packets of potato chips, but buy more expensive packets. A clustering algorithm looks at data like this, automatically groups the data into two or more clusters, and is commonly used for market segmentation. It can help you discover, say, that you have a college student audience that buys a certain type of potato chips, and a working professional audience that buys fewer potato chips but is willing to pay more, and this can help you market differently to these market segments. The reason this is called unsupervised learning is the following. 
Whereas a supervised learning algorithm learns an A to B mapping, and you have to tell the algorithm what is the output B that you want, with unsupervised learning you don't tell the AI system exactly what you want. Instead you give the AI system a bunch of data, such as this customer data, and you ask it to find something interesting in the data, something meaningful in the data. In this case, a clustering algorithm doesn't know in advance that there's a college student demographic and a working professional demographic. Instead, it just tries to find what the different market segments are, without being told in advance what they are. So unsupervised learning algorithms, given data without any desired output labels, without the target label B, can automatically find something interesting about the data. One example of unsupervised learning that I have worked on was the slightly infamous Google cat. In this project, I had my team run an unsupervised learning algorithm on a very large set of YouTube videos, and we asked the algorithm to tell us what it found in YouTube videos. And one of the many things it found in YouTube videos was cats, because, somewhat stereotypically, YouTube apparently has a lot of cat videos. But it was a remarkable result that, without being told in advance that it should find cats, the AI system, the unsupervised learning algorithm, was able to discover the concept of a cat all by itself, just by watching a lot of YouTube videos and discovering that, boy, there are a lot of cats in YouTube videos. It's hard to visualize exactly what an AI algorithm is thinking sometimes, but the picture on the right is a visualization of the concept of the cat that the system had learned. Even though supervised learning is an incredibly valuable and powerful technique, one of the criticisms of supervised learning is that it just needs so much labeled data. 
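For readers who want to see the clustering idea concretely, here is a minimal sketch, in Python, of a plain k-means clustering algorithm run on made-up shopper data like the potato chip example above. All the numbers, group names, and the initialization scheme are invented for illustration; the key point is that the algorithm is never given any labels, yet it recovers the two market segments on its own.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up shopper data: (packets bought per trip, average price per packet).
# The two groups below mimic the "college student" and "working professional"
# segments, but the clustering algorithm is never told about them.
students = rng.normal([10.0, 1.5], [1.0, 0.2], size=(50, 2))
professionals = rng.normal([3.0, 4.0], [0.5, 0.3], size=(50, 2))
X = np.vstack([students, professionals])

def kmeans(X, k=2, iters=20):
    """Plain k-means: group unlabeled points into k clusters."""
    # Simple deterministic initialization at the data's corners.
    centers = np.array([X.min(axis=0), X.max(axis=0)])[:k]
    for _ in range(iters):
        # Assign each shopper to the nearest cluster center.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of its assigned shoppers.
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

labels, centers = kmeans(X)
print(centers)  # two centers, roughly (3, 4.0) and (10, 1.5)
```

The discovered centers land near the true group averages, which is exactly the "find the market segments without being told what they are" behavior described above.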
For example, if you're trying to use supervised learning to get an AI system to recognize coffee mugs, then you may give it 1,000 pictures of coffee mugs, or 10,000 pictures of coffee mugs. And that's just a lot of pictures of coffee mugs to be giving our AI systems. For those of you that are parents, I can almost guarantee that no parent on this planet, no matter how loving and caring, has ever pointed out 10,000 unique coffee mugs to their children to try to teach them what a coffee mug is. So, AI systems today require much more labeled data to learn than a human child or most animals would. This is why AI researchers hold out a lot of hope for unsupervised learning as a way, maybe in the future, for AI to learn much more effectively, in a more human-like, more biologically plausible way, from much less labeled data. Now, we have pretty much no idea how the biological brain works, so realizing this vision will take major breakthroughs in AI that none of us knows today how to achieve. But many of us hold a lot of hope for the future of unsupervised learning. Having said that, unsupervised learning is valuable today. There are some specific applications in natural language processing, for example, where unsupervised learning helps the quality of web search quite a bit. But the value created today by unsupervised learning is still much smaller than the value created through supervised learning. Another important AI technique is transfer learning. Let's look at an example. Let's say you've built a self-driving car, and you've trained your AI system to detect cars. But then you deploy your vehicle to a new city, and somehow this new city has a lot of golf carts traveling around, so you need to also build a golf cart detection system. 
You may have trained your car detection system with a lot of images, say 100,000 images, but in this new city where you've just started operating, you may have a much smaller number of images of golf carts. Transfer learning is the technology that lets you learn from a task A, such as car detection, and use that knowledge to help you on a different task B, such as golf cart detection. Where transfer learning really shines is that, having learned from a very large dataset for car detection, task A, you can now do pretty well on golf cart detection even though you have a much smaller golf cart dataset, because some of the knowledge learned from the first task, of what vehicles look like, what wheels look like, how vehicles move, may be useful also for golf cart detection. Transfer learning doesn't get a lot of press, but it is one of the most valuable techniques in AI today. For example, many computer vision systems are built using transfer learning, and this makes a big difference to their performance. You may also have heard of a technique called reinforcement learning. So, what is reinforcement learning? Let me illustrate with another example. This is a picture of the Stanford autonomous helicopter. It's instrumented with GPS, accelerometers, and a compass, so it always knows where it is. Let's say you want to write a program to make it fly by itself. It's hard to use supervised learning, input/output A to B mappings, because it's very difficult to specify what is the optimal way, the best way, to fly the helicopter when it is in any given position. Reinforcement learning offers a different solution. I think of reinforcement learning as similar to how you might train a pet dog to behave. My family, when I was growing up, had a pet dog. So, how do you train a dog to behave itself? Well, we let the dog do whatever it wanted to do, and then whenever it behaved well we would praise it. 
You'd go, good dog, and whenever it did something bad you'd go, bad dog. And over time it learns to do more of the good dog things, and fewer of the bad dog things. Reinforcement learning takes the same principle and applies it to a helicopter, or to other things. So, we would have the helicopter flying around in a simulator, so it could crash without hurting anyone, but we would let the AI fly the helicopter however it wants, and whenever it flew the helicopter well, we would go, good helicopter, and if it crashed, we'd go, bad helicopter. And then it was the AI's job to learn how to fly the helicopter so as to get more of the good helicopter rewards, and fewer of the bad helicopter negative rewards. More formally, a reinforcement learning algorithm uses a reward signal to tell the AI when it's doing well or poorly. Whenever it's doing well, you give it a large positive number, a large positive reward, and whenever it's doing a really bad job, you send it a negative number, a negative reward. And it's the AI's job to automatically learn to behave so as to maximize the rewards. So, good dog corresponds to giving a positive number, and bad dog, or bad helicopter, corresponds to giving a negative number, and the AI will learn to select behaviors that result in large positive rewards. Let me show you a video of the Stanford autonomous helicopter after we did this. This is a video of the helicopter flying under reinforcement learning control. I was the cameraman that day, and when you zoom out the camera, you see the trees pointing at the sky. So, we actually gave it a reward signal which rewarded the helicopter for flying upside down. And using reinforcement learning, we built one of the most capable autonomous helicopters in the world. In addition to robotic control, reinforcement learning has also had a lot of traction in playing games, games such as Othello or checkers or chess or Go. 
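To make the reward-signal idea concrete, here is a small sketch of tabular Q-learning, one standard reinforcement learning algorithm, on a made-up five-state world. The "good" and "bad" terminal states play the role of the good helicopter and bad helicopter rewards; everything here (the world, the parameters) is an illustrative toy, not the actual helicopter controller.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny world: states 0..4 in a row. Reaching state 4 gives a positive
# reward ("good helicopter"), reaching state 0 a negative one ("bad
# helicopter"). The AI is never told how to behave -- only rewarded.
n_states, n_actions = 5, 2  # actions: 0 = move left, 1 = move right

def step(s, a):
    s2 = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
    if s2 == n_states - 1:
        return s2, +1.0, True   # large positive reward
    if s2 == 0:
        return s2, -1.0, True   # negative reward
    return s2, 0.0, False

Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.9, 0.2

for _ in range(500):            # many cheap simulated episodes
    s, done = 2, False
    while not done:
        # Mostly follow the current best guess, sometimes explore.
        a = int(rng.integers(2)) if rng.random() < eps else int(Q[s].argmax())
        s2, r, done = step(s, a)
        # Q-learning update: nudge Q toward reward + discounted future value.
        Q[s, a] += alpha * (r + gamma * (0.0 if done else Q[s2].max()) - Q[s, a])
        s = s2

policy = Q.argmax(axis=1)
print(policy)  # interior states should choose "right" (action 1)
```

After training, the learned policy heads toward the positive reward from every interior state, which is exactly the "maximize the rewards" behavior described above.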
You might have heard of AlphaGo, which did a very good job of playing Go using reinforcement learning. And reinforcement learning has also been very effective at playing video games. One of the weaknesses of reinforcement learning algorithms is that they can require a huge amount of data. So, if you are playing a video game, a reinforcement learning algorithm can play, essentially, an infinite number of video games, because it's just a computer playing a computer game, and get a huge amount of data to learn how to behave better. Similarly, for games like checkers, it can play a lot of games against itself and, for free, get a huge amount of data to feed into the reinforcement learning algorithm. In the case of the autonomous helicopter, we had a simulator for the helicopter, so it could fly in simulation for a long time to figure out what works and what doesn't work for flying a helicopter. There's a lot of exciting research work being done to make reinforcement learning work even in settings where you may not have an accurate simulator, where it's harder to get these huge amounts of data. Despite the huge amount of media attention on reinforcement learning, at least today it is creating significantly less economic value than supervised learning. But there may be breakthroughs in the future that could change that, and AI is advancing so rapidly that all of us certainly hope there will be breakthroughs in all of these areas that we're talking about. GANs, or generative adversarial networks, are another exciting new AI technique. They were created by my former student Ian Goodfellow. GANs are very good at synthesizing new images from scratch. Let me show you a video generated by a team from NVIDIA that used GANs to synthesize pictures of celebrities. These are all pictures of people that have never existed. But by learning what celebrities look like from a database of celebrity images, the GAN is able to synthesize all these brand new pictures. 
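For the curious, here is a tiny numerical sketch of the adversarial idea behind GANs: a discriminator tries to tell real data from generated data, while the generator tries to fool it, so each side's objective is the other's failure. The one-dimensional "data", the linear models, and all parameter values below are invented purely for illustration; real GANs, like the NVIDIA celebrity model, use deep networks and alternate gradient updates of these two losses.

```python
import numpy as np

rng = np.random.default_rng(0)

def discriminator(x, w, b):
    # Logistic "is this real?" score for 1-D samples (toy stand-in
    # for a deep network).
    return 1.0 / (1.0 + np.exp(-(w * x + b)))

def generator(z, theta):
    # Toy generator: shifts input noise by a learnable offset theta.
    return z + theta

# "Real" data lives near 4.0; the untrained generator produces samples near 0.
x_real = rng.normal(4.0, 0.5, size=256)
z = rng.normal(0.0, 0.5, size=256)
w, b, theta = 1.0, -2.0, 0.0
x_fake = generator(z, theta)

# Discriminator objective: label real samples 1 and fake samples 0.
d_loss = -np.mean(np.log(discriminator(x_real, w, b))
                  + np.log(1.0 - discriminator(x_fake, w, b)))
# Generator objective: make the discriminator output 1 on fakes.
g_loss = -np.mean(np.log(discriminator(x_fake, w, b)))
print(d_loss, g_loss)
```

Here the untrained generator is easily caught, so its loss is large; training would repeatedly lower `g_loss` by moving the fakes toward the real data, while the discriminator adapts in turn.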
There's exciting work right now by different teams on applying GANs to the entertainment industry, everything ranging from computer graphics to computer games, to media, to just making up new content like this from scratch. Finally, the knowledge graph is another important AI technique that I think is very underrated. If you do a search on Google for Leonardo da Vinci, you might find this set of results, with this panel on the right of information about da Vinci. If you do a search on Ada Lovelace, you'll similarly find a panel of additional information on the right. This information is drawn from a knowledge graph, which basically means a database that lists people and key information about those people, such as their birthday, when they passed away, their bio, and other properties. Today, different companies have built knowledge graphs of many different types of things, not just people: databases of movies, of celebrities, of hotels, of airports, of scenic attractions, and on and on. For example, a knowledge graph with hotel information may have a big database of hotels as well as key information about those hotels, so that if you look them up on a map you can find the right information relatively quickly. The term knowledge graph was initially popularized by Google, but the concept has spread to many other companies. Interestingly, even though knowledge graphs are creating a lot of economic value for multiple large companies at this point, this is one subject that is relatively little studied in academia, and so the number of research papers you see on knowledge graphs seems disproportionately small relative to their actual economic impact today. But, depending on what industry vertical you work in, perhaps some of the techniques for building knowledge graphs will also be useful for building a big database of information about something relevant to your company. 
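As a concrete illustration, a knowledge graph can be thought of as a collection of (entity, property, value) facts, which is enough to serve a search-results panel like the ones described above. The little Python sketch below invents a minimal schema around the two people mentioned; the birth and death years are well-known facts, but the schema and function names are made up, and a real knowledge graph is vastly larger and richer.

```python
# A toy knowledge graph stored as (subject, property, value) triples.
triples = [
    ("Leonardo da Vinci", "born", "1452"),
    ("Leonardo da Vinci", "died", "1519"),
    ("Leonardo da Vinci", "occupation", "polymath"),
    ("Ada Lovelace", "born", "1815"),
    ("Ada Lovelace", "died", "1852"),
    ("Ada Lovelace", "occupation", "mathematician"),
]

def panel(entity):
    """Collect everything known about one entity, like a search side panel."""
    return {prop: value for subj, prop, value in triples if subj == entity}

print(panel("Ada Lovelace"))
# {'born': '1815', 'died': '1852', 'occupation': 'mathematician'}
```

The same triple layout extends directly to hotels, movies, airports, and so on: only the entities and properties change, not the lookup logic.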
In this video you learned about unsupervised learning, transfer learning, reinforcement learning, GANs, and knowledge graphs. It seems like a lot, doesn't it? I hope that some of these ideas will be useful to your projects, and that knowing what these algorithms are will make it easier for you to have fruitful discussions with AI engineers. This week we've talked a lot about how AI can affect companies, and maybe how you could use AI to affect your company. AI is also having a huge impact on society. So, how can we understand the impact that AI is having on society, as well as make sure that we do ethical things, and that we use AI only to help people and make people better off? Next week we'll talk about AI and society. Thanks for sticking with me up to this point, and I look forward to seeing you in the final week of videos for this course.