We now move up the pyramid to knowledge. Recall from our discussion on information that a graph like this is information. Let me show you how it turns into knowledge. A child comes into the clinic with the growth curve that you see on the left-hand side. Recall that the bottom curve is the third percentile, so this child started out a little small but then dropped percentiles, and now he's way below the third percentile. So this child is very small for age, in both height and weight. When we see a story like this, we worry that maybe the parents are starving the child, maybe the child has a metabolic disease, or some other chronic illness. So I'm about to pick up the telephone and call protective services to say, "We have a child who has failure to thrive and we're worried about him." But then I look at the patient and realize, "Wait a minute." The pink chart I've been working with was normed on white children from Denver, Colorado in the 1960s. The patient in front of me is from overseas, and it's the 1990s. Does that growth chart apply to this child? Back then, I had to scramble, looking through an old search engine called AltaVista for a more appropriate growth chart, and with what I found I was able to make an argument that this child is actually much closer to the normal growth curve for his own population. In fact, the parents I'm seeing are short. So let's just call this normal for this family rather than call protective services. Today, the WHO has a growth chart, which you see in the middle, and I've plotted the exact same points on it; you can see that this child is in fact not abnormal for age. So there's knowledge in knowing which reference applies, and a little bit of wisdom in knowing when not to use the knowledge that you have.

Predictive analytics is all the rage today. A big goal is to take a patient's data and come up with a formula that predicts what the patient is at risk for. Here are the results of a study of such a formula for congenital diaphragmatic hernia. This is where the diaphragm in the fetus does not close, so the intestines rise up into the chest, which keeps the lungs from developing normally, which means the child is born without the ability to exchange oxygen for carbon dioxide, which is not a good thing. In this case, they wanted to come up with a risk score for this fetus: given the diaphragmatic hernia, what is the mortality rate once the child is born? You can see here that the lowest-risk group has a mortality of about 10 percent, which is not a good mortality rate to have, but the score predicted it quite accurately, and similarly for the middle-risk and the highest-risk groups. Two points come out of this. Number one, a risk score has a number attached to it, and somebody still has to interpret whether 10 percent is high or low. Even more importantly, these folks did the right thing and validated their formula; they didn't just have people use it blindly. The take-home message is that if you're going to use any AI, machine learning, deep learning, I don't care what you call it: if you don't validate it, or can't find that it's been validated, you should not use it.
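To make that take-home message concrete, here is a minimal sketch of one kind of validation, a calibration check that compares the mortality a score predicts in each risk stratum with what was actually observed in a separate set of patients. The strata, counts, and rates below are invented for illustration; they are not the numbers from the diaphragmatic hernia study.

```python
# Calibration check: does the score's predicted mortality match what
# actually happened, stratum by stratum? All numbers here are made up
# for illustration, not taken from the published study.

risk_strata = [
    # (label, predicted mortality, deaths observed, patients in stratum)
    ("low",    0.10,  9,  95),
    ("middle", 0.35, 33,  90),
    ("high",   0.70, 68, 100),
]

for label, predicted, deaths, n in risk_strata:
    observed = deaths / n
    print(f"{label:>6}: predicted {predicted:.0%}, "
          f"observed {observed:.0%}, "
          f"gap {abs(predicted - observed):.1%}")

# A well-calibrated score shows small gaps in every stratum on data it
# was NOT trained on; large gaps mean the formula should not be used.
```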
There are a lot of risk calculators that are more common than the one I just showed you. This one comes from the cancer community. The point I simply want to make here is that a lot of the data necessary for this risk calculation comes from the kinds of things we've seen already. Some of it is atomic data, like genetic tests, laboratory results, or some specific experiences. Some of it may have to come from unstructured data in the medical record. So getting the risk calculator to compute the risk for an individual patient from the EHR may not be straightforward, but we do have these validated risk calculators. Notice that once the calculator produces a number, it compares the patient's risk to the population's. In this case, the 1.86 percent is the same in both, the implication being that you shouldn't do anything. What I like, in terms of knowledge, is that the machine goes one step further and says that 1.86 percent mortality is equivalent to a 98.14 percent chance of survival. This is a cognitive issue called framing. We know that if I tell you that you have a 10 percent mortality risk, you'll feel much worse than if I tell you that you have a 90 percent chance of survival, even though the numbers are exactly the same. So that's the application of knowledge to how data are displayed.

Now, the type of knowledge that we often want to see in health IT is this sort of active advice, an active alert. In this case, a lab result has come back for Staph aureus, and you see the report as text on the left-hand side. Somehow, this machine was able to read the text and say, "Okay, I know that this is Staph aureus. I'm able to read the laboratory report that tells me the sensitivities, and I, the computer, am highlighting the first row, which shows that this organism is sensitive to oxacillin. And at the bottom, I, the computer, am displaying the order that I think you should place, which is to order oxacillin at a certain dosage." So the computer has done a lot of work: it has interpreted the lab result, presented it in a way that you can understand, and told you what it thinks you ought to do. That's knowledge for action. What I like about this particular example is that even though it comes from Korea just a few years ago, the same functionality actually goes back to the late 1980s in Utah, where they had this level of capability all the way back then.
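Here is a minimal sketch of that active-advice pattern, assuming a structured culture result rather than free text; the field names, the rule, and the dose string are invented for illustration and are not taken from the Korean system or the Utah system.

```python
# Knowledge for action: a rule that both interprets a culture result
# and proposes the next step, leaving the decision to the clinician.
# Data structures and the dose are illustrative assumptions only.

culture = {
    "organism": "Staphylococcus aureus",
    "sensitivities": {"oxacillin": "S", "penicillin": "R"},  # S = sensitive, R = resistant
}

def suggest_order(result):
    """Return a suggested antibiotic order, or None if no rule fires."""
    if (result["organism"] == "Staphylococcus aureus"
            and result["sensitivities"].get("oxacillin") == "S"):
        return "Order oxacillin IV at the standard dose (dose string illustrative)"
    return None

advice = suggest_order(culture)
if advice:
    print(advice)  # the clinician still decides whether to accept it
```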
I mentioned images before; you can have knowledge applied to images as well. On the left-hand side you see an MRI of a knee, in case you didn't know that, and on the right-hand side you see the machine identifying the parts of the knee. The green thing is the femur, the red thing to the left is your kneecap, and the red thing at the bottom is the tibia. There's a lot of calculation going on in figuring out where the knee is, so the machine knows a lot about anatomy, a lot about MRIs, and a lot about outlining. Now, you could argue: does the machine know, or is it just processing information? I'm not going to get into that philosophical argument. I think there's clearly implicit knowledge about knees and images here, whereas with the antibiotic example the knowledge is a bit more explicit: it knows what to treat and how to treat it.

So, knowledge is judgment: what you should do, or what a result means in comparison to other people, the implications. Wisdom, as we pointed out, is knowing when not to follow that advice. There's knowing how to do something, like how to treat a patient, and there's knowing that, as in "I know that that's a knee." We've talked about implicit versus explicit knowledge, and this is subtle. If I show you that outline of the knee, many people will say, "Oh, the machine knows about knees." That's the way I said it, but it may not really be explicit knowledge, and you could argue again that the active advice is a bit more explicit.

Knowledge also depends on meaning. If you have ibuprofen in the allergy box, and I have a rule that says don't give people drugs they're allergic to, the system is going to treat the ibuprofen data in the allergy box differently from the ibuprofen data in the medication box, because in the medication box it's not an allergy, and therefore the allergy rule doesn't apply; there's a small sketch of this at the end of this section. This whole notion of what knowledge is, where it is, who's got it, and how it works is the subject of an entirely different course, which you'll be hearing about later. So now you've seen data, information, and knowledge. I hope you have a better sense of what we mean by these different things, and as we go forward, we'll see whether you can identify them and how to exchange them.
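As promised above, here is a minimal sketch of the allergy-box point, with invented field names: the same drug name triggers the rule only when it sits in the allergy list, because that is where it means "allergy."

```python
# Knowledge depends on meaning: identical strings, different fields,
# different behavior. Record layout is an illustrative assumption.

patient = {
    "allergies":   ["ibuprofen"],
    "medications": ["lisinopril", "ibuprofen"],  # same string, different meaning
}

def allergy_alert(record, proposed_drug):
    """Fire only on the allergy list; the medication list is ignored."""
    return proposed_drug in record["allergies"]

print(allergy_alert(patient, "ibuprofen"))   # True  -> alert fires
print(allergy_alert(patient, "lisinopril"))  # False -> no alert
```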