Now that you're a little more familiar with the framework and the output-action pair (OAP), we can talk about the action part and how it helps us arrive at a utility decision. Most people assume this is strictly an economic consideration or a patient-outcomes consideration, and yes, those do play a part, but it's also about other costs like time, legal exposure, and collateral harm, all of which are easy to neglect and difficult to measure.

Let's use an example to illustrate these steps. Suppose we would like to design a machine learning project to reduce deaths from sepsis in the ICU. We could choose a classification-for-science approach, which might be to use EHR data to classify ICU patients into different sepsis risk categories for future work on the factors and signals that may inform sepsis treatments. Or we could choose a care-delivery (action) approach, which might be to use historical ICU data to inform adjustment of par quantities of relevant sepsis medications, staffing, and equipment. But for this example, let's consider a prediction-for-medical-practice sepsis model that can ingest real-time patient data and predict, at the point of care, whether a patient is likely to develop sepsis. The output will be the prediction, and the action will be an alert to the clinical team when the predicted risk exceeds a certain threshold. Now it sounds like we have a general direction, and we're able to define our question with the category framework.

Now let's explore our idea a little further and review the output-action pair. Our output here will be a sepsis diagnosis, which will be the label we use to build our model. Sounds easy, right? Well, taking a quick time-out: what exactly is the definition of sepsis? It actually depends on what we're intending to do with our model, which is why we categorized it first and why we're spending time on this now. Sepsis has distinctly different definitions depending on what you're attempting to address. One definition is Sepsis-3, a medically accurate consensus definition that uses specific clinical criteria and is applied to a patient as a formal diagnosis; it was designed by medical experts for clinical use. Another definition is the Medicare sepsis identifier, SEP-1. This is a quality measure that hospitals rely on for billing and quality reporting to the government. In contrast to the first definition, this measure does not represent a medically useful sepsis definition, and it would be problematic for the model we're building. Why? This definition captures only a subsample of patients in a given hospital, so you would not have all the sepsis patients labeled with this approach. What would have happened had you developed your clinical classification model with the quality-measure sepsis definition as the output rather than the clinical one? Well, if it worked at all, it would be unlikely to improve care and might even have unintended harmful consequences. On the other hand, suppose we were trying to build a model that identifies sepsis patients for automated quality accounting and reporting to the government. As mentioned before, this would be an action or care-delivery classification, and our categorization in this case would lead us to use the non-clinical Medicare SEP-1 definition as the output label, building our machine learning model with a dataset that reflects that label.
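To make the label-selection point concrete, here is a minimal sketch in Python of how the same patient record can receive different labels under the two definitions. The record fields and the simplified criteria are hypothetical stand-ins for illustration, not the actual Sepsis-3 or SEP-1 specifications, which are far more detailed.

```python
# Hypothetical, simplified labeling functions for illustration only.
# Real Sepsis-3 and SEP-1 criteria are much more detailed; these stand-ins
# show how the choice of definition changes which patients get labeled.

def label_sepsis3(record):
    """Clinical-style label: suspected infection plus acute organ
    dysfunction (approximated here by a SOFA score increase >= 2)."""
    return record["suspected_infection"] and record["sofa_increase"] >= 2

def label_sep1(record):
    """Quality-measure-style label: driven by billing/reporting flags,
    which capture only a subsample of clinically septic patients."""
    return record["sep1_billing_flag"]

# The same ICU stay can disagree across definitions:
patient = {"suspected_infection": True, "sofa_increase": 3,
           "sep1_billing_flag": False}

print(label_sepsis3(patient))  # True  -> septic for the clinical model
print(label_sep1(patient))     # False -> unlabeled for the accounting model
```

A clinical prediction model would train on a cohort labeled by the first function; the quality-reporting model would train on the second, with a cohort assembled to match.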
If we had instead used the clinical definition for that accounting model, it would have caused over-reporting of in-hospital sepsis, leading to major problems with the hospital system's accreditation with the government. But because we just took this course, we know, based on our categorization exercise, that we want to build a model that predicts sepsis for a medical-practice intervention, so our output will need to be based on medically relevant information. Our categories helped us think through proper label and cohort selection for a good use case and output.

Now we need to talk about the action part, or what I like to refer to as the "so then what" part of OAP planning. First off, the likely action paired with a positive sepsis prediction will be an immediate escalation of medical therapy. But that action also has costs: the cost of the therapy itself, plus the costs of false positives, that is, unneeded treatment, along with less visible costs like alert fatigue for the provider. Further, false negative predictions have costs too: patients with sepsis may have treatment delayed because the model failed to flag them. And even a true positive alert carries no assurance that the information will change primary or even secondary outcomes. As you can see, the action based on the machine learning model, even if the model can be successfully built to perform the task it was trained to do, could still lead to worse and more expensive care if the output-action pairing is not optimized.

In other words, the OAP utility analysis really helps us think about this "so then what" question. It affords a rough understanding of the minimum acceptable performance, how the output would lead to action across many possible scenarios, and what the overall utility of the model would be. It also helps answer the question we started with: whether the problem is worth solving with machine learning in the first place. You can see how useful it is, before even starting the project, to think in terms of how one would act given a model's output by considering its utility in a healthcare environment. While model evaluation typically focuses on metrics such as positive predictive value, sensitivity (recall), specificity, and calibration, the constraints on the action triggered by the model often have a much larger influence on model utility. That's because many factors affect the clinical utility of a predictive model: the lead time offered by the prediction, the existence of a mitigating action, the costs of intervening and the costs of false positives and false negatives, the logistics of the intervention, and even incentives, both for the individual and for the healthcare system.

Once there is agreement about the potential problem worth solving and the utility of the model, there are still a few other considerations. If every model adds cognitive or physical load for the caregiver or provider, it's not hard to imagine this contributing to burnout. For alerts to work as intended, there also needs to be an effort to de-implement ones that aren't working or are out of date. Finally, it's important to consider how cognitive biases play a role in study or project design; a lack of clear analytic thinking can lead to wasted resources. We'll talk a lot more about this later on.
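As a rough sketch of the "so then what" utility analysis described above, the snippet below computes the expected net utility of an alert-and-treat policy and a break-even alert threshold from hypothetical benefit and cost figures. All numbers and variable names are assumptions for illustration, not validated clinical estimates.

```python
# A back-of-the-envelope OAP utility sketch (all figures hypothetical).
# Positive utility units represent benefit; costs are subtracted.

def expected_utility_per_patient(prevalence, sensitivity, specificity,
                                 benefit_tp, cost_fp, cost_fn):
    """Expected utility per screened patient for an alert-and-treat policy."""
    tp = prevalence * sensitivity               # alerted, truly septic
    fn = prevalence * (1 - sensitivity)         # missed septic patient
    fp = (1 - prevalence) * (1 - specificity)   # false alarm
    return tp * benefit_tp - fp * cost_fp - fn * cost_fn

def break_even_risk(benefit_tp, cost_fp, cost_fn):
    """Minimum predicted risk at which alerting beats not alerting, where
    catching a true case is worth its benefit plus the averted miss cost."""
    return cost_fp / (cost_fp + benefit_tp + cost_fn)

# Hypothetical inputs: 5% sepsis prevalence; timely treatment worth 100
# units; false-alarm cost 5 units (unneeded therapy plus alert fatigue);
# missed-case cost 60 units (delayed treatment).
u = expected_utility_per_patient(prevalence=0.05,
                                 sensitivity=0.80, specificity=0.90,
                                 benefit_tp=100, cost_fp=5, cost_fn=60)
print(u)                           # ~2.9 -> net positive under these assumptions
print(break_even_risk(100, 5, 60)) # ~0.03 -> alert above ~3% predicted risk
```

Note how the answer is dominated by the cost terms rather than by the usual model metrics: raising the false-alarm cost (say, to reflect severe alert fatigue) pushes the break-even threshold up and can flip the expected utility negative even when sensitivity and specificity look respectable.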
We must understand, even after being satisfied with the modeling approach and the OAP, that in a clinical trial setting we have to hold true to one concept: if the model does not meet a set minimum requirement, we must resolve to pull the plug on it. To make this point one more time: there is no question that a failure to spend significant time and effort planning and thinking about whether the problem is worth solving and how the resulting model will be deployed will be a regret that haunts you far down the road.