This week you'll learn about methods to interpret the AI models you've already built for diagnosis and prognosis. You'll learn about methods for interpreting the tree-based models that you built in Course 2 and the deep learning models you built in Course 1. In this lesson, you will learn about one method for the interpretation of machine learning models. This method allows you to interpret a model by finding out how much each feature contributed to the model.

Let's say we had a prognostic model that uses blood pressure, BP, and age to get the risk of death. Let's see how we can find out the importance of each of these features to the model. The first method we will look at to determine feature importance is the drop-column method. In this method, we have our original model, which uses blood pressure and age as input. We now train two other models: one that uses only age as input, and another that uses only blood pressure as input. We'll refer to these models by their inputs using this set notation.

We can evaluate each of these prognostic models on the test set using a metric such as the C-index. The model with both blood pressure and age gets the highest C-index, followed by the model with just age, followed by the model with just blood pressure. Now, we can determine how important age or BP was by looking at the difference in performance between the model with and without that feature. So for age, we take the difference between the C-index of the full model and that of the model without the age feature, 0.90 minus 0.82, to get a difference of 0.08. Similarly, to find the importance of blood pressure, BP, we take the difference between the model that uses both features and the one that doesn't include BP, 0.90 minus, this time, 0.85, and find that the difference is 0.05. Thus we're able to see that age has a higher feature importance than blood pressure. This method is called the drop-column method because we build each extra model by dropping a feature.
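The arithmetic above can be written as a small computation. The C-index values are the ones quoted in the example; nothing else is assumed.

```python
# Drop-column feature importance from the C-index values in the example.
full_cindex = 0.90                  # model trained on {Age, BP}
cindex_without = {"Age": 0.82,     # model without Age, i.e. trained on {BP}
                 "BP": 0.85}       # model without BP, i.e. trained on {Age}

# Importance of a feature = drop in C-index when that feature is removed.
importance = {feat: round(full_cindex - c, 2)
              for feat, c in cindex_without.items()}

print(importance)  # {'Age': 0.08, 'BP': 0.05}
```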
Features are usually represented as columns in a table, hence the name drop-column method. Here's a tabular illustration of this method: we have our full model trained on both the age and blood pressure columns along with the outcome; then we drop the BP column to form the second model; finally, we drop the age column to form the third model.

The challenge with this method is that we have to build multiple models. With two features, we have to build these two additional models; with three features, we have to build three additional models. In general, we would have to build as many extra models as there are features, and this can get computationally very expensive.
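The general procedure can be sketched as a loop that drops one column at a time and retrains. Everything here is illustrative: `train_and_score` is a hypothetical stand-in (a toy nearest-neighbour classifier scored by accuracy, where in practice you would train your actual prognostic model and report its C-index on the test set), and the patient data are made up.

```python
def train_and_score(X_train, y_train, X_test, y_test):
    # Toy 1-nearest-neighbour model scored by test-set accuracy.
    # Stand-in for training the real prognostic model and computing its C-index.
    def predict(x):
        nearest = min(range(len(X_train)),
                      key=lambda i: sum((a - b) ** 2
                                        for a, b in zip(X_train[i], x)))
        return y_train[nearest]
    return sum(predict(x) == y for x, y in zip(X_test, y_test)) / len(X_test)

def drop_column_importance(features, X_train, y_train, X_test, y_test):
    # Score the full model once, then retrain one model per dropped feature.
    full_score = train_and_score(X_train, y_train, X_test, y_test)
    importance = {}
    for j, name in enumerate(features):
        # Drop column j from both splits and retrain from scratch.
        drop = lambda X, j=j: [row[:j] + row[j + 1:] for row in X]
        score = train_and_score(drop(X_train), y_train, drop(X_test), y_test)
        importance[name] = full_score - score  # performance lost without feature
    return importance

# Made-up example: each row is [age, blood pressure]; the outcome is binary.
X_train = [[30, 120], [40, 130], [60, 140], [70, 110]]
y_train = [0, 0, 1, 1]
X_test = [[35, 115], [65, 135]]
y_test = [0, 1]

print(drop_column_importance(["Age", "BP"], X_train, y_train, X_test, y_test))
```

Note that the loop trains one extra model per feature, which is exactly the cost the lecture points out: with many features, this becomes expensive quickly.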