So in this last section of the lesson, we're just going to mention some of the other commonly used nonparametric tests. I want to start off with the Kruskal-Wallis test. There's a beautiful article by Parazzi et al from 2015, in which they looked at ventilatory abnormalities in cystic fibrosis patients on the treadmill. For their comparisons between the groups they used the Mann-Whitney U test and the Kruskal-Wallis test, so again, they were comparing the medians between groups. The Kruskal-Wallis test is very similar to the Mann-Whitney U, which uses rank sums for two groups. The Kruskal-Wallis is analogous to the one-way ANOVA: we use it when we're comparing the medians of more than two groups. Again, there are two methods to go about it, with different tables, but what you're going to get in the end is a p-value.

Remember the proper way to go about it: if you are comparing more than two groups, you do the Kruskal-Wallis test first. If it is ordinal categorical data, or numerical data that does not come from a normally distributed population, do the Kruskal-Wallis test first. If you find a p-value that is less than your alpha value, then go ahead and do the pairwise analysis, because the Kruskal-Wallis test, as with the ANOVA, is not going to tell you which groups differ from which other groups; it is only going to tell you that, as a whole, there is a difference somewhere. Only when you find a significant p-value do you go ahead and do pairwise comparisons between pairs of groups, using the Mann-Whitney U test (also known as the Wilcoxon rank-sum test).

Then we get to the Wilcoxon signed-rank test, which is different from the Mann-Whitney U; this is the signed-rank test, also called the Wilcoxon t-test. It is really analogous to the paired-sample t-test, where we're looking at paired data.
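The workflow just described, the omnibus test first, pairwise comparisons only if it is significant, can be sketched with SciPy. This is a minimal illustration on made-up data (the group values and names are hypothetical, not from any of the cited studies):

```python
# Sketch of the Kruskal-Wallis workflow: omnibus test first,
# pairwise Mann-Whitney U comparisons only if p < alpha.
from itertools import combinations
from scipy.stats import kruskal, mannwhitneyu

# Hypothetical skewed measurements for three groups
groups = {
    "A": [12, 15, 14, 10, 23, 45, 11],
    "B": [19, 17, 28, 33, 21, 40, 25],
    "C": [13, 16, 12, 14, 18, 22, 15],
}

alpha = 0.05
stat, p = kruskal(*groups.values())  # omnibus test across all groups
print(f"Kruskal-Wallis H = {stat:.2f}, p = {p:.4f}")

# Only when the omnibus test is significant do we ask WHICH groups differ
if p < alpha:
    for (name1, data1), (name2, data2) in combinations(groups.items(), 2):
        u, p_pair = mannwhitneyu(data1, data2)
        print(f"{name1} vs {name2}: U = {u}, p = {p_pair:.4f}")
```

Note that the pairwise step is skipped entirely when the overall p-value is above alpha, which is exactly the rule described above.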
So it's the same set of patients with measurements before and after an event, or identical twins, as we've discussed. And remember, this test combines signs and ranks: first we take the differences and note the sign of each value, and then we rank the absolute values, since some of them will be negative.

Next is Spearman's rank correlation. There's an article by Bello et al from 2013 that you can look at, in which they examined the knowledge of pregnant women about birth defects. They used Spearman's rank correlation, which is a form of correlation analogous to our linear regression, the approach we use when both sets of numerical values are normally distributed. If one or both of them do not come from a population with a normal distribution, we use Spearman's rank. More specifically, it is analogous to Pearson's product-moment correlation. Remember that Pearson's coefficient went from negative one to positive one, and we're going to find exactly the same for Spearman's rank.

The very last one is Kendall's rank correlation. You can read the article by Paul and colleagues in the Journal of Applied Basic Medical Research, in which they looked at platelet aggregation; they used Kendall's rank correlation, which is a bit different from Spearman's rank. Spearman's rank is very accurate if you end up not rejecting the null hypothesis, that is, if the p-value is more than the alpha value of, say, 0.05. As soon as the result becomes significant, it loses a bit of its correctness, if I can say that. Kendall's rank correlation uses a more sophisticated way to do the ranking, and it is perhaps the more proper way to do your correlation if your numerical sets of values are not normally distributed, specifically if you do find a significant result with Spearman's rank. So, those would be the most common nonparametric tests; actually, quite a bit of fun, and quite useful.
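The three tests just mentioned are also available in SciPy. Here is a minimal sketch on hypothetical paired and correlated data (the numbers are invented for illustration only):

```python
# Sketch of the Wilcoxon signed-rank test (paired data) and the two
# rank correlations (Spearman and Kendall) using SciPy.
from scipy.stats import wilcoxon, spearmanr, kendalltau

# Hypothetical paired measurements: same patients before and after
before = [140, 132, 128, 150, 145, 138, 160, 155]
after  = [135, 130, 129, 141, 138, 132, 150, 148]

# Signed-rank: signs the differences, then ranks their absolute values
w_stat, w_p = wilcoxon(before, after)
print(f"Wilcoxon signed-rank: W = {w_stat}, p = {w_p:.4f}")

# Two hypothetical numerical variables, at least one not normally distributed
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [2, 1, 4, 3, 7, 8, 6, 9]

# Both coefficients run from -1 to +1, like Pearson's
rho, rho_p = spearmanr(x, y)
tau, tau_p = kendalltau(x, y)
print(f"Spearman rho = {rho:.2f} (p = {rho_p:.4f})")
print(f"Kendall tau  = {tau:.2f} (p = {tau_p:.4f})")
```

Both correlation coefficients come back on the same minus-one-to-plus-one scale, so they read the same way as Pearson's coefficient; they differ in how the ranks are compared internally.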
And without open data, so that we can all look at the data ourselves, you've got to wonder how many times in your life you have read the results of a t-test where it was not proper to use a parametric test. Nonparametric tests are very good tests; as we mentioned in the beginning, they do lose a bit of power when the differences between groups are small, but really not that much. They are quite a clever, safe way of looking at the data. As soon as the values that we are looking at do not come from a population with a normal distribution, we have to use nonparametric tests, and of course they are the only tests we can use if we're talking about ordinal categorical data. As long as you can order the set of values from smallest to highest, you can use nonparametric tests. Look out for them in the literature; they are quite an interesting type of test.