Hello everybody, I'm so glad you made it to module three. In this module, we're going to start loosening up some of the assumptions we've been using so far to do hypothesis testing. First up, in this first lesson or video from module three, I have to review, and maybe introduce to you depending on your background, a couple of continuous distributions. The point of this video is to introduce you to the t distribution and the chi-squared distribution. So if you're familiar with those and want to skip the video, go for it. But since we're here, for completeness, I did want to put in a couple of other distributions that we've already talked about, just to round it out, and I'll touch on them very briefly. First up, the normal distribution. We've seen this way too much. It has two parameters: a mean mu, which lives between minus infinity and infinity, and a variance sigma squared, which is positive. And then the pdf looks like this, and of course you know that this is the bell curve. If you do some calculus on this pdf to find the points of inflection, where you go from concave down to concave up, you'll find that they are exactly one standard deviation away from the mean. So now you can really see the role of that variance, or its square root, the standard deviation. A larger variance means a larger standard deviation; when your variance is greater than one, the curve flattens out and goes out further before it changes concavity. Another quick review: the exponential distribution. And I actually have a point in reviewing this one. So even if you're comfortable with the exponential distribution, there is something you might want to know here, and it's really about the notation I'm going to be using. It has one parameter, and I call it a rate parameter. You may not have learned about the exponential distribution that way; let me get to this in just a moment. So far, we've used this pdf.
And this is a pdf that looks like this. You can see up here, if you plug in x equals zero, you get e to the zero, which is one; so that's the function evaluated at x equals zero, and then it decays like an exponential. Now the mean, or expected value, mu, is the expected value of X. In general this is the integral from minus infinity to infinity of x times the pdf. I plugged in the pdf, and that restricts the limits of integration, because the pdf is actually zero for negative x. If you really wanted to write this out, you could write two integrals: the integral from minus infinity up to zero of x times zero, which goes away, and then the rest. So this is an integration by parts. We can also do something I call integrating without integrating, which is coming up; that means you have a lot of pdf knowledge, and you can use it to your advantage to figure out some integrals. But right now, by integration by parts, you get one over lambda. The variance is the expected squared deviation of this exponential random variable from its mean. This mu is the expected value of X. So if you square this out, run the expectation through because it's a linear operator, and rewrite the mu as the expected value of X, you will eventually get down to this: the expected value of X squared minus the expected value of X, all squared. The expected value of X squared is the integral from 0 to infinity of x squared times the exponential pdf. If you did it, you would get two over lambda squared for that expectation, which I didn't bother doing in these slides; you would then subtract one over lambda squared and be left with one over lambda squared for the variance. The main point of me bringing this up again is not just to summarize what we know, but is this: I typically write the exponential distribution like this. Some people write it like this, with lambda in a different role. So this is sort of reparameterized.
Now if I write it like this, we saw that the expected value was one over lambda. And if I write it like this, the expected value is then lambda; the mean is then lambda. I called lambda a rate parameter, but if you parameterize it this way it's called a mean parameter, and there's this inverse relationship. The exponential distribution is usually, or often, used to model inter-arrival times or inter-event times: customers coming into a store or something like that. If the customers come in at a rate of 3 per hour, then the expected time between customers is, on average, 1/3 of an hour. There's this inverse relationship. So when people want to denote the exponential distribution for a random variable X, they write X, squiggly line, "has the distribution", Exp for exponential, and they put a lambda inside. But some of them mean this pdf and some of them mean the other one, and it's hard to know which they're talking about. So if you are reading an internet resource or a textbook and you see this, you need to go back and figure out which pdf they mean by that. I am not going to write exponential with just a lambda inside; I'm going to be more specific. No one really writes this stuff, so this is sort of just for this course. But I'm going to say X is exponential with rate equal to lambda if I mean this pdf, and otherwise mean equals lambda. And here's a secret: I'm actually never going to use that second one, so just focus on the formula and the terminology in the red box. The next continuous distribution on our list is the gamma distribution. This has two parameters. One is known as a shape parameter, for reasons that will become evident shortly. The other one is known either as a scale parameter or as an inverse scale parameter. This parameter is analogous to the one in the exponential distribution, and people use it in two different ways. So I'm going to call the parameters alpha and beta, which is really common. Alpha is going to be known as a shape parameter.
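As an aside, the rate-versus-mean distinction shows up in software too. Here is a quick sketch (my own illustration in plain Python, not part of the course materials): the standard-library `random.expovariate` happens to take the rate lambda, so simulated inter-arrival times should average about 1/lambda.

```python
import random
import statistics

random.seed(0)
lam = 3.0  # rate: 3 customers per hour
# random.expovariate takes the RATE lambda (not the mean), so the
# simulated gaps between arrivals should average about 1/lambda hours:
gaps = [random.expovariate(lam) for _ in range(100_000)]
mean_gap = statistics.mean(gaps)
print(mean_gap)  # close to 1/3
```

When you pick up any library's exponential sampler, check its docs for which convention it uses, exactly as you would for a textbook.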
And in my parameterization, which is also a lot of people's parameterization, beta is known as an inverse scale parameter. You don't have to worry about that; you just have to know that one parameter is alpha, and maybe know that it controls the shape of the distribution, and that the second parameter is beta. You don't have to worry about the words scale and inverse scale, at least for this course. So if I have a gamma distribution with parameters alpha and beta, I mean the pdf looks like this. What is this thing down here? Give me a second; let's go to the next slide, because first I wanted to tell you how I would denote this random variable. If I want to talk about a random variable X with this gamma distribution, I would write X, squiggly line, "has the distribution", a capital letter gamma, and then I would put alpha, beta inside. Now there's a capital gamma in my pdf as well, and these are different things. The way you can tell is that one has two arguments and one has one argument. When there are two arguments in the gamma, it's referring to the name of a distribution; when there is one argument, it's referring to something known as the gamma function. I will define that for you in a few slides, but it is some kind of function, and we're evaluating it at a number alpha, which makes it a fixed number. So right now this thing in front is just some kind of constant. In fact, it's the constant that makes this pdf integrate to one. Okay, alpha was called a shape parameter. You see the alpha up here with x to the alpha minus 1. In the case that alpha is greater than one, you have x to a positive power. That means when you plug in zero for x, this pdf is equal to zero, and when you plug in higher values for x, it starts to increase, but eventually the e to the minus beta x will take over.
So this has a shape like this: when x is 0, it starts at zero, then it goes up, and eventually the e to the minus beta x goes down faster than the x to the alpha minus 1 is going up, and we get this kind of shape. We can play with the alphas and betas to move the shape around. It's always going to be anchored in the same places, but the hump, the mode of the distribution, we can move around depending on our alpha and beta. When alpha equals one, plug it in: up here you've got x to the zero, which is one. So I've got a bunch of constants and then an e to the minus beta x, and this pdf lives on the domain from 0 to infinity. So it's a constant times e to the minus beta x. What does that constant have to be? It has to equal beta, because this is looking like an exponential distribution with rate beta, and the true pdf would be beta e to the minus beta x. We know that's a pdf, so we know it integrates to 1, and you can't multiply it by anything other than one and still have it integrate to one. So for that reason, I know that this entire constant out front, beta to the alpha over the gamma function of alpha, has to equal beta when alpha is one. I have a beta to the one there; I can see the beta. This means that the gamma function evaluated at one had better be one, and I'm going to define it soon. But let's look at alpha less than one. When our shape parameter is less than one, this x to the alpha minus 1 is x to a negative exponent. That's one over x to something positive, and at zero that's blowing up, going off to infinity. So the pdf is shooting up toward an asymptote on the y axis. All of this depends on something called the gamma function. I want to tell you what it is for completeness, and I really think you should know it. But I also know that you can get away without knowing it, as long as you're not freaked out by all of those symbols in the pdf. Because really, they're just a constant.
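To make the three shape regimes concrete, here is a small numeric check of the pdf just described (a sketch in plain Python, using my rate parameterization; not course code):

```python
import math

def gamma_pdf(x, alpha, beta):
    """Gamma pdf with shape alpha and rate (inverse scale) beta."""
    return beta**alpha / math.gamma(alpha) * x**(alpha - 1) * math.exp(-beta * x)

# alpha > 1: the pdf starts at (essentially) zero, then rises to a hump
start_value = gamma_pdf(1e-9, 2.0, 1.0)
# alpha = 1: the pdf reduces to the exponential pdf, beta * e^(-beta x)
x, beta = 0.7, 2.0
expon_gap = abs(gamma_pdf(x, 1.0, beta) - beta * math.exp(-beta * x))
# alpha < 1: the pdf blows up near zero (the asymptote on the y axis)
near_zero = gamma_pdf(1e-6, 0.5, 1.0)
print(start_value, expon_gap, near_zero)
```

The three printed values show the pdf starting at roughly zero for alpha greater than one, matching the exponential exactly at alpha equal to one, and exploding near zero for alpha less than one.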
The gamma function for a positive argument alpha is defined as this integral, and this integral is pretty close to what we had for our gamma pdf. If you put a beta to the alpha in front here and make this e to the minus beta x, then that's what we had before. And if you try to do that integral by making a u-substitution, letting u be beta times x, you can simplify it down to this. So if you were to integrate the gamma pdf from zero to infinity and pull the one over gamma of alpha outside of the integral, the rest of it will have betas, but with the u-substitution it will turn into this integral, which is gamma of alpha. So you have one over gamma of alpha times gamma of alpha, which is one. This is just the constant that makes the pdf integrate to one. And it's got lots of cool properties, three of which I want to mention here. The first one is that gamma of 1 is 1. This is super easy to see: when you look at this integral, you get x to the zero, so you're left with only the integral from zero to infinity of e to the minus x dx, which integrates to 1. And although that's an easy integral to do, you don't have to do it. This is the integrating-without-integrating thing I've been talking about, because e to the minus x is the exponential pdf with lambda equal to one, so we know it has to integrate to one. So plug one in for alpha, this integral is one, and therefore we know that gamma of 1 is 1. Next up, we're going to get a recursive form for gamma of alpha. Assuming that alpha is greater than one, take this integral here and integrate by parts. You may recall that the integral of u dv is u times v minus the integral of v du. If you do that integration by parts, letting u be the x to the alpha minus one and dv be e to the minus x dx, then you're going to get that first piece, which gets evaluated at zero and infinity, and the second piece, which has the integral of v du.
And if u is x to the alpha minus one, then du is the derivative, alpha minus one times x to the alpha minus two, dx. That brings the alpha minus one out front and leaves you with an x to the alpha minus two times e to the minus x, integrated; that's your integral of the second part, the v du. And that gets you the gamma function back, it's just that your alpha has come down by one. So rather than that integral being gamma of alpha, it's going to be gamma of alpha minus 1. So again, integration by parts: you get the first piece, which you evaluate from zero to infinity and which turns out to be zero and goes away, and the second piece, which is an integral, alpha minus one times the same integral again but with alpha minus one in place of alpha. Okay. The third property of the gamma function concerns what happens when you put in a positive integer, n equals 1, 2, 3, on up. The previous property still holds, because an integer is an example of an alpha. So gamma of n, we know from the previous property, is n minus 1 times gamma of n minus 1. Then, using that second property again on gamma of n minus 1, we know that gamma of n minus 1 is n minus 2 times gamma of n minus 2. Putting that together, we now have gamma of n is n minus 1 times n minus 2 times gamma of n minus 2. And if you recurse the gamma of n minus 2 down, you'll get n minus 3 times gamma of n minus 3, and if you keep going until you hit bottom, that is, when n minus whatever gives you one, then the last term in this long product is going to be one. In other words, gamma of n is n minus one times n minus two times n minus three, down to one, which is n minus one factorial. It doesn't make sense to use factorials up here, though, because those alphas are real numbers, and we don't define factorials for real numbers; if you start subtracting one from your alpha, you're not going to hit bottom, you're going to skip over one and zero and go on forever.
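All three properties are easy to spot-check numerically; Python's `math.gamma` implements the gamma function just defined (this snippet is my own illustration, not from the course):

```python
import math

# Property 1: Gamma(1) = 1
g1 = math.gamma(1.0)

# Property 2 (recursion): Gamma(alpha) = (alpha - 1) * Gamma(alpha - 1),
# which holds even for a non-integer alpha > 1:
alpha = 3.7
recursion_gap = abs(math.gamma(alpha) - (alpha - 1) * math.gamma(alpha - 1))

# Property 3: for a positive integer n, Gamma(n) = (n - 1)!
n = 6
g6 = math.gamma(n)
fact5 = math.factorial(n - 1)
print(g1, recursion_gap, g6, fact5)
```

Note that property 2 works at alpha = 3.7, where factorials are meaningless, which is exactly the point made above.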
So this factorial fact is just a special case of the previous fact. Okay, we are ready to find the mean and variance; I'll keep it brief. The mean, which we're going to call mu as a generic symbol, is the expected value of X. That means you integrate from minus infinity to infinity x times the pdf. Now the pdf is 0 for negative x, so that part zeros out and you get the integral from zero to infinity of x times the pdf. This is the integral we want to compute. If I look at just the x part of this pdf, it kind of looks like a gamma pdf with shape parameter alpha plus 1, because remember the exponent on x is always the shape minus 1, and the exponent we're seeing here is alpha, which is alpha plus 1, minus 1. And the other parameter looks like beta, but it's not quite right; the constants aren't right. So I'm going to pull these constants out, put in what I want to see, and then put in the reciprocal of what I've put in so that I really don't change anything, and that looks like this. If my shape is really alpha plus 1, then I need beta to the alpha plus 1, and yet I only had beta to the alpha. So I'm going to write beta to the alpha plus 1 and put a 1 over beta out front to compensate. And I'm going to change the 1 over gamma of alpha to 1 over gamma of alpha plus 1: pull out the 1 over gamma of alpha and put a gamma of alpha plus 1 in the numerator to compensate. And now the integrand is exactly a gamma pdf with parameters alpha plus 1 and beta, and we know it integrates to 1. So what was this integral? It was the mean of the distribution. We now know that the mean of the distribution is just this stuff out front, and we can simplify it with our new tricks: gamma of alpha plus 1. I don't want to use factorials here, because alpha is probably not an integer, but I can use that second property of gamma functions to say that gamma of alpha plus 1 is alpha times gamma of alpha.
And then that gamma of alpha will cancel with this gamma of alpha, and we get alpha over beta, which is what I claimed the mean to be a couple of slides ago. So this is what I mean by integrating without integrating; you might find use for this even outside of probability and statistics. The more probability and statistics you learn, the more pdfs you know and feel comfortable with, and the more other functions you can manipulate to look like a known pdf. Then you can say the integral of this part is 1, and you don't have to do any work, certainly no integration by parts or anything like that. Pretty cool. Okay, the variance sigma squared is the variance of X. This is the expected squared deviation of X from its mean mu. Mu is the expected value of X. Square this out, run the expectation through like the linear operator it is, simplify, replace the mean mu with the expected value of X, and there you go. This guy over here, the expected value of X, we just computed: that was alpha over beta. The expected value of X squared can be computed the same way. Put x squared in front of the pdf, bring it in and combine it with the x to the alpha minus 1, giving you an x to the alpha plus 1, and then say that kind of looks like a gamma pdf with shape parameter alpha plus 2. So if you have some extra time, you might want to try it and see if you can't get down to this final answer here. Now for the disclaimer: just like with the exponential distribution, the gamma distribution is parameterized two ways in terms of the beta. Remember, my exponential distribution was lambda e to the minus lambda x, and some people, maybe even you before this course, parameterize it as 1 over lambda e to the minus x over lambda. That changes things: that changes your mean and variance, so where I got 1 over lambda, you would get lambda as your mean if you parameterized it the other way. The same thing happens with the beta.
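This parameterization trap shows up in software too. For instance, Python's standard-library `random.gammavariate(alpha, beta)` treats its second argument as a scale (the 1-over-beta convention), so to simulate from the pdf as I've written it, you pass the reciprocal of the rate. A quick simulation sketch of my own (not course code) confirms the mean alpha over beta and the variance alpha over beta squared:

```python
import random
import statistics

random.seed(1)
alpha, beta = 3.0, 2.0  # shape and RATE, in this lecture's convention
# random.gammavariate's second argument is a SCALE, so pass 1/beta:
samples = [random.gammavariate(alpha, 1.0 / beta) for _ in range(200_000)]
m = statistics.mean(samples)      # should be near alpha / beta   = 1.5
v = statistics.variance(samples)  # should be near alpha / beta^2 = 0.75
print(m, v)
```

If you forgot the reciprocal here, you would see a mean near alpha times beta instead, which is exactly the flipped answer the other parameterization gives.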
So a lot of people in the world, I'm going to guess it's almost half and half: a lot of people parameterize the gamma pdf like I do, and a lot of people replace all of the betas with 1 over beta. So if our mean came out to be alpha over beta, their mean is going to come out to be alpha times beta; you need to flip the beta over in the mean and the variance. So again, just be careful if you are using books and resources and people to help you. They might be parameterizing the gamma distribution a little bit differently, and yet they'll still write X, squiggly line, gamma of alpha and beta. Okay, so here is a new distribution, or, for some of you, I'm sure you've seen it. And that is the chi-squared distribution. So I'm going to say X has a chi-squared distribution and write it like this: X, squiggly line, "has the distribution", a capital chi squared, and then in here put a parameter. There's one parameter for the chi-squared distribution, and it's usually denoted by an n. It is a positive integer parameter, and n certainly looks like a positive integer. It is known as the degrees of freedom parameter for this distribution, and the reason for that terminology will be coming up, well, in this module somewhere. I didn't write a pdf, but I don't really need to, because the chi-squared(n) distribution is defined as a specific gamma: it's defined to be the gamma distribution with parameters n over 2 and 1/2. Again, be careful: another book or paper or a friend of yours might have it being the gamma distribution with parameters n over 2 and 2, because they use the beta in a different way. Okay, so you can write out the pdf if you want: take your gamma pdf and plug in the n over 2 for alpha and the 1/2 for beta. We can also use the gamma results to get the mean and variance for the chi-squared right away. The mean for our gamma was alpha over beta; our alpha is now n over 2 and our beta is 1/2. So when you divide, things cancel and you just get n.
The variance was alpha over beta squared, and if you plug in our new alpha and beta, you get 2n. So, really nice clean mean and variance for this chi-squared distribution. And the chi-squared distribution is really central to so many things in statistics. So many things can be transformed into chi-squared random variables and then brought back to a common core of knowledge, and we're going to be using that in this course as well. We're going to be transforming random variables into chi-squared distributions and seeing this distribution a lot more than you might expect coming into this course. Okay, so here is a kind of generic curve. I didn't really plot the chi-squared, and in fact it might not look like this, because n could be 1. If n is 1, then it's a gamma(1/2, 1/2), and that means you have an alpha less than 1, so you have that extreme asymptote at the y axis. So this is just a generic pdf. For our hypothesis testing, we're going to want to cut off alpha area in the top tail, or the bottom tail, or split it between two tails, and we're going to use notation very similar to our z critical value. When we wanted to cut off area alpha in the upper tail of the standard normal distribution, we called that lowercase z sub alpha. In this case we're going to call it chi-squared sub alpha, but we need to say a little more, because there's only one standard normal distribution, but there are a lot of chi-squareds: there's the chi-squared of one, the chi-squared of two, the chi-squared of three. So we need to build that into our notation as well. I'm going to write chi-squared sub alpha, n to denote the critical value that cuts off area alpha to the right for the chi-squared(n) pdf. You will see this written many ways; you might see chi-squared sub alpha of n, and even other ways, but this is consistent with the notation we've been using already in this course. Okay, here's the last new continuous distribution for this video.
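Here is one more simulation sketch of my own (not the course's) tying several of these claims together: it builds chi-squared(n) draws as sums of n squared standard normals, a fun fact coming up later in this lesson, then checks the mean n, the variance 2n, and an empirical upper-tail critical value.

```python
import random

random.seed(2)
n = 5          # degrees of freedom
reps = 100_000
# Build chi-squared(n) draws as sums of n squared standard normals
# (a fun fact previewed here; it is stated later in this lesson):
draws = sorted(sum(random.gauss(0.0, 1.0) ** 2 for _ in range(n))
               for _ in range(reps))
mean = sum(draws) / reps
var = sum((d - mean) ** 2 for d in draws) / (reps - 1)
# Empirical critical value chi^2_{alpha, n}: cuts off area alpha to the right
alpha = 0.05
crit = draws[int((1 - alpha) * reps)]
print(mean, var, crit)  # mean near n = 5, variance near 2n = 10
```

The empirical critical value lands near the tabled chi-squared(5) upper 5 percent point of about 11.07, which is the kind of number we'll be pulling from tables or software for hypothesis tests.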
Although we're not done, because after this I have the coolest relationships between all of these to talk about. It is known as the t-distribution. Where it comes from: if you take a standard normal random variable and divide it by the square root of a chi-squared random variable that is itself divided by its degrees of freedom, assuming that standard normal and that chi-squared are independent, the result has a pdf we can work out, and I'm going to show you on the next slide. This is a pdf that is rarely used explicitly, because you can't work with it easily, so we use tables and software; but this is how you define a t-distribution. It's also known as Student's t-distribution. It was first developed by a statistician named William Sealy Gosset, who was working for the Guinness Brewery in Ireland in the early 1900s. They were doing testing on small sample batches of beer, and he was doing statistics for them, and we're going to use some of those very techniques today. For some reason the brewery didn't want other breweries to know they had a statistician on board. I don't know why they wouldn't want them to know that, but Gosset published under the pseudonym Student, and so the t-distribution is sometimes called Student's t-distribution. You can find this pdf as follows. If you took the course before this one, we actually didn't do these kinds of bivariate transformations; we took a random variable, put it through a function, and found the pdf for the resulting random variable. But I did talk about how to do transformations from two random variables to two more, and it involved a Jacobian, so I call it the Jacobian method. You don't need to know it here. But if you want to find the pdf for T, you would label it T1, and then T2 would be any other function of Z and W. Not really any, because some are much easier to work with than others, but in theory any other function. And you would take the joint pdf for Z and W.
And that's just the product of the marginal pdfs, because they're independent. You would transform to the joint pdf for T1 and T2, and then you would remember that you didn't care about T2, you made it up, so you would integrate it out to get the marginal pdf for T1, or T in this case. Please don't worry about it; I just wanted to give you some idea of how these things go. The pdf ends up looking like that. So we write X, squiggly line, t of n; n is a positive integer and is again known as a degrees of freedom parameter, and again I will eventually talk about why we call it that. Right now it's just a name, like the rate parameter, the mean parameter, the degrees of freedom parameter. The pdf looks like this, and it's hard to work with: I see a gamma function here and here, and there's a lot going on in that exponent. But this takes on values from minus infinity to infinity. And it's pretty easy to see, aside from the fact that there are horrible constants here, that it's symmetric about zero, because the x only appears in here as an x squared. In fact, it's a bell curve that is centered at zero and is flatter than the normal distribution, but gets skinnier and taller with larger degrees of freedom. I'm kind of depicting that here: here's the standard normal curve, this is a t(10), and this is a t(20), and it's just going up and up and trying to approach that standard normal. This is actually a convergence in distribution, which we talked about in our review of the central limit theorem. If the CDFs converge and they're nice and you take the derivatives, those will also converge; you can't always pull a derivative through a limit, but you can in this case. So the pdfs of the t-distribution converge to the pdf of the standard normal as the degrees of freedom parameter gets large. And this is not going to totally convince you, but you may recall the limit definition of the function e to the x.
It looks like this. So if I go back to the t pdf, you'll see that I kind of have this: I've got a negative exponent, but I could take that out and raise the whole thing to the minus 1. Something to a power to another power, you get to multiply those powers, so that would get rid of the negative. I do have to deal with the fact that n plus 1 over 2 is not quite n, and I will replace the x in the definition with x squared. And you can show, with the right substitution, that this term here approaches e to the minus x squared over 2, which is getting into normal distribution territory. If we wanted to, we could prove this, but we're not going to do that. Okay, we're going to use the same notation for critical values. If I want to cut off some area alpha in the upper right tail of the t curve, I would use a t; it's already lowercase, so I would use a t sub alpha. And then you also have to say which t-distribution, because there's more than one; it depends on n. So I'm going to say t sub alpha, n; some people say t sub alpha of n. Outside this course, you can use whatever notation you want, as long as you draw the picture and indicate what you're trying to cut off and where; in this course, because of the platform, a lot of things have specific answers, so stick with the notation here. Okay, fun facts. Now, this section is going to be both the most exciting thing you've seen in the last 20 minutes and disappointing. The fun facts are really cool relationships between distributions. Yet it might be disappointing to you, because we can't prove everything in here, and it's not because it's beyond us to do it; it's just outside the scope of this course, and we could spend two or three modules doing this stuff. So there are some fun facts in here that we will prove today.
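Before the fun facts, here is a quick numeric look (my own sketch, coded straight from the pdf formula on the slide) at that t-to-normal convergence: the gap between the t pdf and the standard normal pdf at a fixed point shrinks as the degrees of freedom grow.

```python
import math

def t_pdf(x, n):
    """Student's t pdf with n degrees of freedom."""
    c = math.gamma((n + 1) / 2) / (math.sqrt(n * math.pi) * math.gamma(n / 2))
    return c * (1 + x * x / n) ** (-(n + 1) / 2)

def z_pdf(x):
    """Standard normal pdf."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

# The t curve is flatter than the normal for small n and approaches it as n grows:
gap_10 = abs(t_pdf(1.0, 10) - z_pdf(1.0))
gap_200 = abs(t_pdf(1.0, 200) - z_pdf(1.0))
print(gap_10, gap_200)  # the gap shrinks with more degrees of freedom
```

This is not a proof of the convergence, just a spot check at a single point; the proof sketch with the limit definition of e to the x is the one outlined above.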
There are some that were proven in the precursor to this course, the second course in this specialization, and then there are a couple that I'm just not going to be able to prove for you, simply because of time, because it would mean tangents into things like moment generating functions, things we're not really covering in this course. The first one is that if you start off with a standard normal random variable and you square it, that distribution is chi-squared with one degree of freedom. The second fun fact: suppose you have X1 through Xk (I'd usually use an n, but I'm reserving my n here), independent random variables, not necessarily identically distributed, just independent. Suppose Xi has the chi-squared distribution with some degrees of freedom ni; they can all have different degrees of freedom, which is why they're not necessarily identically distributed, but iid is a special case of this, since you can certainly make all the ni the same. Then if you add up these X's, the new random variable will have a chi-squared distribution where you get to add up all the individual degrees of freedom. You would show this using moment generating functions, and that's just for your information, because we're not going to show it here, but you can go back and check the other course. As I was saying, in particular the ni can all be the same, say all equal to one. So if I add up n independent random variables, each chi-squared with one degree of freedom, so iid chi-squared(1), then the sum is a chi-squared with n degrees of freedom, and that n comes from one plus one plus one plus one, n times. Next, suppose you have X with a gamma distribution with parameters alpha and beta (I'm going to stop giving the disclaimer about the different ways people parameterize this). If you take a positive real number c and multiply X by c, you will still get a gamma distribution.
This is not true of all distributions; multiplying by a constant does not necessarily preserve the family. For example, the Bernoulli distribution takes on the values zero and one, and if you multiply by the constant 2.7, then it takes on the values 0 and 2.7, which is not a Bernoulli distribution anymore; it's similar, but not a Bernoulli. So, two interesting facts here. One is that if you multiply a gamma random variable by a positive constant, you get another gamma distribution. The other is what happens to that c: it ends up going under the second parameter, so we get a gamma distribution with parameters alpha and beta over c. This was proved in the previous course; I'm not trying to sell you on that other course, I'm just trying to make myself feel better for not showing you how to prove it right now. Okay, fun fact number wherever-we-are: if you have a gamma distribution and you plug in alpha equals one, so plug that in here, we know that gamma of one is one, beta to the one is beta, and x to the one minus one is x to the zero, which is one. So we're left with beta e to the minus beta x, which is an exponential distribution with rate beta. So the exponential distribution is a special case of the gamma distribution. Now we can sort of go the other way. I'm not going to say that the gamma is a special case of the exponential, but check this out: if X1 through Xn are iid exponential with rate lambda and you add them up, you get a gamma distribution. The first parameter is the number of things you added together, in this case n, and the second parameter matches that of the exponential distribution, assuming you don't have mixed parameterizations, given how we've parameterized the exponential and the gamma. Another fun fact: if you have X1 through Xn iid from a gamma distribution with parameters alpha and beta, and you add those up, you get a big fat gamma distribution.
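That exponential-sum fact is easy to spot-check by simulation (again, my own sketch rather than course code): the sum of n iid exponential(rate lambda) draws should behave like a gamma(n, lambda), whose mean is n over lambda and whose variance is n over lambda squared.

```python
import random
import statistics

random.seed(3)
n, lam = 4, 2.0
reps = 100_000
# Sum n iid exponential(rate lam) draws; the claim is the sum is Gamma(n, lam):
sums = [sum(random.expovariate(lam) for _ in range(n)) for _ in range(reps)]
m = statistics.mean(sums)      # Gamma(n, lam) mean:     n / lam   = 2.0
v = statistics.variance(sums)  # Gamma(n, lam) variance: n / lam^2 = 1.0
print(m, v)
```

Matching the first two moments doesn't prove the sum is gamma, of course; the real proof goes through moment generating functions, as mentioned above.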
This is maybe not surprising given what we just witnessed, because it's kind of like the sum of a whole lot of exponentials. In fact, you will get a gamma distribution with parameters n alpha and beta, and you can show this using moment generating functions. So given those fun facts, here are some things we know. If you have X1 through Xn iid exponential with rate lambda, then what is the distribution of the sample mean X bar? Let's look at this in two parts. The sum, as i goes from one to n, of the Xi has a gamma distribution with parameters n and lambda: the number of things we added up, and the exponential's parameter. Now put a one over n in front of that; that's like the constant c we worked with before, and that constant ends up under the beta parameter. So the sum is gamma(n, lambda), and the sum times one over n is gamma with parameters n and lambda divided by one over n; that one over n flips up, and we get a gamma(n, n lambda). We can do something similar starting with the gamma distribution. If we have n iid random variables, each with distribution gamma(alpha, beta), then the sum of them is gamma with parameters n alpha and beta. Put a one over n in front, multiplying; that's like the c from earlier in the slides, and it goes under the beta parameter, so we get beta divided by one over n, which flips up, and ultimately we get gamma of n alpha and n beta. I said earlier that the chi-squared distribution is really central to all of statistics, and it's going to be important that we can tie things in to that kind of core knowledge base. So here is a transformation we are going to need in a hypothesis test statistic. Suppose I have X1 through Xn iid exponential with rate lambda, and I'm looking at the sample mean, which we now know has a gamma distribution with parameters n and n lambda, and suppose I want this to look like a chi-squared distribution. Can we do it?
Can we transform this random variable? The chi-squared distribution is a gamma with parameters n over 2 and 1/2. The n in the first slot is okay there, because I can write that n as 2n divided by 2. So we're halfway to a chi-squared distribution with 2n degrees of freedom, but to get the second parameter to be one half, I need to get rid of the n, get rid of the lambda, and divide by two. I can do that by multiplying X bar by 2 n lambda. That's the c for this gamma distribution, and it goes under the second parameter, canceling the n, canceling the lambda, and putting the one half in there. So we have this gamma, which we can write like this, changing n to 2n over 2, and now it's a chi-squared with 2n degrees of freedom. The original point of this video was to introduce you to the t distribution and the chi-squared distribution. I ended up summarizing other things that you probably know, but hopefully you got a few new tidbits of information even out of the summary of the exponential and gamma distributions. In the next video, we're not going to get back to hypothesis testing just yet. The biggest thing we're going to be doing in this entire module is a hypothesis test for the mean mu of a normal distribution when the variance sigma squared is unknown. Because we don't know it, we can't use it, so we're going to replace it with the sample variance; if you don't remember what that is, don't worry, I'll be refreshing you on it. The sample variance is related to the chi-squared distribution, and it makes something else related to a t distribution. And so these are two distributions that we're going to use in hypothesis testing going forward in this course. We're going to get back to hypothesis testing in the third video, but in the second one, we have kind of a theoretical discussion about the sample variance. So, I will see you there.
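As a closing sanity check on the transformation in this video (my own simulation sketch, not part of the lecture): multiplying the sample mean of n iid exponential(rate lambda) draws by 2 n lambda should produce draws with the chi-squared(2n) mean of 2n and variance of 2 times 2n.

```python
import random
import statistics

random.seed(4)
n, lam = 6, 1.5
reps = 100_000
# X-bar ~ Gamma(n, n*lam), so 2*n*lam*X-bar should be chi-squared with 2n df:
vals = []
for _ in range(reps):
    xbar = sum(random.expovariate(lam) for _ in range(n)) / n
    vals.append(2 * n * lam * xbar)
m = statistics.mean(vals)      # chi-squared(2n) mean:     2n = 12
v = statistics.variance(vals)  # chi-squared(2n) variance: 4n = 24
print(m, v)
```

With n = 6 and lambda = 1.5 the simulated mean and variance come out near 12 and 24, consistent with a chi-squared on 12 degrees of freedom.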