This is a five-section course as part of a two-course sequence in Research Methods in Psychology. This course deals with experimental methods whereas the other course dealt with descriptive methods.


From the course by Georgia Institute of Technology

Experimental Research Methods in Psychology



From the lesson

Threats to Internal Validity

- Dr. Anderson D. Smith, Regents’ Professor Emeritus

School of Psychology

Hi, I'm Anderson Smith, and we're talking about threats to internal validity.

And last time, we talked about maturation and history.

Things that can change in the environment and in people that could threaten the internal validity of the relationship we're trying to find between an independent variable and a dependent variable.

And now, I want to talk about regression.

A statistical concept that's very important in looking at differences.

There's a tendency for

extreme measures to be closer to the mean on subsequent measurements.

It's just a statistical happening.

It's going to happen.

That is, if I have an extreme measure on trial one, then it's going to be closer to the mean on trial two, independent of whatever we manipulate.

If you have a somewhat non-random sample, you also get increased regression to the mean.

And if I have the possibility of having more extreme scores, I will have increased regression to the mean.
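The idea can be sketched in a short simulation (my own illustration, not from the lecture): each observed score is a stable "true" level plus random measurement noise, and the most extreme scores on a first measurement sit closer to the population mean on a second one, with nothing manipulated in between.

```python
# Sketch of regression to the mean: observed score = true level + noise.
# Selecting the most extreme trial-1 scores, the same people's trial-2
# scores fall back toward the population mean (here, 100).
import random

random.seed(42)
n = 10_000
true_levels = [random.gauss(100, 10) for _ in range(n)]

def observe(level):
    # observed score = true level + independent measurement noise
    return level + random.gauss(0, 10)

trial1 = [observe(t) for t in true_levels]
trial2 = [observe(t) for t in true_levels]

# take the 5% highest scores on trial one
cutoff = sorted(trial1)[int(0.95 * n)]
extreme = [i for i in range(n) if trial1[i] >= cutoff]

mean1 = sum(trial1[i] for i in extreme) / len(extreme)
mean2 = sum(trial2[i] for i in extreme) / len(extreme)

print(round(mean1, 1), round(mean2, 1))  # trial-2 mean lies closer to 100
```

Nothing about the people changed between trials; the apparent "drop" for the extreme group is purely the noise averaging out.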

So for example, this is an actual study that was done in 2007, looking at batting averages between 2005 and 2006.

And notice that if you have an extreme batting average, it tends to be closer to the mean the following year.

So, these four batting averages are actually closer

to the mean simply due to statistics.

It's just going to happen, because the probability of being closer to the mean is

always going to be greater than being further away from the mean.

Because you're already at an extreme score.

So, regression to the mean.

It was first discussed by Sir Francis Galton, who was knighted in 1909, and he was probably one of the early statisticians.

He actually was the first to talk about correlations, for example.

And he discovered, just to give you an example that the correlation between

the father's height and the son's height was 0.50.

So there's a correlation with the height of the father and

the height of the son, and that's about 0.50.

And he assumed that this 0.50 was what he called the heritability level, the degree of relationship between the two heights.

But he also talked about regression to the mean.

The higher you are in height, or the lower you are in height, the greater the change is going to be, simply because of regression to the mean.

For example, he found that if the father's height was one standard deviation below average, then the predicted height of the son would be 0.50 times 1 standard deviation below average.

And the further the father's height is from the mean, the greater that difference is going to be; the difference is smaller or larger depending on how extreme the scores are.

If the father's height was two standard deviations above the average, that is a more extreme score, then the son's height was predicted to be 0.50 × 2, or 1 standard deviation above average.
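Galton's prediction rule can be written out as a small sketch (my own, not his data; the mean and standard deviation below are illustrative assumptions, only the 0.50 correlation comes from the lecture):

```python
# Predicted son's height regresses halfway toward the mean, because the
# father-son correlation is about 0.50.
MEAN = 69.0   # assumed population mean height in inches (illustrative)
SD = 2.5      # assumed standard deviation (illustrative)
R = 0.50      # father-son correlation reported by Galton

def predicted_son_height(father_height):
    father_z = (father_height - MEAN) / SD   # father's height in SD units
    return MEAN + R * father_z * SD          # regress halfway to the mean

# father two standard deviations above average -> son predicted one SD above
print(predicted_son_height(MEAN + 2 * SD))  # 71.5
```

The formula is just the regression line for standardized scores: predicted son z = r × father z.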

So the size of the difference depends upon the original level of one of the variables.

So tall parents tend to have tall children, but not all that tall.

Because of regression to the mean.

One of the things Sir Francis Galton studied was intelligence, and he believed that it wasn't regression to the mean, it was regression to mediocrity. That is, the kids tend to be closer to the mean: very smart parents, less smart kids. Regression to mediocrity.

So, the tendency for extreme measures to be closer to the mean on subsequent

measurements is what we mean by that.

And the reason is that if we have an extreme measure, the probability of getting an even more extreme measure is simply small. It's a statistical artifact: the next score tends to be closer to the mean, because that's the only direction it can go in, or at least the primary direction it can go in.

A little example from everyday life: if you pick a card and it's the queen, the probability that your partner will pick a card that's higher is very small, and the probability of picking a card that's lower is great, simply because there are not many cards higher than a queen, but there are many cards lower than the queen.
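The card example is easy to check with a little counting (my own sketch; it assumes a standard 52-card deck with aces ranked high, which the lecture doesn't specify):

```python
# Count cards above and below a queen in a standard deck, aces high.
RANKS = list(range(2, 15))  # 2..10, J=11, Q=12, K=13, A=14
QUEEN = 12

higher = sum(4 for r in RANKS if r > QUEEN)  # kings and aces: 2 ranks x 4 suits
lower = sum(4 for r in RANKS if r < QUEEN)   # 2 through jack: 10 ranks x 4 suits
same = 3                                     # the other three queens remain

print(higher, lower)  # 8 cards beat a queen, 40 fall below it
```

Out of the 51 remaining cards, only 8 are higher while 40 are lower, so the next draw is far more likely to move back toward the middle of the deck.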

So, regression to the mean is likely to occur.

So the average goes up or down, depending upon what the extreme score is, and this is just an example: looking at three-point shooting by half in some basketball games.

And overall, you can see scores that are going up.

Scores are going down.

But overall, you can see that it's regression to the mean.

The mean is here, and the variance around the mean is smaller here than it is out there.

Regression to the mean.

It has nothing to do with how good they are as basketball players. And this is another figure showing the same thing.

A regression to the mean: score on day one versus score on day two. And while there are some extreme scores that really are related to whatever it is we're measuring, overall, the higher your score on day one, the lower your score tends to be on day two. So, you get this regression to the mean.

Sometimes, it's called a football-shaped distribution.

The other thing is when you have a trend that looks like this line. In fact, the scores vary around the line in a sort of cyclic way, because you have not only the increase in the trend of the measure over time, but you also have regression to the mean occurring.

So for example, here's the actual data on the incidence rate of childhood Type 1 diabetes in Western Australia, and you can see that the incidence rate of diabetes is going up with time.

But you also see that the actual data show these cyclic variations in the scores, and that's a combination of both: the average trend, which is what we're probably interested in when studying this particular phenomenon, and a regression to the mean.

That is, if a score gets well above the mean, it tends to regress back toward the mean, then it overshoots below it, and so it's sort of constantly regressing to the mean over time.

Thank you.
