This course covers the analysis of Functional Magnetic Resonance Imaging (fMRI) data. It is a continuation of the course “Principles of fMRI, Part 1”.


From the course by Johns Hopkins University

Principles of fMRI 2

87 ratings


From the lesson

Week 3

This week we will focus on brain connectivity.

- Martin Lindquist, PhD, MSc, Professor of Biostatistics, Bloomberg School of Public Health | Johns Hopkins University
- Tor Wager, PhD, Department of Psychology and Neuroscience, The Institute of Cognitive Science | University of Colorado at Boulder

Hi, in this module we're going to be talking about functional connectivity. So, to recap, functional connectivity is defined as the undirected association between two or more fMRI time series and/or performance and physiological variables. So functional connectivity makes statements about the structure of the relationships among different brain regions, and it usually doesn't make any assumptions about the underlying biology. Here again is the example we showed last time, where the VMPFC is correlated with three other regions. That would be a typical functional connectivity result.

Methods for performing functional connectivity include seed analysis and inverse covariance methods, which we'll be covering in this module, and multivariate decomposition methods, which we'll be covering in the next module.

Let's take a look at the simplest form of functional connectivity, which is bivariate connectivity. Here we're interested in whether or not Region A is related to Region B. This provides information about the relationship between the two regions, and it can be performed either on time series data within a subject or on individual differences using contrast maps, one per subject.

So basically, one way of doing this is to calculate the cross-correlation between time series from two separate brain regions. We extract a time series from Region 1 and from Region 2 and calculate their correlation, which we're going to denote r here. Sometimes we also transform it using the Fisher transformation, from the correlation r to the Fisher-transformed correlation z. Now we can do this for multiple subjects. In this cartoon we do it for n different subjects and then perform a group analysis on the z scores. We usually do it on the z scores rather than on the original correlations because the z's are approximately normally distributed; that's one of the effects of the Fisher transformation.
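As a minimal sketch of these two steps, assuming NumPy and simulated time series (the shared signal and noise levels here are invented for illustration, not taken from the lecture's data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated BOLD time series (200 time points) from two regions that
# share a common signal, so they should be positively correlated.
shared = rng.standard_normal(200)
ts1 = shared + 0.5 * rng.standard_normal(200)
ts2 = shared + 0.5 * rng.standard_normal(200)

# Cross-correlation (Pearson r) between the two regional time series.
r = np.corrcoef(ts1, ts2)[0, 1]

# Fisher transformation: z = arctanh(r). The z values are approximately
# normally distributed, which is why group analysis is done on them.
z = np.arctanh(r)
```

In a group analysis, one z value per subject would then be passed to a standard one-sample or two-sample test.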

In seed analysis, this cross-correlation is computed between the time course from some predetermined region that we're particularly interested in, called the seed region, and all other voxels in the brain. This allows the researcher to find regions that are correlated with activity in the seed region. The seed time course can also be a performance or physiological variable.
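A minimal sketch of a seed analysis on toy data (the array sizes, the seed construction, and which voxels track the seed are all assumptions made for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

n_tr, n_vox = 150, 1000            # toy numbers of time points and voxels
seed = rng.standard_normal(n_tr)   # seed time course (a region average,
                                   # or a physiological variable)
data = rng.standard_normal((n_tr, n_vox))
data[:, :100] += seed[:, None]     # make the first 100 voxels track the seed

# Correlate the seed with every voxel in one vectorized step.
seed_c = seed - seed.mean()
data_c = data - data.mean(axis=0)
r_map = (data_c.T @ seed_c) / (
    np.sqrt((data_c ** 2).sum(axis=0)) * np.sqrt((seed_c ** 2).sum())
)
# Voxels correlated with the seed show up with large values in r_map.
```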

Here's an example of the results of a seed analysis, where we're looking at the correlation between brain activity and heart rate. We take heart rate as the seed time course and look at where in the brain we have correlations with heart rate. It turns out that, in this example, the ventromedial prefrontal cortex is very highly correlated with heart rate. Here's a classic paper on resting-state fMRI, where they took activation from the PCC and performed a seed analysis, and found that there were regions that were both positively and negatively correlated with the PCC in resting-state scans.

One of the main issues with time series connectivity as described above is that there may be different hemodynamic lags in different regions. Just because two regions have the same neuronal activation doesn't necessarily mean their hemodynamic responses will line up. So time series from different regions might not match up even if their neuronal activity patterns do. If the lags are estimated from the data, the temporal order may be driven by either the vascular or the neuronal response. The vascular response tends to be uninteresting, while the neuronal response is what we are really interested in getting at.

The beta series approach by Rissman et al. can be used to minimize issues of inter-region differences in neurovascular coupling. The procedure is first to fit a GLM to obtain separate parameter estimates for each individual trial, and thereafter compute the correlation between these estimates across voxels. Here's a cartoon of that. We extract the data from Region 1 and Region 2 again, but now, rather than taking the cross-correlation between the entire time series, we fit a GLM with a separate regressor for each trial. This gives us a trial-specific estimate of the amplitude; these are the black dots that you see in the time course. Now we take the correlation between the values of those black dots, and we do that for each subject. We can again calculate the Fisher-transformed correlation and perform a group analysis. This is a little bit different from the cross-correlation across the entire time series, because now we're only correlating the amplitudes across different trials.
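Here's a minimal sketch of the beta series procedure on simulated data. The trial onsets, the toy "HRF" (a short kernel standing in for a real hemodynamic response), and the noise levels are all assumptions made for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

n_tr, n_trials = 200, 20
onsets = np.arange(5, 5 + 8 * n_trials, 8)[:n_trials]  # hypothetical onsets

# Toy "HRF": a short kernel stands in for a real hemodynamic response.
hrf = np.array([0.0, 0.5, 1.0, 0.5])

# Design matrix with one regressor per trial (the beta series trick).
X = np.zeros((n_tr, n_trials))
for j, onset in enumerate(onsets):
    stick = np.zeros(n_tr)
    stick[onset] = 1.0
    X[:, j] = np.convolve(stick, hrf)[:n_tr]

# Simulate two regions whose trial amplitudes are largely shared, so
# their beta series should correlate, plus measurement noise.
amps = rng.standard_normal(n_trials) + 2.0
y1 = X @ amps + 0.2 * rng.standard_normal(n_tr)
y2 = X @ (amps + 0.3 * rng.standard_normal(n_trials)) + 0.2 * rng.standard_normal(n_tr)

# Fit the GLM: one beta per trial for each region.
beta1, *_ = np.linalg.lstsq(X, y1, rcond=None)
beta2, *_ = np.linalg.lstsq(X, y2, rcond=None)

# Correlate the two beta series across trials, then Fisher-transform.
r = np.corrcoef(beta1, beta2)[0, 1]
z = np.arctanh(r)
```

The z values, one per subject, would then go into a group analysis exactly as before.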

One way of studying differences between groups of subjects is to use some trait as a covariate in a between-subjects analysis. So maybe a score on some test, some personality trait, or just some general trait of the subjects in question. These types of individual-differences analyses allow researchers to study relationships between brain and behavior.

Here's an example of a study of individual differences. For each subject we have a contrast image, and we also have a seed value. Let's say that this is a score on a test that the subjects performed, like their resiliency or distress or something like that. So for each subject we have a seed value: X1 is the score for subject 1, X2 is the score for subject 2, all the way to XN, which is the score for subject N. Now what we do is take the contrast data from a single voxel across all the subjects, and make a scatter plot with the seed value on one axis and the contrast image value from that voxel on the other axis. We can calculate the correlation between those values and place it at that voxel in the group analysis. What we'll get is the correlation between the seed value and the contrast value at each voxel in the brain.
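As a sketch of this between-subjects version, again assuming NumPy, with invented subject scores and contrast values (the first 50 voxels are deliberately built to track the behavioral score):

```python
import numpy as np

rng = np.random.default_rng(3)

n_sub, n_vox = 30, 500
# Hypothetical behavioral score per subject (the "seed value",
# e.g. a resiliency or distress score).
scores = rng.standard_normal(n_sub)

# One contrast value per subject per voxel; the first 50 voxels are
# constructed to track the behavioral score across subjects.
contrasts = rng.standard_normal((n_sub, n_vox))
contrasts[:, :50] += scores[:, None]

# Correlate score with contrast value across subjects, voxel by voxel.
r_map = np.array(
    [np.corrcoef(scores, contrasts[:, v])[0, 1] for v in range(n_vox)]
)
```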

So far, we've just been talking about correlation. Often we're also interested in something called partial correlation: the correlation between two regions after the effect of all other regions has been removed. This is important because it helps protect against so-called illusory correlations between regions. For example, we might have two regions, A and C, which are uncorrelated after controlling for B. However, if we look at the simple correlations, A, B, and C will all be highly correlated with each other, even though A and C are only related to each other through B. That's what we want to guard against by computing partial correlations.

For example, let A be the life expectancy in a country and C be the number of TVs per capita in that country. A and C are highly correlated with each other, but they are not really directly linked; they are linked to each other by a third variable B, which might be the quality of life in the country. Once we control for quality of life, the relationship between the number of TVs and life expectancy might disappear. That's what partial correlation seeks to do. In the brain setting, we might think that the relationship between two regions A and C might disappear once we control for a region B. That can be important when we're trying to make networks and graphs of the brain, because we want to guard against these illusory correlations.
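A minimal sketch of the idea in NumPy, with three simulated variables where B drives both A and C (all parameters invented): the simple correlation between A and C is large, but the partial correlation given B, computed by regressing B out of both and correlating the residuals, is near zero.

```python
import numpy as np

rng = np.random.default_rng(4)

n = 500
b = rng.standard_normal(n)             # common driver (region B)
a = b + 0.5 * rng.standard_normal(n)   # region A depends on B
c = b + 0.5 * rng.standard_normal(n)   # region C depends on B (not on A)

# Simple correlation: A and C look strongly related.
r_ac = np.corrcoef(a, c)[0, 1]

def residualize(y, x):
    # Remove the least-squares projection of y onto [1, x].
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# Partial correlation of A and C given B.
r_ac_given_b = np.corrcoef(residualize(a, b), residualize(c, b))[0, 1]
```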

For multivariate normal data, there exists a very interesting duality between the inverse covariance matrix, or precision matrix, and the graph representing the relationships between the regions. Here, conditional independence between variables, or regions, corresponds to zero entries in the precision matrix. We can use techniques such as the graphical lasso, or GLASSO, to estimate sparse precision matrices and graphs. For example, going back to the example from the previous slide, we had relationships between A and B and between B and C, but A and C were conditionally independent of each other. If we had a graph like this, the corresponding inverse covariance matrix, or precision matrix, would have zeros in the entries that correspond to the link between A and C and between C and A, as we see in the example here. This is very nice because if we can estimate sigma inverse, the inverse covariance matrix, we can find elements that are close to 0, prune those edges off, and get a sparser representation of the brain network.
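To illustrate the duality, here is a plain-NumPy sketch rather than a full GLASSO fit (scikit-learn's `GraphicalLasso` would give a properly sparse estimate; with enough simulated data the zero pattern is already visible in a directly inverted sample covariance). The A - B - C chain below is the same invented example: B drives both A and C.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulate the A - B - C chain: B drives both A and C, so A and C are
# conditionally independent given B (all parameters invented).
n = 20000
b = rng.standard_normal(n)
a = b + 0.5 * rng.standard_normal(n)
c = b + 0.5 * rng.standard_normal(n)
data = np.column_stack([a, b, c])

# Sample covariance and its inverse, the precision matrix.
cov = np.cov(data, rowvar=False)
prec = np.linalg.inv(cov)

# The marginal covariance between A and C is large, but the (A, C)
# entry of the precision matrix is near zero: that edge can be pruned.
```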

Okay, that's the end of this module. We started talking about functional connectivity, and we covered seed analysis and inverse covariance methods. In the next module we'll continue talking about functional connectivity, and we will talk about multivariate decomposition techniques. Okay, I'll see you then, bye.
