Dear Pierre,
>Y is the time series of a region of interest and we search for brain
>voxels, the activity of which regresses with the activity of the region of
>interest. For the region of interest we apply the linear model:
>Y=X*BETA+Xc*BETAc+error (X is the part of the matrix related to the task
>that is studied in the experiment and Xc is related to the confounds)
>We can then search for the voxels which regress with:
>a) Y
>b) X*BETA
>c) X*BETA+error
>d) error
>
>I am asking for the meaning of the different regressions. I think (but I am
>not sure of it) that:
>a) is not generally relevant because confound effects are not removed.
>However, if we are interested in correlation between brain areas (as an
>example for structural models), the cause of a variation of the activity is
>not the main point: whatever the cause of variation in region of interest,
>what voxels are regressing with region of interest activity?
>b) is of interest when we search for voxels whose activity is correlated
>with the part of activity (of the region of interest) that is related to
>the task (represented by X)
>d) is of interest when we search for voxels whose activity is correlated
>with the part of activity that is not related to the task. If error is
>random noise, we will not find voxels whose activity regresses (except if
>the noise is the same in different areas of the brain). If error is not
>only random noise but also the effect of a factor (that is not taken into
>account in the design matrix) we can find voxels that regress with the
>error of the region of interest. I have the feeling that this can be a good
>approach to search for relations between brain areas that are "task
>independent".
>c) is a mixture of b) and d).
I think you are absolutely right in your analysis. By looking for voxels (Yt)
that regress significantly on Y you are assuming a very simple model of
effective connectivity
Yt = Y*B + e
where B = dYt/dY is a simple measure of 1st-order connectivity between Y
and Yt.
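As a minimal sketch of this first-order model (synthetic data and NumPy's pinv, standing in for MATLAB's, are my assumptions here, not part of the original exchange):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical seed-region time series Y and a target voxel Yt
Y = rng.standard_normal((n, 1))
B_true = 0.8
Yt = Y * B_true + 0.1 * rng.standard_normal((n, 1))

# Least-squares estimate of the 1st-order coupling B = dYt/dY
B_hat = (np.linalg.pinv(Y) @ Yt).item()
```

With low noise, B_hat should recover something close to the simulated coupling.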
Expanding Y decomposes the coupling into a number of components.
For example the influence of Y on Yt that depends only on the experimental
changes in X is given by B in
Yt = (X*pinv(X)*Y)*B + e
which is exactly the same as the [scaled] direct influence of X. Here 'error'
in Y has been removed and this is similar to your (b). Note that if Y was in V1
and Yt was in V4 then visual input must go through V1 and this might be an
interesting analysis. Similarly coupling that is independent of X is given by
Yt = (Y - X*pinv(X)*Y)*B + e = error*B + e
which is like (d) assuming Xc = 0.
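The two projections above can be checked numerically; the sketch below (synthetic data, my own illustration) splits Y into the task-dependent part X*pinv(X)*Y and its orthogonal residual:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 3

# Hypothetical task design X and a seed time series with Xc = 0
X = rng.standard_normal((n, p))
BETA = rng.standard_normal((p, 1))
Y = X @ BETA + 0.5 * rng.standard_normal((n, 1))

Y_task = X @ np.linalg.pinv(X) @ Y   # task-dependent part, X*pinv(X)*Y
Y_resid = Y - Y_task                 # task-independent part, the 'error'
```

The two parts sum to Y, and the residual is orthogonal to the columns of X, which is why regressing Yt on Y_resid isolates coupling independent of the task.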
I think the most useful analysis is like (c). This does not differentiate
between coupling induced by X or error but discounts any confounds
Yt = (X*BETA + error)*B + Xc*Bc + e
= (Y - Xc*pinv(Xc)*Y)*B + Xc*Bc + e
= Y*B + Xc*Bc + e
Notice that, by including the confounds in the second regression analysis, you
do not have to worry about adjusting Y.
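To see why the adjustment is unnecessary, the following sketch (synthetic data, my own illustration) compares the coefficient on Y from the joint regression on [Y, Xc] with the coefficient on the confound-adjusted Y - Xc*pinv(Xc)*Y, with Xc kept in the model:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200

# Hypothetical confounds Xc, a confounded seed Y, and a coupled target Yt
Xc = rng.standard_normal((n, 2))
Y = rng.standard_normal((n, 1)) + Xc @ np.array([[0.5], [-0.3]])
Yt = 0.7 * Y + Xc @ np.array([[0.2], [0.1]]) + 0.1 * rng.standard_normal((n, 1))

# (i) joint regression of Yt on [Y, Xc], taking the coefficient on Y
B_joint = (np.linalg.pinv(np.hstack([Y, Xc])) @ Yt)[0, 0]

# (ii) adjust Y for the confounds first, then regress with Xc included again
Y_adj = Y - Xc @ np.linalg.pinv(Xc) @ Y
B_adj = (np.linalg.pinv(np.hstack([Y_adj, Xc])) @ Yt)[0, 0]
```

Because Y differs from Y_adj only by a linear combination of the Xc columns, the two estimates of B are identical to numerical precision.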
I hope this helps - Karl