Dear Marilu,
>The thing that I don't understand is what happens if the covariates are
>correlated. For instance, if I enter the picture and word version of a
>semantic test as two separate covariates and the two are correlated to
>each other, what happens when I perform a t-contrast looking at 1 0 and
>1 -1. SPM doesn't seem to protest and tell me that Betas are not
>uniquely specified, but I am not sure what I am looking at. In fact, if
>I enter each covariate separately in two different analyses, the results
>are very different.
Inference in the context of correlated regressors is a generic issue.
The key thing to remember is that a test for each regressor alone (e.g. [1 0])
reveals effects that cannot be explained by the rest of the design matrix,
which includes the other regressor. This means the effects are unique to that
regressor, and you will not see effects that are common to both. There are
two approaches to this situation.
1) Orthogonalise your regressors before entering them into the design matrix.
This allows you to assign the shared effects, in a mutually exclusive way, to
one of the two regressors. However, the interpretation changes because the
regressors themselves have changed.
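A minimal numpy sketch of this serial-orthogonalisation (Gram-Schmidt) step; the "picture" and "word" scores here are simulated stand-ins, not real data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x1 = rng.standard_normal(n)                    # picture-version scores (simulated)
x2 = 0.9 * x1 + 0.1 * rng.standard_normal(n)   # word-version scores, correlated with x1

# Orthogonalise x2 with respect to x1: keep only the part of x2
# that x1 cannot explain (one Gram-Schmidt projection step).
x2_orth = x2 - x1 * (x1 @ x2) / (x1 @ x1)

# x2_orth is now uncorrelated with x1; all variance shared between the
# two scores has been assigned to x1, which is why the interpretation
# of the regressors changes.
```

Note that the order of orthogonalisation is a modelling choice: here x1 keeps all the common variance, so a [1 0] contrast now tests shared plus unique picture effects, while x2_orth tests only what the word version adds.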
2) Report the F-test over both regressors and then consider the unique
contribution of each with separate t-tests. It may be that the SPM{F} shows a
very significant effect but neither of the SPM{T}s does. This can happen when
the regressors are highly correlated and there is no unique component.
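This F-versus-t behaviour can be reproduced on simulated data. A sketch with ordinary least squares in numpy (the near-collinear regressors and all variable names are illustrative assumptions, not SPM code):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 60
x1 = rng.standard_normal(n)
x2 = x1 + 0.05 * rng.standard_normal(n)   # nearly collinear with x1
y = x1 + x2 + rng.standard_normal(n)      # the signal lives in the shared component

# Full model: both regressors plus a constant term
X = np.column_stack([x1, x2, np.ones(n)])
beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
dof = n - X.shape[1]
sigma2 = resid @ resid / dof
cov = sigma2 * np.linalg.inv(X.T @ X)

# Separate t-statistics for contrasts [1 0 0] and [0 1 0]:
# each tests the *unique* contribution of one regressor, and the
# near-collinearity inflates both standard errors.
t1 = beta[0] / np.sqrt(cov[0, 0])
t2 = beta[1] / np.sqrt(cov[1, 1])

# F-statistic for both regressors jointly, against the
# intercept-only reduced model.
resid0 = y - y.mean()
rss0, rss = resid0 @ resid0, resid @ resid
F = ((rss0 - rss) / 2) / sigma2
```

With data like this, F is typically very large while t1 and t2 are both modest, because almost none of the explained variance is unique to either regressor.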
I hope this helps - Karl