Rafael
> Dear SPM users
> How does the GLM "decorrelate" dependent regressors?
>
it doesn't - following the standard equations Y = XB + E and B = inv(X'X)X'Y, you
end up with beta weights which reflect the unique part of the variance of each
regressor, i.e. if two regressors have shared variance, that shared part is not
attributed to either of them (this is the standard variance partitioning of
ANOVAs etc. in SPSS, SAS, Statistica, etc.)
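a quick numpy sketch of this (toy data, my own illustration, not SPM code): with two correlated regressors, the beta for one equals the slope of the data on the part of that regressor which is orthogonal to the other columns, i.e. its unique part (the Frisch-Waugh result):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# two correlated regressors (x2 shares variance with x1)
x1 = rng.standard_normal(n)
x2 = 0.7 * x1 + 0.3 * rng.standard_normal(n)
X = np.column_stack([x1, x2, np.ones(n)])   # design matrix with constant

# data driven by x1 only, plus noise
y = 2.0 * x1 + rng.standard_normal(n)

# standard GLM estimate: B = pinv(X) @ y  (= inv(X'X) X'y here)
B = np.linalg.pinv(X) @ y

# Frisch-Waugh: the beta for x1 equals the slope of y on the part of
# x1 that is orthogonal to the other columns, i.e. its unique variance
Z = np.column_stack([x2, np.ones(n)])
r1 = x1 - Z @ (np.linalg.pinv(Z) @ x1)      # unique part of x1
beta1_unique = (r1 @ y) / (r1 @ r1)

print(B[0], beta1_unique)                   # identical up to rounding
```

so nothing is "decorrelated" - the algebra itself assigns only the unique variance to each beta.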
> "Thus, if two covariates are correlated, testing for the significance of the
> parameter associated with one will only test for the part that is not
> present in the second covariate". SPM manual spm_conman.m, Contrasts:
> Non-orthogonal designs, page 7. See also Andrade et al., 1999, Neuroimage.
>
> If two regressors share common variance, testing for one shows only the part
> that is unique to this regressor. Is this true for both of them, and how is
> this achieved? A decorrelation like in spm_orth.m is not used (for
> decorrelating parameters within one condition, i.e. parametric modulations),
> because the order of regressors in a model does not matter, while the vector
> order in a decorrelation procedure (i.e. spm_orth.m) does matter. Maybe
> correlated regressors won't be changed, but the shared variance will be taken
> into account when testing, i.e. SPM{T}, SPM{F}? But how?
>
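on the ordering point in your question: serial orthogonalisation is indeed order-dependent - a minimal numpy sketch (my own toy illustration, not SPM code) of Gram-Schmidt-style recursive residualisation, roughly the idea behind spm_orth:

```python
import numpy as np

def serial_orth(X):
    """recursively orthogonalise each column of X against the earlier
    ones (Gram-Schmidt-style residualisation; a sketch of the idea,
    not the actual spm_orth implementation)"""
    X = np.array(X, dtype=float, copy=True)
    for j in range(1, X.shape[1]):
        prev = X[:, :j]
        X[:, j] -= prev @ (np.linalg.pinv(prev) @ X[:, j])
    return X

rng = np.random.default_rng(2)
a = rng.standard_normal(50)
b = 0.6 * a + 0.4 * rng.standard_normal(50)

Xab = serial_orth(np.column_stack([a, b]))   # a kept, b residualised on a
Xba = serial_orth(np.column_stack([b, a]))   # b kept, a residualised on b

# both results are orthogonal designs ...
print(abs(Xab[:, 0] @ Xab[:, 1]) < 1e-8)     # True
# ... but they are different designs: order matters
print(np.allclose(Xab[:, 0], Xba[:, 1]))     # False
```

that is why spm_orth-style decorrelation is only used where an ordering is meaningful (parametric modulations), not across arbitrary regressors in a model.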
the shared variance is not attributed to either regressor, and thus a T or F,
being effect / error, is affected by the shared variance: correlated columns
inflate the inv(X'X) part of the variance of the betas, so the T or F for each
regressor drops
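a toy numpy illustration (my own sketch, not SPM code): the same effect of interest gives a much smaller T when a correlated covariate is in the design, because the inv(X'X) term in the beta variance blows up:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

def t_first_beta(x1, x2, y):
    """t statistic (effect / error) for the beta of x1
    in the model y = [x1 x2 constant] B + e"""
    X = np.column_stack([x1, x2, np.ones(len(y))])
    B = np.linalg.pinv(X) @ y
    e = y - X @ B
    sigma2 = (e @ e) / (len(y) - X.shape[1])    # residual variance
    covB = sigma2 * np.linalg.inv(X.T @ X)      # variance of the betas
    return B[0] / np.sqrt(covB[0, 0])

x1 = rng.standard_normal(n)
y = x1 + rng.standard_normal(n)                 # effect carried by x1

x2_orth = rng.standard_normal(n)                      # uncorrelated covariate
x2_corr = 0.9 * x1 + 0.1 * rng.standard_normal(n)     # highly correlated one

t_orth = t_first_beta(x1, x2_orth, y)
t_corr = t_first_beta(x1, x2_corr, y)
print(t_orth, t_corr)   # t drops sharply with the correlated covariate
```

same data, same effect - only the correlation in the design changes, and the T for the regressor of interest collapses.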
more info on the GLM (without the fMRI) on my website:
http://www.sbirc.ed.ac.uk/cyril
hope this helps
Cyril
--
Dr Cyril Pernet,
fMRI Lead Researcher SINAPSE
SFC Brain Imaging Research Center
Division of Clinical Neurosciences
University of Edinburgh
Western General Hospital
Crewe Road
Edinburgh
EH4 2XU
Scotland, UK
[log in to unmask]
tel: +44(0)1315373661
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.