At 14:49 10/04/2002 +0200, Christian Büchel wrote in reply to Ian Dobbins:
>Dear Ian,
>
> > I've a question about the use of correlated parametric regressors.
> > If one has a covariate and two parametric regressors of that covariate,
> > what is the effect of a correlation between the parametric measures?
> > More specifically, I assume that, as with standard regression, if these
> > additional parametric variables are themselves correlated, there is no
> > magic bullet to segregate their effects.
>Exactly
>
> > That is, if both are entered into the
> > regression simultaneously with the covariate, their linear dependence
> > will cause them to in essence "null" one another out with respect to any
> > predictive value of either in isolation. In contrast, if they are entered
> > hierarchically, then whichever is first will capture the lion's share of
> > the variance, with the second appearing to have no predictive value.
>
> > Is this the correct interpretation and, if so, does SPM enter the
> > parametrics simultaneously or hierarchically in the design matrix?
The direct answer to Ian's question is that SPM enters the parametrics
simultaneously. This might appear to mean that you cannot assess the
predictive value of each in isolation. However, the standard t-statistics
and p-values for the betas of each parametric do tell you whether that
parametric has predictive value over and above what can be accounted for
by the other regressors. So there is 'relative' rather than 'absolute'
information on predictive values.
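To make the 'relative' point concrete, here is a small illustrative
sketch (not SPM code; it assumes only numpy and statsmodels, and the
data are made up). Two correlated parametrics are entered
simultaneously, and each beta's t-statistic tests its contribution over
and above the other:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 200

    # Two correlated parametric measures (hypothetical data)
    u = rng.standard_normal(n)
    v = 0.8 * u + 0.6 * rng.standard_normal(n)   # corr(u, v) ~ 0.8

    # Response driven by u only
    y = 2.0 * u + rng.standard_normal(n)

    # Enter both parametrics simultaneously, plus an intercept
    X = sm.add_constant(np.column_stack([u, v]))
    fit = sm.OLS(y, X).fit()

    # The t and p for each beta test its predictive value over and
    # above the other regressor, not its value in isolation
    print(fit.tvalues)   # const, u, v
    print(fit.pvalues)

The more strongly u and v are correlated, the larger the standard
errors on both betas, which is the "nulling" Ian describes.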
>If you use the "parametric modulation" option in SPM, it models both (or
>more, depending on the expansion) regressors simultaneously, but the
>parametric modulation (i.e. the interaction) should be orthogonal with
>respect to the main effect, as the modulators are mean-corrected before
>convolution.
I thought Ian's question related to additive models, i.e. without
additional multiplicative regressors.
If the parametrics are *very* highly correlated, it might be sensible to
think of using partial orthogonalisation as a precursor. Here you could,
say, regress parametric V on parametric U and save the residuals as
V.adjusted.for.U. Then use columns U and V.adjusted.for.U in the design
matrix. This can avoid the numerical stability problems associated with
collinearity, and it will help you assess whether V has predictive value
over and above what it shares with U. Then do the same analysis with
columns V and U.adjusted.for.V.
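A minimal sketch of that orthogonalisation step, assuming plain numpy
(the variable names mirror Ian's column labels; the data are again made
up for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    u = rng.standard_normal(200)
    v = 0.8 * u + 0.6 * rng.standard_normal(200)   # highly correlated parametrics

    def orthogonalise(a, b):
        # Residuals of a after regressing it on b (with an intercept)
        B = np.column_stack([np.ones_like(b), b])
        beta, *_ = np.linalg.lstsq(B, a, rcond=None)
        return a - B @ beta

    v_adjusted_for_u = orthogonalise(v, u)   # design columns: u, v_adjusted_for_u
    u_adjusted_for_v = orthogonalise(u, v)   # second analysis: v, u_adjusted_for_v

    # Sanity check: the adjusted column is (numerically) orthogonal
    # to the regressor it was adjusted for
    print(np.corrcoef(u, v_adjusted_for_u)[0, 1])   # ~ 0

Note that the two analyses answer different questions: the beta for
V.adjusted.for.U concerns variance unique to V, and vice versa.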
Ian Nimmo-Smith
MRC Cognition and Brain Sciences Unit
15 Chaucer Road
Cambridge UK
CB2 2EF
Tel +44 (0) 1223 355294 x 710
Fax +44 (0) 1223 359062