Dear Steven and Mike,
> > Steven wrote:
> >
> > * I am using SPM96 to perform a conditions and covariates analysis in
> > a PET FDG study. I have two sessions per subject, one scan per session,
> > and a single task performance measure for each scan. The first session
> > is the control condition, the second session is the active condition.
> >
> > * I would like to test for correlations between the change in covariate
> > scores across the two scans/sessions and the change in the PET images.
> > I have followed the recommendation of Andrew Holmes of creating a
> > mean-centered covariate difference score and entered the covariate as
> > -1*Difference/2 and +1*Difference/2.
>
> Mike responded
>
> I concur with Klaus Ebmeier's exposition in reply to this, and the
> bottom line on the confusion surrounding the whole issue of interaction
> analyses. Some insight from the authors would be welcome, particularly
> as SPM99 now explicitly allows interaction modelling.
>
> In the example given above we have adopted 2 different approaches
> depending on the version of SPM we are using:
>
> 1. Our approach using SPM94/96 has been to take the difference between
> the two "scores" then enter them as a single covariate multiplied by a
> sign-swap matrix [-1, +1...]. Interactions are sought using [0 0 -1]
> and [0 0 +1] contrasts.
>
> 2. In SPM99 we have entered the scores from the two conditions as a
> single matrix in their native format, i.e. [score1_scan1,
> score2_scan1 ... score1_scan_n, score2_scan_n]. Selecting the interaction
> x condition option appears to split this single covariate vector into
> two separate vectors, with a 0 entered against scan 1 for the condition 2
> score and a 0 against scan 2 for the condition 1 score. We then sought
> interactions using [0 0 -1 -1] and [0 0 1 1].
>
> The results for these interactions appear to be identical to those
> derived using approach 1 in SPM96 i.e. [0 0 -1] is equivalent to [0 0
> -1 -1].
Yes, they should be. To clarify, consider that you have three effects
in your model: an effect of subject, an effect of condition and an
effect of covariate. Both the analyses above are simply assessing the
main effect of covariate having removed condition and subject effects
(this removal is implicit in the block partition of the design matrix,
and is made explicit by removing the mean from each pair of scores to
give you the deviations from that mean).
In both analyses you are modelling the condition effect so any
condition-specific changes in the score [covariate] do not contribute
to the main effect of covariate (you should ensure this is what you
want).
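If it helps, here is a toy numpy sketch of this point (not SPM code;
the number of subjects, the scores and the slope are all made up).
It shows that fitting the single sign-swapped, subject-centred covariate
alongside subject-block and condition columns recovers exactly the same
slope as directly regressing difference "images" on difference scores:

```python
import numpy as np

rng = np.random.default_rng(0)
n_subj = 5
scores = rng.normal(size=(n_subj, 2))      # one score per subject x condition
beta_true = 0.7                            # assumed activity-on-score slope

# Simulated "activity" = subject effect + condition effect + slope * score
subj_eff = rng.normal(size=n_subj)
cond_eff = np.array([0.0, 1.0])
y = (subj_eff[:, None] + cond_eff[None, :] + beta_true * scores).ravel()

# Design matrix: subject blocks, condition columns, sign-swapped covariate
blocks = np.kron(np.eye(n_subj), np.ones((2, 1)))       # subject indicators
conds = np.tile(np.eye(2), (n_subj, 1))                 # condition indicators
diff = scores[:, 1] - scores[:, 0]
signswap = np.repeat(diff / 2, 2) * np.tile([-1.0, 1.0], n_subj)  # -D/2, +D/2
X = np.hstack([blocks, conds, signswap[:, None]])

slope_full = np.linalg.lstsq(X, y, rcond=None)[0][-1]

# Direct approach: regress (scan2 - scan1) on the score difference
ydiff = y.reshape(n_subj, 2) @ np.array([-1.0, 1.0])
slope_diff = np.linalg.lstsq(np.column_stack([np.ones(n_subj), diff]),
                             ydiff, rcond=None)[0][1]
print(slope_full, slope_diff)              # the two slopes agree
```

Removing the subject mean from the covariate (the sign-swap recipe) does
within the design matrix exactly what taking differences does by hand.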
Put simply, the only difference between looking at the correlation
between (i) brain activity and the covariate, and (ii) differences in
brain activity and differences in the covariate, is that the mean
effect of each subject is removed from the [partial] correlation. This
is important because you are not looking at an interaction. An
interaction is not modelled in your SPM96 analysis. In the SPM99
analysis it is, implicitly, by the 'splitting' of the covariate effects
into condition-specific columns. A test of the condition x covariate
interaction would be given by a contrast [0 0 1 -1] and would be
interpreted as a difference in the regression slope of activity on
covariate under condition 1 relative to condition 2 (having discounted
subject effects).
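A small numpy sketch of that interaction test (again not SPM code; the
slopes b1 and b2 and the data are invented). The covariate is split into
condition-specific columns, mean-corrected over conditions, and the
[0 ... 0 1 -1] contrast on those two columns recovers the difference in
regression slopes:

```python
import numpy as np

rng = np.random.default_rng(1)
n_subj = 6
s = rng.normal(size=(n_subj, 2))        # scores, subject x condition
b1, b2 = 0.5, 1.2                       # different slopes per condition

subj_eff = rng.normal(size=n_subj)
y = np.empty((n_subj, 2))
y[:, 0] = subj_eff + b1 * s[:, 0]       # condition 1
y[:, 1] = subj_eff + 1.0 + b2 * s[:, 1] # condition 2 (+ main effect)
y = y.ravel()

blocks = np.kron(np.eye(n_subj), np.ones((2, 1)))
conds = np.tile(np.eye(2), (n_subj, 1))
# Condition-specific covariate columns, zero where the other condition holds
c1 = np.zeros(2 * n_subj); c1[0::2] = s[:, 0] - s[:, 0].mean()
c2 = np.zeros(2 * n_subj); c2[1::2] = s[:, 1] - s[:, 1].mean()
X = np.hstack([blocks, conds, c1[:, None], c2[:, None]])

beta = np.linalg.lstsq(X, y, rcond=None)[0]
contrast = beta[-2] - beta[-1]          # i.e. the [0 ... 0 1 -1] contrast
print(contrast, b1 - b2)                # contrast equals the slope difference
```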
> I later wrote
>
> Modelling subject or condition-specific effects practically involves
> modelling the (mean centered) covariate entered for each subject or
> condition in its own column. Conceptually this is equivalent to
> modelling subject or condition by covariate interactions. This gives a
> more comprehensive model (in which interaction effects, or the
> differences in regression slopes of rCBF on the covariate, can be
> assessed) at the expense of degrees of freedom used in error variance
> estimation.
>
> and Steven responded
>
> Thank you for the explanation. The fact that a condition-specific fit
> provides a more comprehensive fit by modeling the interaction may
> explain why this approach gives equivalent results to a direct
> correlation of covariate difference scores with difference images,
> whereas not using a condition-specific fit with a single mean-centered
> covariate difference score yields no significant results. In both
> approaches the covariate is prepared as recommended by Andrew Holmes
> (in a posting of 9/25/98) and entered as a single covariate with
> alternating signs.
Remember that the alternating signs are applied to unsigned differences
from the subject-specific mean, so that the resulting covariate is simply
the original one with subject-specific effects removed.
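That identity is easy to check numerically (toy scores, not real data):
the -Difference/2, +Difference/2 covariate of the sign-swap recipe is
exactly the raw scores with each subject's mean subtracted:

```python
import numpy as np

scores = np.array([[3.0, 5.0],    # subject 1: scan 1, scan 2
                   [1.0, 4.0],    # subject 2
                   [6.0, 2.0]])   # subject 3
diff = scores[:, 1] - scores[:, 0]

# The recipe: enter -Difference/2 at scan 1 and +Difference/2 at scan 2
signswap = np.column_stack([-diff / 2, diff / 2])

# Equivalent: remove the subject-specific mean from the raw scores
centred = scores - scores.mean(axis=1, keepdims=True)
print(np.allclose(signswap, centred))   # True
```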
> However, I wish to point out that the SPM96 results do not show a
> difference in the error d.f. between these two approaches in a study
> with 15 subjects, 1 condition/scan, 2 scans/subject, 1 covariate of
> interest collected for each scan (entered as a single covariate),
> proportional normalization
>
> Condition Specific Fit: 2 conditions + 2 covariates + 15 blocks + 0
> confound = 19 parameters, having 17 d.f., giving 13 residual df.
>
> No Fit: 2 conditions + 1 covariate + 15 blocks + 0 confound = 18
> parameters, having 17 d.f., giving 13 residual df.
I think the problem here is that your condition-specific covariates are
mean-corrected per subject, whereas they should be mean-corrected over
conditions.
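A numpy sketch of why the d.f. come out identical (15 subjects and random
scores assumed for illustration). Condition-specific covariate columns
that are mean-corrected per subject are collinear with the subject blocks
(their difference is a combination of block columns), so they add only one
to the rank of the design matrix, just like the single covariate; columns
mean-corrected over conditions add two:

```python
import numpy as np

rng = np.random.default_rng(2)
n_subj = 15
s = rng.normal(size=(n_subj, 2))                 # scores, subject x condition

blocks = np.kron(np.eye(n_subj), np.ones((2, 1)))
conds = np.tile(np.eye(2), (n_subj, 1))
base = np.hstack([blocks, conds])                # blocks + conditions

def split(cov):
    """Place a subject x condition covariate into two condition columns."""
    c1 = np.zeros(2 * n_subj); c1[0::2] = cov[:, 0]
    c2 = np.zeros(2 * n_subj); c2[1::2] = cov[:, 1]
    return np.column_stack([c1, c2])

per_subject = s - s.mean(axis=1, keepdims=True)  # centred within subject
per_cond = s - s.mean(axis=0, keepdims=True)     # centred over conditions

r0 = np.linalg.matrix_rank(base)
r1 = np.linalg.matrix_rank(np.hstack([base, split(per_subject)]))
r2 = np.linalg.matrix_rank(np.hstack([base, split(per_cond)]))
print(r0, r1, r2)                                # 16 17 18
```

With per-subject centring the design has rank 17 (hence 13 residual df on
30 scans), the same as the single-covariate model; centring over
conditions gives rank 18, i.e. the extra parameter you expected.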
I hope this helps - Karl