Dear Kota,
> I am now confused by the different results of the following two types of
> analyses:
>
> (a) within-subject analysis performed on the pooled data from 10 subjects,
> setting the contrasts [-1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0],
> [0 0 -1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0], and so on
>
>
> (b) a single-subject analysis performed on the data from the
> corresponding one subject,
> setting the contrast [-1 1]
>
> As far as I have experienced, activation revealed by (a) is smaller
> than that revealed by (b). In an extreme case, significant activation
> observed in analysis (b) completely disappears when I switch to
> analysis (a) even though the same significance level (and the same
> subject of course) is chosen.
>
> Here, my questions: (1) Does this indicate the inhomogeneity of error
> variance across those 10 subjects?
It could do. This is a good point. If some subjects had very high
error variance, then this would render their subject-specific contrasts
less sensitive.
> (2) If so, is it inappropriate to pool those data together and perform
> multi-subject analysis? Or can I still believe that GLM is robust
> against such violation of homogeneity?
The GLM is robust to violations of homogeneity, but it may not be
sufficiently robust in your case. I think this reduces to an empirical
question. I would compare the subject-specific SPMs using the separate
and combined models. If you are right, some subjects' SPMs will show
too much activation and others too little when the two sets of SPMs
are compared. One simple way of checking, anecdotally, for
heteroscedasticity of this sort is to look at the adjusted data (using
spm_plot in results). Subjects with high levels of error variance
should be apparent on visual inspection. If you have not done so
already, I would use proportional scaling for global normalization.
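To see why pooling can hurt the low-noise subjects, here is a toy
simulation (not SPM code; the design, noise levels, and effect size are
all hypothetical). It fits each subject separately and then re-tests the
same subject-specific [-1 1] contrasts against a single error variance
pooled over all subjects, as the combined model effectively does:

```python
# Toy simulation (NOT SPM code): why a pooled multi-subject model can
# lose sensitivity for individual subjects when error variance is
# inhomogeneous across subjects. All numbers here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_subj, n_scans = 10, 40
sigmas = np.linspace(0.5, 5.0, n_subj)   # assumed unequal noise levels
effect = 1.0                             # true condition difference
x = np.tile([0, 1], n_scans // 2)        # two alternating conditions

def fit(y):
    """One-subject GLM with a regressor per condition; returns betas,
    residual sum of squares, and the design matrix."""
    X = np.column_stack([1 - x, x]).astype(float)
    beta, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta, rss[0], X

ys = [effect * x + s * rng.standard_normal(n_scans) for s in sigmas]
fits = [fit(y) for y in ys]
c = np.array([-1.0, 1.0])                # the [-1 1] contrast

# (b) separate models: each subject supplies its own error variance
t_sep = [c @ beta / np.sqrt(rss / (n_scans - 2) *
                            c @ np.linalg.inv(X.T @ X) @ c)
         for beta, rss, X in fits]

# (a) combined model: one error variance pooled over all subjects, so
# a low-noise subject is tested against an inflated variance estimate
s2_pool = sum(rss for _, rss, _ in fits) / (n_subj * (n_scans - 2))
t_pool = [c @ beta / np.sqrt(s2_pool * c @ np.linalg.inv(X.T @ X) @ c)
          for beta, _, X in fits]

print("separate-model t per subject:", np.round(t_sep, 2))
print("pooled-model   t per subject:", np.round(t_pool, 2))
```

With these settings, the quietest subjects show much larger t values
under their own models than under the pooled variance, which mirrors the
loss of significance you describe when switching from (b) to (a).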
With best wishes - Karl
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%