Dear Kota and Karl,
> Dear Kota,
> > I am now confused by the different results of the following two types of
> > analyses:
> > (a) within-subject analysis performed on the pooled data from 10 subjects,
> > setting the contrast [-1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
> > [0 0 -1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] ...and so on
> > (b) a single-subject analysis performed on the data from the
> > corresponding one subject,
> > setting the contrast [-1 1]
> > As far as I have experienced, activation revealed by (a) is smaller
> > than that revealed by (b). In an extreme case, significant activation
> > observed in analysis (b) completely disappears when I switch to
> > analysis (a) even though the same significance level (and the same
> > subject of course) is chosen.
> > Here, my questions: (1) Does this indicate the inhomogeneity of error
> > variance across those 10 subjects?
> It could do. This is a good point. If some subjects had very high
> error variance then this would render some subject-specific contrasts
> less sensitive.
But as Kota presents it, it's a case of seeing more activation (higher
sensitivity?) generally in the separate subject-specific analyses than
in the group analysis, indicating lower error variance for individual
subjects than in the combined subjects analysis.
As I understand the way SPM goes about its GLM business, Kota's single
subject analyses use purely within-subject error terms (essentially the
replications-within-conditions variance). The combined subjects
analysis uses a mixed within- and between-subjects error term
(essentially it pools all the above single-subject analysis error terms
and adds in the subjects-by-conditions interaction variance).
There are therefore two potential types of inhomogeneity: (1) different
within-subject error terms, which is what Karl is addressing, and (2)
different (typically larger) subject-by-conditions interaction variance
than within-subject variance. Type (1) inhomogeneity will mean greater
sensitivity from some subjects than from others; type (2) can mean less
sensitivity from the group analysis than from individual subjects.
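For what it's worth, the two inhomogeneities can be seen in a toy
simulation (all numbers made up; this is a sketch of the variance
structure, not SPM code): each subject gets its own within-subject
error SD, and a subjects-by-conditions interaction makes the spread of
per-subject effects larger than any subject's own standard error.

```python
import numpy as np

# Toy illustration of the two inhomogeneities (assumed numbers):
# (1) unequal within-subject error terms across subjects;
# (2) subjects-by-conditions interaction variance exceeding them.
rng = np.random.default_rng(0)
n_subj, n_rep = 10, 20
within_sd = np.linspace(0.5, 2.0, n_subj)   # type (1): unequal per subject
interaction_sd = 3.0                        # type (2): large interaction
d_true = 1.0 + rng.normal(0, interaction_sd, n_subj)
y = d_true[:, None] + rng.normal(0, 1.0, (n_subj, n_rep)) * within_sd[:, None]

# subject-specific error terms, as used by the separate analyses
se_within = y.std(1, ddof=1) / np.sqrt(n_rep)
# between-subject spread of the estimated effects
sd_between = y.mean(1).std(ddof=1)
print(se_within.round(3))
print(sd_between)
```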
> > (2) If so, is it inappropriate to pool those data together and perform
> > multi-subject analysis? Or can I still believe that GLM is robust
> > against such violation of homogeneity?
> The GLM is robust to violations of homogeneity but it may not be
> sufficiently robust in your case. I think this reduces to an empirical
> question. I would compare the subject-specific SPMs using separate and
> combined models. If you are right some SPMs will show too much
> activation and others too little when comparing the SPMs of each
> subject. One simple way of checking, anecdotally, for heteroscedasticity
> of this sort is to simply look at the adjusted data (using spm_plot in
> results). Subjects with high levels of error variance should be
> apparent on visual inspection. If you have not done so already I would
> use proportional scaling for global normalization.
It is arguably plausible that the GLM is robust against type (1)
inhomogeneity, but less plausible in relation to type (2). The
random-effects model analysis sets about precisely uncoupling these two
sources of variance. However, in its classical form it discards the
smaller replications-within-condition-within-subject variance and
leaves just the subjects-by-conditions variance for assessing the
significance of a condition contrast, either for the whole group or for
a single subject.
This of course means a loss of sensitivity, which is further compounded
by losing all the degrees of freedom arising from the replications.
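The size of that df loss is easy to put numbers on (Kota mentioned 10
subjects; the replication count here is a made-up example):

```python
# Hypothetical counts: 10 subjects, 2 conditions, 20 replications each.
n_subj, n_cond, n_rep = 10, 2, 20
df_within = n_subj * n_cond * (n_rep - 1)   # error df from replications, pooled
df_random = (n_subj - 1) * (n_cond - 1)     # subjects-by-conditions df
print(df_within, df_random)  # 380 9
```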
An extreme version of Kota's anomaly is that one could have a voxel
where each individual subject shows a significant task vs. rest
activation effect, but none of these effects survives as a
significant single-subject contrast in the combined subjects analysis -
and possibly even the group task vs. rest effect is sufficiently
diluted to lose significance.
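The group-dilution part of this is easy to construct numerically
(entirely made-up effects; the per-subject test is a one-sample t on 19
df, criterion about 2.09, and the random-effects group test a t on 9 df,
criterion about 2.26): every subject clears its own criterion
comfortably, yet the group effect does not.

```python
import numpy as np

rng = np.random.default_rng(1)
n_subj, n_rep = 10, 20
# made-up per-subject true effects: all positive, but very unevenly spread
d_true = np.array([0.05, 0.06, 0.07, 0.08, 0.09, 0.10, 0.12, 0.15, 2.0, 3.0])
y = d_true[:, None] + rng.normal(0, 0.01, (n_subj, n_rep))  # tiny within noise

# separate single-subject analyses: t against within-subject error, 19 df
t_sep = y.mean(1) / (y.std(1, ddof=1) / np.sqrt(n_rep))

# random-effects group test: t against between-subject spread, 9 df
d_hat = y.mean(1)
t_group = d_hat.mean() / (d_hat.std(ddof=1) / np.sqrt(n_subj))
print(t_sep.min(), t_group)
```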
> With best wishes - Karl
MRC Cognition and Brain Sciences Unit
15 Chaucer Road
Cambridge CB2 2EF