Dear David,
"Kareken, David A." wrote:
> I conducted two analyses that I thought should give identical results (but
> apparently not). It is a six subject, two condition (with one replication)
> study (BABA, where B=baseline and A=activation, in that order).
>
> I used two designs:
> 1. Multisub, Cond x Subj & Covar.
>
> Given the subject by condition interactions, the contrast for the entire
> effect across all subjects was modeled as: [-1 1 -1 1 -1 1 -1 1 -1 1 -1 1]
> (i.e., one contrast for each subject, comparing the activation to the
> baseline conditions).
>
> 2. Multisub, Cond & Covar:
> Collapsing across subjects, the contrast was simply [-1 1].
>
> Although the results are quite similar, they are not identical. In
> particular, a theoretically meaningful area emerges in the first design
> (suprathreshold), but not in the second (significantly subthreshold, but
> still there).
>
> Can someone tell me why? My first guess is that the variance terms used may
> be different for each. Also, is it inappropriate to use the first (i.e.,
> perhaps too many comparisons in a single model?)?
>
You are right that the variance maps differ, whereas the average (across
subjects) of the parameter estimates from the first model should equal those
of the second (which is effectively what you acknowledge with your particular
contrast vectors).
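To see why the point estimates agree, here is a toy sketch (not SPM code; the
voxel values are made up) of one hypothetical voxel in a balanced BABA design:
averaging the per-subject A-B effects gives the same number as collapsing
across subjects first.

```python
import statistics

# hypothetical voxel values for 6 subjects, scans in BABA order (B, A, B, A)
scans = [
    [98.2, 101.5,  99.0, 103.1],
    [101.4, 104.8, 100.9, 102.2],
    [97.5, 100.1,  98.8, 101.9],
    [102.3, 103.7, 101.1, 105.0],
    [99.9, 102.4, 100.5, 104.2],
    [100.8, 103.9,  99.7, 102.6],
]

def a_minus_b(s):
    # per-subject activation effect: mean of A scans minus mean of B scans
    return (s[1] + s[3]) / 2 - (s[0] + s[2]) / 2

# Model 1 (Cond x Subj): one A-B estimate per subject, then average them
per_subject = [a_minus_b(s) for s in scans]
model1 = statistics.mean(per_subject)

# Model 2 (Cond only): collapse across all subjects first
all_a = [s[i] for s in scans for i in (1, 3)]
all_b = [s[i] for s in scans for i in (0, 2)]
model2 = statistics.mean(all_a) - statistics.mean(all_b)

print(model1, model2)  # identical point estimates in a balanced design
```

What differs between the two designs is only the error term the t-statistic
divides by, not this estimate.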
In the first case you have removed the subject-by-condition variance, i.e. you
have removed the contribution to the error that is caused by subjects
activating differently.
Is it appropriate?
Well, the second case corresponds to a "good old" multi-subject fixed effects
analysis, which means that you consider both within-subject variance (e.g. due
to measurement noise) and between-subject variance. What people have now
realised is that the weighting between these two error sources is correct only
in the special case where you have exactly one scan per condition and subject
(hence random effects analyses). So you are considering both error sources,
but since you have more than one scan per condition and subject you are not
weighting them correctly (I assume here you want to extend your inferences
to the population).
In the first case you disregard the between-subject variance completely, which
of course is an even larger "weighting error".
Hence, strictly speaking I would say that both models are "inappropriate" if
you want to extend your inferences outside this particular little group of
subjects.
The problem with random effects analyses is that their sensitivity is rather
poor (hence e.g. the pooled-variance discussion of the last few days), so it
is rather unlikely that you will find much, given that you already lose
significance by including the between-subject variance with too low a weight
(as in your second model).
If you want to have a go at a random effects analysis, you should generate
contrasts for each subject in your first analysis above, and enter the
resulting con*.img images into "Basic models"->"One sample t-test".
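The one-sample t-test on the con*.img values amounts to the following (a toy
sketch with six made-up per-subject contrast values at one voxel, not SPM
code): the subject means are tested against the between-subject variability
alone, with n - 1 degrees of freedom.

```python
import math
import statistics

# hypothetical per-subject contrast values (A - B) at one voxel,
# one value per con*.img image
con = [2.1, 0.8, 3.4, 1.2, 2.7, 1.9]

n = len(con)
mean = statistics.mean(con)
se = statistics.stdev(con) / math.sqrt(n)  # between-subject standard error
t = mean / se                              # one-sample t, df = n - 1

print(f"t({n - 1}) = {t:.2f}")
```

Note that only the between-subject spread of the six values enters the error
term, which is exactly the weighting a population-level inference requires.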
Good luck
Jesper