Thilo,

> the "usual" way of calculating a fixed effects model for a group is
> to put the individual time-series of each subject into one big
> first-level model (at least to my knowledge). I have heard the
> computational burden to estimate such a model is quite heavy (of
> course depending on the specific model).  That's basically why I
> have questions with regard to the following 3 points:
>
> 1) In case I already calculated the first level statistics for each
> subject separately: Is there a way (formula) to calculate the fixed
> effects for that group from their single subject's first level
> stats, without putting the whole bunch of time-series in a new big
> model? Maybe by clever use of the information in spmT-, con- and/or
> ResMS-images?


Some of this is easy, some is hard:

The estimated betas in a grand model will be exactly equal to the
corresponding estimated betas in the separate models, and so will the
estimated contrasts.  That's the easy part; the hard part is the
variance.  The grand model assumes that the variance is the same for
each subject (although the autocorrelation is allowed to vary between
subjects) and produces a pooled variance estimate.  Individual models
do not assume homogeneous variance over subjects, and hence are
actually a bit more reasonable in terms of assumptions.

So could you create the ResMS image of the grand model from the
various ResMS images of the single-subject models?  Yes, but getting
the details exactly right would require more linear algebra than can
be composed in a brief email.  However, I think you'd find that the
simple average of the individual subjects' ResMS images comes fairly
close to the grand model's ResMS.
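
As a rough numerical illustration (my own sketch, not anything SPM
provides), here is that averaging in Python with nibabel and numpy;
the file layout and output filename are assumptions for illustration:

    import glob

    import nibabel as nib
    import numpy as np

    # Hypothetical layout; point this at wherever the ResMS images live.
    resms_files = sorted(glob.glob("sub*/ResMS.img"))
    imgs = [nib.load(f) for f in resms_files]
    data = np.stack([img.get_fdata() for img in imgs], axis=0)

    # Simple unweighted average over subjects -- only an approximation,
    # since a true pooled estimate would weight by each model's error DF.
    avg_resms = data.mean(axis=0)
    nib.save(nib.Nifti1Image(avg_resms, imgs[0].affine), "ResMS_avg.nii")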


A totally different, but perhaps more valid and sensitive, approach is
to pursue a meta-analytic one.  That is, instead of trying to
replicate a grand fixed effects model, look at the various other ways
of combining the individual analyses.  For example, summing the T
values and dividing by the square root of the number of subjects is a
standard meta-analytic approach.  The advantage of these approaches is
that they don't assume the variance is the same within each subject.
See ref [1] for a review of fixed effects combining approaches.
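
A minimal sketch of that combining rule (my own illustration, assuming
the per-subject T images are stacked in a numpy array, and that the DF
are large enough for the T values to be roughly standard normal under
the null):

    import numpy as np
    from scipy import stats

    def combine_t_maps(t_maps):
        # t_maps: (N, X, Y, Z) array of per-subject T statistics.
        n_subjects = t_maps.shape[0]
        # Sum of T over sqrt(N): approximately N(0,1) under the null
        # if each T is approximately standard normal.
        z_combined = t_maps.sum(axis=0) / np.sqrt(n_subjects)
        p_map = stats.norm.sf(z_combined)  # one-sided p-values
        return z_combined, p_map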

> 2) Another related point is: tests for the conjunction null across
> subjects seem quite simple with only the single subjects' stats at
> hand: finding the minimum t-value in each voxel across subjects and
> then thresholding the resultant image. But what would be the correct
> number of degrees of freedom for thresholding?

The approach you describe assumes that the DF are the same for all T
images.  If they are not, the safe (conservative) thing to do is to
use the minimum DF over all subjects' analyses.
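
A minimal sketch of that minimum-T conjunction test, again assuming a
stacked numpy array of T images; the array and DF-list names are mine:

    import numpy as np
    from scipy import stats

    def conjunction_null_test(t_maps, dfs, alpha=0.05):
        # Minimum T over subjects at each voxel.
        min_t = t_maps.min(axis=0)
        # Conservative choice: smallest DF over subjects, thresholded
        # at the usual (uncorrected) alpha level.
        df_min = min(dfs)
        threshold = stats.t.isf(alpha, df_min)
        return min_t > threshold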

> 3) A similar approach to the one outlined in 2) to test for the
> global null conjunction simply doesn't want to cross my narrow
> mind. Can you imagine one?

Sure.  While the nulls are different, the statistic is the
same... i.e. again, create the minimum statistic image over subjects.
So the only question is how you find the threshold.  If N is the
number of subjects, the threshold is found using the DF (if they
differ over subjects, take the minimum) and an alpha level of
alpha^(1/N), where alpha is the desired level of the test.
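
Continuing the sketch above, only the threshold changes for the global
null; this assumes independence across subjects:

    from scipy import stats

    def global_null_threshold(n_subjects, df_min, alpha=0.05):
        # Under the global null, P(min T > u) = p**N with p = P(T > u),
        # so each subject's T is referred to level alpha**(1/N).
        return stats.t.isf(alpha ** (1.0 / n_subjects), df_min)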

Hope this helps!

-Tom


     -- Thomas Nichols --------------------   Department of Biostatistics
        http://www.sph.umich.edu/~nichols     University of Michigan
        [log in to unmask]                     1420 Washington Heights
     --------------------------------------   Ann Arbor, MI 48109-2029


[1] Nicole A. Lazar, Beatriz Luna, John A. Sweeney, William F. Eddy.
    Combining Brains: A Survey of Methods for Statistical Pooling of
    Information.  NeuroImage, Vol. 16, No. 2, June 2002, pp. 538-550.