Hi John,

Since it seems no-one has replied, I'll have a quick attempt...

> [...]
> 3 groups, 2 time points for each subject.  Mixed-model.
> 
> Therefore I set one factor as ‘group’ with 3 levels.  Set independence 
> to yes and variance to equal.
> 
> The second factor as ‘time’ with 2 levels.  Set independence to no and 
> variance to equal.
> 
> The non-independence in the second factor is what I am having trouble 
> wrapping my mind around.  I know this introduces a non-sphericity 
> correction.  However, If I extract VOI data, I am not sure how the data 
> is being corrected.  I am sure it is, but I am not sure how, or what 
> that exactly means.

There is help in spm_spm.m about this, around line 86. Basically, in
your case the dependence between times is modelled by allowing
off-diagonal entries in the covariance matrix V. ReML is used to
estimate V (from some of the voxels, as explained in spm_spm), and
then a second pass estimates beta etc after whitening the data --
which means multiplying by W where W*W' = inv(V) -- because then W*Y
has a scaled identity covariance matrix, i.e. has independent (white)
errors. [The multiplication by W is done along with any fMRI
time-series filtering in line 183 of spm_regions]
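To make the whitening step concrete, here is a toy NumPy sketch (not SPM code; the 2x2 covariance matrix is made up for illustration) of deriving a W with W*W' = inv(V) and checking that W*Y then has white errors:

```python
import numpy as np

# Hypothetical error covariance for 2 time points, with an
# off-diagonal entry modelling the dependence between them.
V = np.array([[1.0, 0.5],
              [0.5, 1.0]])

# One choice of W satisfying W @ W.T == inv(V) is the symmetric
# inverse square root of V, computed from its eigendecomposition.
evals, evecs = np.linalg.eigh(V)
W = evecs @ np.diag(evals ** -0.5) @ evecs.T

# Whitening check: the covariance of W @ Y is the identity,
# i.e. the whitened errors are independent with equal variance.
assert np.allclose(W @ W.T, np.linalg.inv(V))
assert np.allclose(W @ V @ W.T, np.eye(2))
```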

> The xY.y data is not the original values from 
> the subject images, but some type of corrected value.  However, I am 
> fairly certain that if there was no non-sphericity correction, the data 
> extracted would be the original data.

If you select a contrast, then as well as being whitened as described
above, y is also "adjusted". My understanding of this is that the
uninteresting components are removed, where "uninteresting" is defined
as orthogonal to the contrast you are testing.

E.g. as I understand this, if you were interested in the main effect
of time, ignoring the split of your data into three groups, then the
adjusted data would have each group mean subtracted, so that only the
time effect (and the residual error) would be present in y.

> Also, I am also curious as to how SPM5 handles non-independence.  For 
> example, if I have a group at two time points, I can declare 
> non-independence without the groups being equal.  So there does not have 
> to be a matching of subjects like a paired t-test.  So how is this being 
> handled?

Well... I think this is a rather complicated issue, which I don't
think is always that well explained... In particular, it is probably
very badly explained whenever I try to do so ;-)

The underlying mechanism is to allow certain variance components (e.g.
off-diagonal covariance terms between time-points in your case), to
estimate these (with ReML), and then to return Weighted Least Squares
parameter estimates (etc.) computed using the estimated covariance
matrix.
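A toy NumPy sketch of that Weighted (Generalised) Least Squares step (the design matrix and covariance here are invented, not SPM's): beta = (X' inv(V) X)^-1 X' inv(V) y, which is equivalent to ordinary least squares on the whitened data W*X, W*y:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
X = np.column_stack([np.ones(n), np.arange(n)])  # hypothetical design
V = 0.5 * np.eye(n) + 0.5        # compound-symmetric (dependent) errors
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)

# Weighted Least Squares using the (here assumed known) covariance V
Vi = np.linalg.inv(V)
beta = np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)

# Equivalent route: whiten with W = V^(-1/2), then do ordinary LS
evals, evecs = np.linalg.eigh(V)
W = evecs @ np.diag(evals ** -0.5) @ evecs.T
beta_white = np.linalg.lstsq(W @ X, W @ y, rcond=None)[0]

assert np.allclose(beta, beta_white)
```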

A complication is that different voxels should arguably be allowed to
have different covariance matrices (i.e. ReML should be run at each
voxel), since this would be analogous to allowing each voxel a
different balance of time and subject effects, which is effectively
what a conventional fixed-effects paired t-test does. However, it
would be very time-consuming to run ReML at every voxel, and I believe
the estimates of the variance components would themselves also be
quite variable.

So in practice, what SPM does is to average the activated voxels
(using a relatively low uncorrected threshold on an F-contrast for the
effects of interest -- see spm_spm lines around 195, 466, 699 and 802)
and to estimate a single set of variance components from this average.
The result is then assumed to be accurate due to the large number of
voxels averaged over, and a matrix is derived and used to whiten the
data, as described in spm_spm around line 95. The actual covariance
matrix at each voxel is given by the product of that voxel's ResMS and
the single estimated covariance structure. There is more detail on
this "factoring of the spatiotemporal covariance into non-stationary
spatial variance and stationary temporal non-sphericity" in HBF2 ch.9:
   http://www.fil.ion.ucl.ac.uk/spm/doc/books/hbf2/pdfs/Ch9.pdf
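A minimal sketch of that factorisation, as I understand it (all numbers invented): each voxel's error covariance is its own scalar variance (ResMS) times the single temporal structure V shared across voxels:

```python
import numpy as np

# Pooled temporal non-sphericity, shared by all voxels
V = np.array([[1.0, 0.4],
              [0.4, 1.0]])

# Hypothetical per-voxel residual mean squares (ResMS)
res_ms = np.array([0.8, 1.5, 2.2])

# Each voxel's covariance = its ResMS scaling times the shared V
voxel_cov = res_ms[:, None, None] * V

assert voxel_cov.shape == (3, 2, 2)
assert np.allclose(voxel_cov[1], 1.5 * V)
```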

So I don't think SPM's modelling of dependence is quite what you'd
expect, which means that if you do have within-subjects (e.g. paired)
data, I think you still want to model (fixed) subject effects, as well
as dependency, as discussed in this post:
   http://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=ind0703&L=SPM&P=37802
rather than relying on purely random subject effects. I think this
then means that you are stuck with the fixed-effects issue of having
to throw away subjects if they have missing data for some levels of
their within-subject factor. E.g. in a paired t-test scenario,
unpaired observations are effectively dropped. Hopefully someone will
correct me there (and elsewhere!) if wrong...

I hope that helps, sorry if it's no clearer than the help in
spm_spm.m! Best,

Ged