Dear Christine and everyone,
the following is a tentative explanation for "better" results in SPM96 than
SPM99, but I think it may also be of general interest. Therefore, even if
you are not interested in SPM96 vs. SPM99, please read on.
>I still have some concerns about the comparability between the results of
>spm96 and spm99.
>I have performed the analysis of different datasets (single subjects,
>simple motor and speech tasks, block design) with spm96 and spm99 with
>identical preprocessing and as similar as possible modelling in the
>statistics section. I always got "better" results using spm96.
>This means the locations of activations were basically the same. However,
>the significance was higher with spm96. At equal thresholds, there were
>(more and especially) larger clusters with spm96 (identical smoothing!).
>Altogether, activation appeared to be more robust with spm96.
>Can anyone comment on this impression? Did anybody else compare the
>results of the analysis using spm96 and spm99 respectively?
The following may, or may not, be relevant to your specific question.
However I think it may be of some interest/importance both to you and
others doing epoch related fMRI.
The way epochs are specified in SPM99 is slightly counterintuitive, which
may lead to mistakes, which may in turn lead to SPM99 results being "worse"
than SPM96 results, where these mistakes were not as easily made.
It has to do with how the times (in scans) of epochs are defined, and is
best explained with an example.
Consider an fMRI experiment with a TR of 5 seconds and descending slice
acquisition. Furthermore consider an event (being either a true event or
onset of an epoch) occurring at time 6*TR after start of first scan. We now
want to create a regressor to put into our design matrix. From the
perspective of the first slice in the 7th image volume the event only just
occurred and the value of our regressor should be zero. On the other hand
from the perspective of the last slice the event occurred 5 seconds ago,
and the activity in the regressor should be close to its peak value (if it
is a true event).
Hence, from the perspectives of the first and last slices the "expected"
value of the regressor is completely different, but only one value can
actually go into the design matrix. It is easily realised that the "best"
regressor for the last slice is translated one TR backwards in time (up in
the design matrix images) compared with the "best" regressor for the first
slice. So, which one should we pick?
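The two perspectives can be put into numbers, using the TR from the example
above (an illustrative sketch only, not SPM code):

```python
# Numerical sketch of the first-slice vs. last-slice argument (illustrative
# numbers only, not SPM code).
TR = 5.0                      # seconds per volume
event_time = 6 * TR           # event occurs 6*TR after start of the first scan

# The 7th volume starts at 6*TR. With descending acquisition the first
# slice is acquired at the very start of the volume and the last slice
# roughly one TR later (ignoring the small gap of one slice period).
first_slice_time = 6 * TR
last_slice_time = 6 * TR + TR

print(first_slice_time - event_time)   # 0.0 s -> regressor should be ~zero
print(last_slice_time - event_time)    # 5.0 s -> regressor should be near peak
```

One TR of disagreement between the two slices is exactly the shift discussed
in the text.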
This didn't use to be too much of an issue (in SPM96) since for epochs of
>~5TR the differential sensitivity between top and bottom slices wasn't
that great. However, for event related designs one TR can make the
difference between reasonable sensitivity and no sensitivity at all, so this
had to be carefully addressed in SPM99 (see e.g. Henson et al., NeuroImage).
First of all it had to be possible to specify event/epoch onsets in
fractions of TRs, since onsets could otherwise be off by up to 0.5*TR, which
is clearly insufficient for most TRs. Therefore regressors are created with
a supersampling given by the global variable fMRI_T (which can be changed
through the defaults button) with the "factory setting" 16. This means that
if you specify an event onset of 7.45TR it will effectively be located at
the nearest 16th of a TR, which in this case is 7.44TR. Conversely, if
fMRI_T was 1 (as implicitly in SPM96) it would effectively be located at 7TR.
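The quantisation can be spelled out as simple arithmetic (an illustration of
the rounding, not actual SPM99 code):

```python
# Hypothetical illustration of onset quantisation by the supersampling
# factor fMRI_T (not actual SPM99 code).
def quantise_onset(onset_in_TR, fMRI_T):
    """Snap an onset (in TRs) to the nearest 1/fMRI_T of a TR."""
    return round(onset_in_TR * fMRI_T) / fMRI_T

print(quantise_onset(7.45, 16))  # 7.4375, i.e. ~7.44 TR
print(quantise_onset(7.45, 1))   # 7.0 (SPM96-style whole-TR precision)
```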
Secondly, it had to be defined what was meant by time zero.
Should it be defined to mean the start of the first scan (i.e. the
acquisition of the first slice), which would make a lot of intuitive sense?
Should it be defined to mean the middle of the first scan (i.e. the
acquisition of the middle slice), which would also make quite a bit of sense?
Or should it be defined as the end of the first scan (i.e. the acquisition
of the last slice)?
The SPM-authors couldn't make up their mind so it was defined through the
global variable fMRI_T0. However, the default value of fMRI_T0 is 1, which
means that BY DEFAULT TIME ZERO IS DEFINED AS THE START OF THE FIRST SCAN.
If fMRI_T0 is set equal to fMRI_T, the definition changes to the end
of the first scan.
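As I read it, fMRI_T0 selects which of the fMRI_T time bins within each TR
the regressor is sampled at, so the effective reference time within a scan
can be sketched as follows (my interpretation, not SPM code):

```python
# Sketch of the fMRI_T0 convention (my reading of it, not SPM code):
# fMRI_T0 picks one of the fMRI_T time bins within each TR as the
# point where the regressor is sampled.
def reference_offset(fMRI_T0, fMRI_T, TR):
    """Time (in seconds) into a scan at which the regressor is sampled."""
    return (fMRI_T0 - 1) / fMRI_T * TR

TR = 5.0
print(reference_offset(1, 16, TR))    # 0.0  -> start of scan (the default)
print(reference_offset(16, 16, TR))   # ~4.7 -> effectively the end of scan
print(reference_offset(8, 16, TR))    # ~2.2 -> roughly the middle of scan
```

Note that fMRI_T0 = fMRI_T/2 lands roughly, though not exactly, on the
middle of the scan.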
This (the default) means that an event occurring 2TR after the start of the
first scan (which is an intuitively appealing time zero for the experiment)
should be specified as having occurred at time 2 (TR). The regressor will
then be ideal for the first slice, and hence implicitly assumes that you
have interpolated to the first slice in the slice timing module.
Here comes the point for the epoch related studies. Assume you have a
variable SOA paradigm with epoch lengths 5 6 6 6 5 6 alternating between
conditions A (implicit baseline) and B (task). You will now be prompted for
a vector of epoch onsets given in scans. The intuitive (my intuition at
least) way of specifying these would be [6 18 29], i.e. the first epoch has
been preceded by 5 scans and starts with the sixth. This is WRONG!!!
Remember how an event (true event or onset of epoch) was defined in time
(in TR) since start of first scan. Hence, the first epoch starts at 5TRs,
NOT 6. The vector of onsets should be [5 17 28]. Hence, the definition that
makes the specification of events quite intuitive makes the specification of
epochs slightly counterintuitive.
Furthermore, as we clarified for the events above, this means that the
regressor is ideal for the first slice, and progressively worse for
consecutive slices. It would seem reasonable to sensitize oneself to the
middle slice, and the vector of onsets therefore becomes [4.5 16.5 27.5].
This is quite different from the "intuitive" vector [6 18 29], and can make
quite a bit of difference to the t-values.
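The bookkeeping for this example can be written out explicitly (plain
arithmetic, not SPM code):

```python
# Worked arithmetic for the onset vectors above (plain bookkeeping,
# not SPM code).
from itertools import accumulate

lengths = [5, 6, 6, 6, 5, 6]       # epoch lengths in scans: A B A B A B
# Each epoch's start, in TRs since the start of the first scan:
starts = [0] + list(accumulate(lengths))[:-1]   # [0, 5, 11, 17, 23, 28]

onsets_B = starts[1::2]            # condition B (task) epochs
print(onsets_B)                    # [5, 17, 28] -- NOT [6, 18, 29]

# Shift half a TR earlier to sensitise to the middle slice:
print([t - 0.5 for t in onsets_B]) # [4.5, 16.5, 27.5]
```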
My cautious recommendation is that you change fMRI_T0 to fMRI_T/2, which
means that you are automatically sensitized to the middle slice.
Given that the default in the slice timing module is the middle slice, this
means that events should (as is intuitive) be defined in TRs since the start
of the first scan. Fixed SOA epoch related studies are then defined in an
intuitive way, and give you an automatic sensitization to the middle slice.
Variable SOA epoch related studies should still be defined bearing in mind
that epochs are defined in terms of time since the start of the first scan
(rather than scan#), and will then be sensitized to the middle slice.
I apologise for this rather lengthy excursion, but I think there is quite a
bit of scope for mistakes here, especially since reading help texts seems
increasingly unfashionable in this point-and-click age. Those especially
interested in event related studies will find an earlier mail regarding
these issues in http://www.mailbase.ac.uk/lists/spm/1999-07/0154.html.
>Thank you, Christine Preibisch
Good luck Jesper
Wellcome Dept. of Cognitive Neurology
12 Queen Square
London WC1N 3BG
phone: 44 171 833 7484
fax: 44 171 813 1420