Dear Uwe,
>We measured 12 subjects in an event-related fMRI study
>(1) on a perceptual cueing task in one session and
>(2) on a motor cueing task in another session.
>Both tasks used the same stimuli, responses, and
>stimulus-response mappings. Cues were valid on 150 trials
>and invalid on 50 trials. The only difference between
>tasks was the meaning of the cues.
>
>On the first level we looked in each subject at the
>contrast *invalid cues > valid cues* in each task.
>Now, we want to know whether there are regions that
>are common to the processing of invalidly cued trials
>in each task. Therefore we want to perform a conjunction
>analysis with the contrast *invalid > valid* in the
>motor cueing task and the contrast *invalid > valid*
>in the perceptual cueing task.
>
>As far as I understand your paper, we would use the
>minimum statistic if we could assume that the contrasts are
>congruent. However, I think we have incongruent contrasts
>because we compare different cognitive tasks.
>Therefore, we might have a case for the supremum
>P value approach (conjunction null). The result of this
>conservative test is that we have 7 small clusters
>(5 with one or two voxels) - a result that worries the
>reviewers (who say *your findings rest on thin ice*).
This speaks to the first of two important points: Your reviewers should
not be making anecdotal inferences like "resting on thin ice" on the
basis of the number of clusters, or voxels per cluster, unless they know
the null distribution of these quantities. You should tell the Handling
Editor this.
The NUMBER OF SIGNIFICANT VOXELS (or clusters) is not a useful
quantity. It is not equivalent to a "SIGNIFICANT NUMBER OF VOXELS"
(or clusters above some threshold). The latter inference would require a
further random field theory analysis at the cluster or set level
(see Friston et al 1996).
For example, at a p<0.05 (corrected) threshold one expects at most 0.05
clusters under the null hypothesis, so finding 7 clusters means you
found 7 x 20 = 140 times the number expected.
This is probably extremely significant. Likewise, obtaining a cluster
of 2 voxels by chance could be extremely unlikely.
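To make this concrete, here is a minimal sketch of a set-level calculation on the number of clusters, assuming (as in random field theory) that the count of above-threshold clusters is approximately Poisson under the null. The expected count of 0.05 is an illustrative stand-in for a p<0.05 corrected cluster threshold, not a value from any particular analysis.

```python
from math import exp, factorial

# Illustrative assumption: under the null, the number of clusters above a
# p<0.05 (corrected) threshold is approximately Poisson with mean 0.05.
expected = 0.05   # expected clusters under the null (assumed for illustration)
observed = 7      # clusters actually found

# P(N >= observed) = 1 - sum_{k < observed} Poisson(k; expected)
p_value = 1.0 - sum(exp(-expected) * expected**k / factorial(k)
                    for k in range(observed))
print(p_value)    # vanishingly small: 7 clusters is extremely significant
```

The point of the sketch is only that 7 observed clusters against 0.05 expected yields an extraordinarily small set-level p value, which is the opposite of "thin ice".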
I am not sure where the practice of reporting or discussing the number
of significant voxels came from, but you should avoid it when reporting your
results and discourage it when reviewing other people's papers. In
terms of reporting your results, simply show the SPM at an uncorrected
threshold of p=0.001 and report, in the text and tables, maxima
that survive a correction for the search volume. This allows the reader to
see the profile of the SPM that is used for inference and to read about the
inferences per se (in terms of significant maxima).
This inclusive reporting of the SPM precludes complaints that you should
have got more significant voxels!
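The reporting convention above can be sketched on a toy statistic image. The thresholds here are placeholders: Z=3.09 corresponds to p=0.001 uncorrected, and z_corr stands in for whichever corrected threshold the search volume gives (an assumed value, not one SPM computed).

```python
import numpy as np

rng = np.random.default_rng(0)
z_map = rng.normal(size=(10, 10, 10))   # toy Z-statistic image
z_uncorr = 3.09                         # display threshold, p=0.001 uncorrected
z_corr = 4.5                            # illustrative corrected threshold (assumed)

# What you SHOW: the SPM thresholded at the lenient, uncorrected level,
# so readers see the full profile used for inference.
display_map = np.where(z_map > z_uncorr, z_map, 0.0)

# What you REPORT in text and tables: maxima surviving the correction
# for the search volume.
significant_maxima = np.argwhere(z_map > z_corr)
print(display_map.shape, len(significant_maxima))
```

The separation matters: the figure carries the descriptive profile, while the formal inference rests only on the corrected maxima.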
>As I read your paper, I conclude that we should rather test
>for k > 1. Then the question arises, how to do this in the
>available version of SPM2?
Yes, you want to infer that k = 2 > 1, namely that effects were
present in both contrasts. This is the conjunction null. As we
tried to convey in our paper, simply thresholding both contrasts at
p<0.05 (corrected) and reporting the conjunction is valid but extremely
conservative (insensitive). There are a number of ways you could
proceed. However, given you have 7 maxima that survive this procedure
I do not think you really need to worry - just lower the threshold on
the SPMs that are reported graphically as described above.
If you wanted to pursue a more sensitive analysis you could try the
following procedure: At the second (between-subject) level you have
two contrasts C1 and C2. Threshold C1 at p=0.001 (uncorrected) and
form a mask. Apply this mask to the images and threshold C2 at p=0.05
(corrected for the [masked] search volume). Save the resulting SPM{T},
which contains voxels that are significant for C2 at p=0.05 corrected.
Now repeat the procedure but swap C1 and C2. Voxels that are found in
both SPM{T} are significant at p=0.05 corrected in both contrasts and
constitute voxels where you can infer k = 2.
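The masking procedure can be sketched on toy second-level T-maps. The thresholds are placeholders: t_uncorr stands for the p=0.001 uncorrected cutoff and t_corr for the p=0.05 cutoff corrected for the masked search volume (which SPM would compute; fixed values are assumed here for illustration).

```python
import numpy as np

def masked_conjunction(t1, t2, t_uncorr, t_corr):
    # Step 1: mask from C1 at the uncorrected threshold, then keep
    # voxels where C2 survives the corrected threshold within that mask.
    surv_2 = (t1 > t_uncorr) & (t2 > t_corr)
    # Step 2: repeat with C1 and C2 swapped.
    surv_1 = (t2 > t_uncorr) & (t1 > t_corr)
    # Voxels in both SPM{T}s: significant at the corrected level in
    # both contrasts, i.e. where one can infer k = 2.
    return surv_1 & surv_2

# Toy T-values at four voxels for each contrast.
t1 = np.array([5.0, 3.5, 1.0, 5.2])
t2 = np.array([4.8, 5.0, 5.0, 3.4])
conj = masked_conjunction(t1, t2, t_uncorr=3.2, t_corr=4.0)
print(conj)   # only the first voxel survives both directions
```

Note that with fixed thresholds the intersection reduces to requiring both contrasts to exceed t_corr; in practice the corrected threshold shrinks with the masked search volume, which is where the extra sensitivity comes from.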
I will leave it as a challenge to see if anyone can spot a flaw in this
procedure :)
I hope this helps - Karl
PS There are updates for SPM2 that allow tests of the conjunction null:
Tom - can you comment on this?
Friston KJ, Holmes A, Poline JB, Price CJ, Frith CD. Detecting activations
in PET and fMRI: levels of inference and power. NeuroImage. 1996 4:223-35.