Dear SPM-munity,
I am using SPM2b to do single-subject fMRI analysis.
I am attempting to get an understanding of what the Global F Threshold
is doing. Two identical analyses were performed with all parameters the
same, except for the F threshold:
analysis 1: UFp = 0.001
analysis 2: UFp = 0.999
The AR(1) serial correlation model was _not_ used, which I believe
implies that non-sphericity was _not_ estimated (correct me if I'm wrong
here please). Uncorrected p-values (0.01) and an extent threshold of
5 voxels were used, and T-maps were generated for the condition of
interest. It should be noted that the reason for testing such an
exaggerated global F threshold was to do a sort of 'exploratory'
analysis, that is, to include all voxels that might potentially reflect
the model, however weakly.
Analysis 2 (UFp=0.999) showed more areas of activation than Analysis 1
(UFp=0.001). This seems to make sense. If I understand the global F
threshold, it is a sort of 'culling' procedure. The global F is used to
test for any effects of interest, and _only_ voxels that pass this
initial test are further examined, in the sense that parameter
estimation is done for them. (Once again please correct me if my
understanding is wrong). Therefore, it seems to make sense that using a
more liberal global F would result in more areas of activation.
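My mental model of the culling step, sketched as toy code (this is not SPM's actual implementation, and the voxel names and p-values are invented for illustration): voxels whose omnibus F-test p-value falls below the UFp cutoff survive into further estimation, so a more liberal cutoff keeps more voxels.

```python
# Toy sketch of the global F-threshold as a culling step (not SPM internals).
# Only voxels whose F-test p-value is below the UFp cutoff are kept for
# further parameter estimation. All names and numbers are made up.

def surviving_voxels(voxel_pvals, ufp):
    """Keep only voxels whose omnibus F-test p-value is below UFp."""
    return [v for v, p in voxel_pvals.items() if p < ufp]

# Hypothetical per-voxel p-values from the omnibus F-test.
voxel_pvals = {"v1": 0.0002, "v2": 0.03, "v3": 0.4, "v4": 0.92}

strict = surviving_voxels(voxel_pvals, ufp=0.001)   # like analysis 1
liberal = surviving_voxels(voxel_pvals, ufp=0.999)  # like analysis 2

print(strict)   # only the strongly-fitting voxel survives
print(liberal)  # nearly every voxel survives
```

With the strict cutoff only `v1` survives; with the liberal cutoff all four do, which matches seeing more activation in Analysis 2.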
What _didn't_ make sense was that comparing corresponding voxels between
the two analyses showed that Analysis 1 (UFp=0.001) had T-values that
were lower than Analysis 2 (e.g. a T-value of 7.5 in Analysis 1 versus
9.5 for the corresponding voxel in Analysis 2). From my understanding
of what's going on, it should have been the other way around. Since
SPM2b uses a pooled variance estimate over the voxels, a more liberal
global F-threshold would have let in 'spurious' voxels, that is, voxels
that don't really match the model well at all; these would inflate the
error term, which should _decrease_ the T-values.
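To make my expectation concrete, here is a toy numeric sketch (again not SPM code, and the variances and effect size are invented): if the variance is pooled by simple averaging across the surviving voxels, adding high-variance, poorly-fitting voxels to the pool raises the pooled variance and so lowers the T-value.

```python
# Toy illustration (not SPM's actual estimator): how a pooled variance
# estimate changes a voxel's T-value when high-variance 'spurious' voxels
# join the pool. All numbers are invented for illustration.
import math

def pooled_t(beta, voxel_vars):
    """T-value for one voxel when variance is averaged over the pool."""
    pooled_var = sum(voxel_vars) / len(voxel_vars)
    return beta / math.sqrt(pooled_var)

good_vars = [1.0, 1.2, 0.9, 1.1]        # voxels that fit the model well
spurious_vars = good_vars + [4.0, 5.5]  # poor fits admitted by UFp=0.999

t_strict = pooled_t(beta=9.0, voxel_vars=good_vars)
t_liberal = pooled_t(beta=9.0, voxel_vars=spurious_vars)

print(t_strict > t_liberal)  # the liberal pool gives the LOWER T-value
```

Under this toy model the strict analysis should produce the _higher_ T-value at a given voxel, which is the opposite of what I observed.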
I'd appreciate any insight into this matter.
Regards,
Jejo Koola