Antonia,
> So I've got a new strategy and I need to know if it is valid. In each
> voxel, I find the probability of each contrast, for example A>B p =
> 0.0015403, C>D p = 0.06045, E>F 0.00239, multiply them all together
> to get an uncorrected probability p = 2.2286e-7 then multiply
> by the number of voxels in my scans (179080) to get a Bonferroni
> corrected probability p = 0.039 of this voxel being activated in these
> 3 independent contrasts. Is this valid and does it matter that I can't
> quote t or F values or degrees of freedom for my final probability?
This strategy is only valid for inference on the Global Null. That is,
you can only conclude that one or more of these three contrasts show an
effect, P<=0.039, controlling for 179,080 tests.
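For concreteness, the arithmetic quoted above works out as follows (a minimal Python sketch; the three P-values and voxel count are the ones from the question):

```python
# P-values quoted in the question: A>B, C>D, E>F.
p_values = [0.0015403, 0.06045, 0.00239]
n_voxels = 179080  # number of voxels in the scans

# Product of the three (independent) P-values...
p_product = 1.0
for p in p_values:
    p_product *= p

# ...then a Bonferroni correction over all voxels.
p_corrected = p_product * n_voxels

print(f"product:   {p_product:.4e}")   # roughly 2.2e-07
print(f"corrected: {p_corrected:.3f}") # roughly 0.040
```

Again, that corrected value can only be interpreted against the Global Null.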
(As a minor note, P-values are not probabilities of hypotheses in any
usual sense. A P-value is the probability, computed assuming the null
hypothesis is true, of obtaining data as or more extreme than what was
actually observed... don't you love classical statistics?!)
Rather, to perform a 'manual' conjunction, simply create t-statistic
images for each analysis, and then use ImCalc to find the minimum
statistic image ('min(i1,i2,i3)'). This minimum image then summarizes
your inferences about the conjunction null hypothesis. (This assumes you
have the same DF for each analysis; if the DF differ you'll have
to create a P-value image for each t image, take the voxelwise maximum P
image, and work with that.)
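Outside of SPM, the minimum-statistic step is just a voxelwise minimum over the t images. A toy NumPy sketch (small random arrays stand in for real image volumes; the array names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
# Three t-statistic "images" -- random arrays standing in for the
# volumes that ImCalc's 'min(i1,i2,i3)' would operate on.
t1, t2, t3 = (rng.standard_normal((4, 4)) for _ in range(3))

# Voxelwise minimum t statistic (valid when all analyses share the same DF).
t_min = np.minimum.reduce([t1, t2, t3])

# A voxel exceeds threshold u in the conjunction exactly when all
# three t values exceed u, i.e. when t_min > u.
u = 1.0
assert np.array_equal(t_min > u, (t1 > u) & (t2 > u) & (t3 > u))
```

With unequal DF, the analogous operation is a voxelwise *maximum* over P-value images, since the least significant P governs the conjunction.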
The only trick, then, is how to find a corrected threshold for this
ImCalc-created minimum statistic image. For an FDR threshold, you can
submit the minimum t image to the FDR function and related Matlab snippet at
http://www.sph.umich.edu/ni-stat/FDR/#FDR.m
and get an FDR threshold out. (Note, you can also feed in the
P-values; see the script.)
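The core of any such FDR script is the Benjamini-Hochberg step: sort the P-values and find the largest one lying under the line q*i/m. A rough Python sketch of that step (an illustration of the method, not the linked FDR.m code):

```python
import numpy as np

def bh_threshold(p, q=0.05):
    """Benjamini-Hochberg: return the largest p_(i) with p_(i) <= q*i/m,
    or None if no P-value survives (m = total number of tests)."""
    p_sorted = np.sort(np.asarray(p, dtype=float).ravel())
    m = p_sorted.size
    crit = q * np.arange(1, m + 1) / m
    passing = np.nonzero(p_sorted <= crit)[0]
    if passing.size == 0:
        return None
    return p_sorted[passing.max()]

# Toy example: a few strong effects among mostly uniform P-values.
p = np.concatenate([[1e-5, 2e-4, 5e-4], np.linspace(0.1, 1.0, 97)])
thr = bh_threshold(p, q=0.05)  # voxels with P <= thr are declared active
```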
Random Field thresholds are more difficult. Without a lot of hacking
you won't be able to get the threshold. (Problems include differing
DF, smoothness estimation, etc.)
There is a big plus-side to doing separate analyses and manual
conjunctions like this. By having separate analyses, you make
weaker assumptions, namely, you don't assume homogeneous variance
across the groups.
For example, in your other email (Subject: 2nd level multiple
regression and conjunction) you were trying to do a conjunction over
two groups of subjects, one assessed with a correlation and one with a
mean. By putting both groups into one model you were assuming that
the variance was the same (or, even if you used SPM's nonsphericity
modeling, you'd assume the variances of the two groups differed only
by a global scale factor). If you instead modeled the correlation
subjects separately, created a t image I, and then modeled the 2nd
group of subjects and created t image II, you could use the 'manual
conjunction' strategy outlined above and *avoid* assuming homogeneous
variance over the two groups. The only down-side to the manual
approach is that you have fewer DF.
Hope all of this helps.
-Tom
-- Thomas Nichols -------------------- Department of Biostatistics
http://www.sph.umich.edu/~nichols University of Michigan
[log in to unmask] 1420 Washington Heights
-------------------------------------- Ann Arbor, MI 48109-2029