Kath,
Very sorry for the delay.
> I have a few questions in relation to the output from FDRill and a
> conceptual question in multiFDR. I am hoping you can help.
>
> (1) In the ordered p-value loglog plots, does the dashed blue line
> represent: i/V x q/c(V)?
> And does the dotted blue line represent where p(i) <= i/V x q/c(V), when
> thresholded at FDR of 0.05 or does this line represent c(V) = 1?
The dashed blue line is the identity. If the null hypothesis is true
everywhere, the P-values have a uniform distribution, and hence the
ordered P-values should closely follow the identity.
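To make the two reference lines concrete, here is a minimal sketch (not
FDRill's actual plotting code) of the identity line i/V and the BH bound
i/V x q/c(V) from your question, with c(V) = 1, i.e. assuming independence
or positive dependence of the tests:

```python
def reference_lines(V, q=0.05, c_V=1.0):
    """Return (identity, bh_bound) values at each rank i = 1..V.

    identity[i-1] = i/V      -- where uniform ordered P-values should fall
    bh_bound[i-1] = i/V * q/c_V -- the BH rejection boundary
    """
    identity = [i / V for i in range(1, V + 1)]
    bh_bound = [i / V * q / c_V for i in range(1, V + 1)]
    return identity, bh_bound
```

Plotting the sorted P-values against `identity` on log-log axes reproduces
the flavor of the FDRill plot.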
> (2) Is it possible to run FDRill for the T stats map output saved from a
> conjunction analysis using spm_write_filtered - or will it only work with
> SPM T's?
Good question. First, a note on write_filtered: it is not valid to
run FDRill on images that have already been thresholded. The FDR
method assumes you have the complete set of statistics, which can
span the full range of possible values.
Now, as for conjunctions, there are different ways of doing it. To
directly implement the method implied by our 'Conjunction Null' work,
do the following: Use ImCalc to create a minimum t image
(e.g. evaluate 'min(i1,i2,i3)'). Submit the minimum t image to FDRill
as if it were a single t image; this will give you inferences on the
'Conjunction Null', that is, a significant result is evidence that all
effects are real.
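The ImCalc step amounts to a voxelwise minimum across the t images.
A stand-in sketch of the 'min(i1,i2,i3)' evaluation, with images
represented as flat lists of voxel values (the function name is mine,
for illustration only):

```python
def min_t_image(*t_images):
    """Voxelwise minimum across t images, each a flat list of values.

    Mirrors the ImCalc expression 'min(i1,i2,i3)': the conjunction
    statistic at each voxel is the smallest t value over the effects.
    """
    return [min(vals) for vals in zip(*t_images)]
```

The resulting minimum-t image is then submitted to FDRill as if it
were an ordinary single t image.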
Now, for the hard question...
> (3) I have run a conjunction analysis using the Conjunction null
> hypothesis (spm2_conj) and although the individual contrasts survive
> FDR correction at 0.05 (t = 2.77; t= 2.79 respectively), the
> conjunction isn't significant (at FDR 0.05) and returns an infinite
> threshold (which by the way is a real bummer because this is my
> executive function analysis). However, when I used multiFDR across
> these two comparisons, I got an FDR threshold of 2.88 across the two
> comparisons. When I redid the conjunction analysis, again using the
> Conjunction null, but instead of specifying the FDR threshold, I
> specified (at p-value adjustment to control) none and used the t-value
> obtained from multiFDR. With this analysis I got the result that I was
> expecting, which was similar to the results for the Conjunction using
> an FDR of 0.01. Do think this is dodgy or do you think it is possibly
> justifiable?
The best method for conjunction inference with FDR control is still an
area of active research. You have described three methods, all of
which seem reasonable:
1. Make conjunction P-values by assessing min statistic t image as a
single t image; submit conjunction P-values to usual FDR
procedure, find the level q FDR threshold. (This corresponds to the
FDRill/ImCalc method described above.)
2. Make FDR P-values for each effect; take the maximum FDR P-value
over effects, threshold at q. (This is equivalent to
thresholding each effect image at its own FDR q, then taking the
intersection of the suprathreshold regions).
3. Use multFDR to find FDR P-values for all effects jointly, then
proceed as in method #2.
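For concreteness, here is a sketch of method #2 in plain Python, using
the standard Benjamini-Hochberg step-up adjustment (function names are
mine; this is an illustration, not FDRill or multFDR code):

```python
def bh_adjusted(pvals):
    """Benjamini-Hochberg FDR-adjusted P-values (step-up procedure)."""
    V = len(pvals)
    order = sorted(range(V), key=lambda i: pvals[i])
    adj = [0.0] * V
    prev = 1.0
    # Walk from the largest P-value down, enforcing monotonicity.
    for rank in range(V, 0, -1):
        i = order[rank - 1]
        prev = min(prev, pvals[i] * V / rank)
        adj[i] = prev
    return adj

def conjunction_method2(pvals_a, pvals_b, q=0.05):
    """Method #2: adjust each effect separately, take the max FDR
    P-value over effects, and threshold at q."""
    adj_a = bh_adjusted(pvals_a)
    adj_b = bh_adjusted(pvals_b)
    return [max(a, b) <= q for a, b in zip(adj_a, adj_b)]
```

As noted above, this is equivalent to thresholding each effect image
at its own FDR q and intersecting the suprathreshold regions.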
Jonathan Taylor has pointed out that the multFDR procedure should be
used with caution. Since FDR is controlled over all images jointly, a
signal with large extent in one image could lead to lots of false
positives in another image.
If we avoid method #3, then that leaves #1 & #2. I know that method
#1 is valid, though it may have poor sensitivity. My intuition is that #2
is also valid (possibly under suitable assumptions), but I don't as
yet have a result. For what it's worth, my colleagues and I have
published a chapter [1] using approach #2.
> Is it possible that I could justify this method arguing (as you have
> discussed before) that I'm just choosing a threshold that controls
> FDR at or below 0.05 on all contrasts. Thus, the threshold controls
> FDR over the two contrasts in the conjunction analysis at 0.05?
> If you do think this is dodgy... how justifiable is an FDR of 0.1?
Er, well, I prefer to reserve the term 'dodgy' for things that I
believe to be wrong. You can justify #2 on its face validity, but
since I don't have a proof, that's all I can offer right now.
As for an FDR of 0.1, there's nothing magical about 0.05 or 0.01 or
0.1... it's whatever false positive rate you can tolerate. If only a
small number of voxels are found, 0.1 may be acceptable, though if
many voxels are found, 0.1 may allow far too many false positives.
Hope all of this helps. Again, sorry for the extended delay.
-Tom
-- Thomas Nichols -------------------- Department of Biostatistics
http://www.sph.umich.edu/~nichols University of Michigan
[log in to unmask] 1420 Washington Heights
-------------------------------------- Ann Arbor, MI 48109-2029
[1] J Jonides, CY Sylvester, SC Lacey, TD Wager, TE Nichols, and E Awh.
``Modules of Working Memory'', pages 113-134 in Principles of
Learning and Memory, RH Kluwe, G Luer, and F Rosler, Eds.
Birkhauser Verlag, Basel, 2003.