Hi, Philipp,
> I am doing a combined analysis of T1 (grey and white matter)
> and DTI (FA and mean diffusivity = MD; n = 2 x 20) data. For
> DTI I used the nonparametric toolbox (version 2, SPM2) with
> 1000 permutations, no variance smoothing, and FDR at the voxel
> level to control for multiple testing. The model is a group
> comparison with a covariate of no interest.
>
> Questions:
>
> 1. Does the spm_filtered.img represent the T-map
> thresholded acc. to the
> specified 'q' in the FDR routine?
Since you were using SnPM, I assume you meant
'SnPMt_filtered.img'. If you used an FDR threshold, then the
short answer to your question is yes: the image represents the
T-map thresholded at the specified FDR level. More specifically,
when you specify an FDR threshold, SnPM finds the corresponding
uncorrected P-value threshold and uses that as the actual
threshold for inference. So when SnPMt_filtered.img was created,
the threshold actually applied was that uncorrected P-value
threshold (i.e. SnPMt_filtered.img contains the t values of all
voxels whose uncorrected P-values are smaller than the
uncorrected threshold implied by the FDR level q).
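The q-to-threshold mapping is the Benjamini-Hochberg step-up
rule; here is a minimal sketch in Python/NumPy (an illustration
of the standard BH rule, with the assumption that this is what
SnPM's FDR option implements; `pvals` is a hypothetical array of
voxelwise uncorrected P-values):

```python
import numpy as np

def fdr_threshold(pvals, q):
    """Benjamini-Hochberg step-up: the uncorrected P-value threshold
    is the largest sorted p_(i) satisfying p_(i) <= (i/m) * q.
    Returns 0.0 if no voxel passes."""
    p = np.sort(np.asarray(pvals, dtype=float))
    m = p.size
    passed = p <= np.arange(1, m + 1) / m * q
    return float(p[passed.nonzero()[0][-1]]) if passed.any() else 0.0
```

For example, `fdr_threshold([0.01, 0.02, 0.03, 0.5], 0.05)`
returns 0.03, and the filtered image would then keep exactly the
voxels whose uncorrected P-value is at or below that threshold.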
> 2. FA analysis: I have attached the permutation-distribution
> and log-plot Matlab figures. At an FDR q of 0.005 (p = 0.002)
> a huge cluster appears; at q = 0.001 no voxel 'survives'.
From both the uncorrected P-value histogram and the P-P plot,
it is apparent that you have a very large (spatially extensive)
signal. The histogram shows a tremendous hump near zero (if the
null were true everywhere, it would look flat from 0 to 1), and
the P-P plot shows a tremendous downward departure from the blue
identity line (if the null were true everywhere, the red
P-values should follow right along the identity line).
Put another way, for a large portion of your data there appears
to be a lot of evidence that the null hypothesis is the wrong
hypothesis. Hence it is not surprising that you get very large
blobs that are significant.
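The histogram intuition can be reproduced with a toy simulation
in Python/NumPy (the Beta(0.1, 1) "signal" model and the voxel
counts are purely illustrative assumptions, not your data):

```python
import numpy as np

rng = np.random.default_rng(0)
# Null voxels: under H0, P-values are Uniform(0, 1),
# so their histogram is flat from 0 to 1.
null_p = rng.uniform(size=10_000)
# "Signal" voxels: P-values pile up near zero (toy Beta(0.1, 1)
# model), producing the hump near zero.
signal_p = rng.beta(0.1, 1.0, size=2_000)
counts, _ = np.histogram(np.concatenate([null_p, signal_p]),
                         bins=10, range=(0.0, 1.0))
# With signal present, the first bin dominates; with null_p
# alone, all ten bins would hover around 1,000.
```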
> So the window for q is very narrow - at higher thresholds,
> e.g. q = 0.01, Matlab (version 7) crashes (segmentation
> fault) - does this indicate a bug, or does it mean that all
> voxels are (would be) significant, and can the crash be
> prevented?
Well, it shouldn't crash. Make sure you have all of the SPM2
updates, in particular the new spm_cluster.m (version 2.4), and
that you have deleted the old, MEX-based spm_cluster. If you
have all the latest updates and it still crashes, can you try it
on another machine?
> 3. The lowest uncorrected p-value is 0.001. To resolve
> p-values below that, should one just use more permutations?
> Sorry if this is trivial.
>
Yes. With a nonparametric permutation procedure the P-values are
discretely valued, and all are multiples of 1/k, where k is the
number of possible permutations. So, yes, for more resolution of
small P-values, use more permutations.
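To see why the P-value floor is 1/k, here is a toy exact
two-sample permutation test in Python (an illustration, not
SnPM's implementation; the function name and the mean-difference
statistic are my own choices):

```python
import itertools
import numpy as np

def perm_pvalue(x, y):
    """Exact permutation test on the mean difference x - y.
    Returns (p, k): p is a multiple of 1/k, where k is the number
    of relabelings, so the smallest attainable P-value is 1/k."""
    data = np.concatenate([x, y])
    obs = np.mean(x) - np.mean(y)
    n, total = len(x), len(data)
    stats = []
    # Enumerate every way to relabel n of the observations as group x.
    for idx in itertools.combinations(range(total), n):
        mask = np.zeros(total, dtype=bool)
        mask[list(idx)] = True
        stats.append(data[mask].mean() - data[~mask].mean())
    k = len(stats)
    p = np.mean(np.asarray(stats) >= obs)
    return p, k
```

With n = 3 per group there are C(6, 3) = 20 relabelings, so no
P-value smaller than 1/20 = 0.05 can ever be reported; more
permutations (here, larger groups; in SnPM, more randomly
sampled relabelings) push that floor down.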
> 4. In addition, the sensitivity of FA and MD to detect group
> differences seems to differ - does it make sense to present
> the data at a common FDR threshold, or should the two
> modalities be treated separately with respect to multiple-test
> correction?
This is an inherent feature of FDR: it adapts to the amount of
signal in the data and hence will find different thresholds on
different datasets. You simply need to report what you did:
either use the same uncorrected threshold on each modality and
report the equivalent (different) FDR level for each, or use the
same FDR level on each and report the (different) uncorrected
threshold for each.
>
> 5. For 34 degrees of freedom is variance smoothing
> recommended for DTI data?
>
With 34 DF it shouldn't be needed. We see the most impact of
variance smoothing when the DF are less than 20.
-Tom & Jun
----------------------------
Jun Ding, Ph.D. student
Department of Biostatistics
University of Michigan
Ann Arbor, MI, 48105
----------------------------