Hi Danilo,
You will probably find these slides helpful, if you haven't already seen
them:
http://www.sph.umich.edu/~nichols/FDR/SMU2004.ppt
I have been researching the same issue myself -- specifically when it is
okay to use C(V) = 1 -- or C(N) = 1 in my case, where instead of voxels
I'm interested in nodes on a surface. On
http://www.sph.umich.edu/~nichols/FDR/index.html#Slides in the FDR.m
description, Tom Nichols says:
"For imaging data, an assumption of positive dependence is reasonable,
so it should be OK to use the first (more sensitive) threshold."
But I'm actually interested in anatomical data (sulcal depth -- kind of
like Freesurfer's convexity) -- not fMRI. The Genovese, Lazar, and
Nichols (GLN) 2001 paper
(http://www.fil.ion.ucl.ac.uk/spm/doc/papers/GLN.pdf) says the positive
dependence condition holds when "the noise in the data is Gaussian with
nonnegative correlation across voxels"; there does seem to be positive
correlation in the depth *data* across nodes, but I'm not sure my noise
is as Gaussian as it needs to be (I haven't smoothed the variance).
When I try to read the Benjamini and Yekutieli 2001 paper cited in the
GLN paper, my reading of their "PRDS" (positive regression dependency
on a subset) property comes out differently. I also had a look at some
John Storey and Sarkar papers yesterday, but they were mostly over my
head.
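For concreteness, the practical difference between the two thresholds comes down to the constant c(V). Here is a minimal sketch (my own, not Tom's FDR.m) of the step-up procedure with both choices of c(V):

```python
def fdr_threshold(pvals, q=0.05, conservative=False):
    """Benjamini-Hochberg step-up FDR threshold at level q.

    conservative=False: c(V) = 1, valid under independence or positive
    dependence (Benjamini & Hochberg 1995; Benjamini & Yekutieli 2001).
    conservative=True: c(V) = sum_{i=1}^{V} 1/i, valid under arbitrary
    dependence (Benjamini & Yekutieli 2001), but less sensitive.

    Returns the p-value cutoff; tests with p <= cutoff are declared
    significant (0.0 means nothing survives).
    """
    p = sorted(pvals)
    V = len(p)
    c = sum(1.0 / i for i in range(1, V + 1)) if conservative else 1.0
    cutoff = 0.0
    for i, p_i in enumerate(p, start=1):
        # step-up rule: keep the largest p(i) with p(i) <= (i/V) * q / c
        if p_i <= (i / V) * q / c:
            cutoff = p_i
    return cutoff
```

With the same p-values, the conservative c(V) = sum(1/i) cutoff is never larger than the c(V) = 1 cutoff, which is why the first threshold is the "more sensitive" one in Tom's description.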
Currently I'm assessing interest in a neuroimaging multiple comparisons
/ thresholding mailing list. The idea is that it would be
algorithm-centric, rather than tool-centric (i.e., so you wouldn't have
to cross-post to the SPM, FSL, AFNI, Freesurfer, caret-users, and other
tool-centric lists). If such a list already exists, please so advise on
the SPM list. Otherwise, if you would be interested in joining one, let
me know.
Donna Hanlon
Van Essen Lab
On 12/06/2005 08:41 AM, danilo dongiovanni wrote:
> Dear list,
> I have some questions about FDR theory
> - Are there any assumptions on the test distribution in order to apply
> the FDR procedure to control for multiple comparisons?
> - In case there is some dependency between the FDR threshold estimate
> and the test distribution from which p-values are derived, what
> happens when the test distribution to threshold is very skewed or
> super-/sub-Gaussian?
>
> thanks for any help
> danilo