Dear Mike,
On 14/07/16 09:31, Mike wrote:
> Thanks for everyone's replies. However, I believe that many
> researchers who use fMRI analysis software do not have a firm
> statistical background, just like me. For practical reasons, we need
> a "guideline," if any, to control the multiple comparisons problem.
> Concerning cluster-wise thresholding, below is what I would follow
> according to Woo et al., 2014 and the recent cluster failure paper in
> PNAS, but I hope some experts here can comment a bit.
>
> (1). For SPM and AFNI 3dClustSim users, the initial arbitrary
> cluster-forming threshold (CFT) should not be too lenient:
> 0.001 is good, but 0.01 is definitely poor (I have no idea whether
> 0.005 is OK). Then you can report clusters that survive a
> FWE-corrected p<0.05 at the cluster-wise level (but can I report
> FDR-corrected p<0.05 instead?).
For cluster-level inference, the cluster-forming threshold has to be
high enough for the approximations from random field theory to be
accurate. In practice, high enough means p<0.001 uncorrected (i.e.
SPM's default in the interface).
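To make the logic concrete, here is a minimal sketch of cluster-extent inference, using a permutation-style null of the maximum cluster size rather than SPM's random field theory machinery. Everything is synthetic and assumed for illustration: the image shape, the smoothing sigma, the number of null simulations, and the observed cluster size `k_observed` are all made-up numbers, and a real analysis would derive the null from the actual data/design.

```python
# Sketch of cluster-extent FWE inference on synthetic data (NOT SPM's
# RFT implementation): threshold a z-map at an uncorrected CFT of
# p < 0.001, find connected clusters, and compare an observed cluster's
# size against the null distribution of the maximum cluster extent.
import numpy as np
from scipy import ndimage, stats

rng = np.random.default_rng(0)
shape = (40, 40, 40)                      # hypothetical search volume
z_cft = stats.norm.isf(0.001)             # cluster-forming threshold, z ~ 3.09

def max_cluster_size(zmap, z_thr):
    """Largest connected suprathreshold cluster (default 6-connectivity)."""
    labels, n = ndimage.label(zmap > z_thr)
    if n == 0:
        return 0
    return int(np.bincount(labels.ravel())[1:].max())

def smooth_noise():
    """One smooth noise-only map, re-standardised to unit variance."""
    noise = ndimage.gaussian_filter(rng.standard_normal(shape), sigma=2.0)
    return noise / noise.std()

# Null distribution of the maximum cluster extent under smooth noise
# (200 simulations here for speed; a real analysis would use far more).
null_max = [max_cluster_size(smooth_noise(), z_cft) for _ in range(200)]

# FWE-corrected p-value for a hypothetical observed cluster of 50 voxels.
k_observed = 50
p_fwe = np.mean(np.asarray(null_max) >= k_observed)
print(f"cluster of {k_observed} voxels: FWE-corrected p = {p_fwe:.3f}")
```

The key point the sketch illustrates is that the CFT fixes which clusters exist at all, and the corrected inference is then on cluster extent relative to the maximum-statistic null.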
> (2). The commonly used "P = 0.001 uncorrected with a k of 10 voxels"
> should be abandoned (but it seems that many people still use it...).
This thresholding strategy does not control the family-wise error rate
of anything. As Chris mentioned, it is sometimes used for illustration
purposes in publications but the associated inference results table
should only list p-values of features (peaks, clusters) surviving
multiple testing correction (either full brain or within a small volume;
with the convention that p_corr<0.05 is the criterion for significance).
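One can see directly why "p = 0.001 uncorrected with k = 10" controls nothing, with a small Monte Carlo on noise-only maps. The image shape, smoothing, and number of simulations below are illustrative assumptions; the point is only that the family-wise false positive rate of this rule is far above 0.05 in smooth images.

```python
# Simulate noise-only smooth maps and apply the ad hoc rule
# "voxel-wise p < 0.001 uncorrected AND cluster extent k >= 10":
# count how often at least one "significant" cluster appears by chance.
import numpy as np
from scipy import ndimage, stats

rng = np.random.default_rng(1)
shape = (40, 40, 40)              # hypothetical search volume
z_thr = stats.norm.isf(0.001)     # "p = 0.001 uncorrected"
k_min = 10                        # "k of 10 voxels"

n_sims, n_false_pos = 100, 0
for _ in range(n_sims):
    noise = ndimage.gaussian_filter(rng.standard_normal(shape), sigma=2.0)
    noise /= noise.std()          # re-standardise after smoothing
    labels, n = ndimage.label(noise > z_thr)
    sizes = np.bincount(labels.ravel())[1:] if n else np.array([0])
    if (sizes >= k_min).any():    # at least one spurious "finding"
        n_false_pos += 1

fwe_rate = n_false_pos / n_sims
print(f"family-wise false positive rate under pure noise: {fwe_rate:.2f}")
```

With smoothness comparable to typical fMRI data, nearly every noise-only map produces at least one cluster passing this rule, which is exactly why such tables must instead report properly corrected p-values.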
> Besides, I have a naive question: since cluster-extent based
> thresholding might be more problematic, why don't we just stick to
> voxel-wise thresholding?
A paper discussing the various levels of inference is the following:
http://www.ncbi.nlm.nih.gov/pubmed/9345513
and the comment quoted below can be found here:
http://www.scholarpedia.org/article/Statistical_parametric_mapping_%28SPM%29
> One usually observes that set-level inferences are more powerful than
> cluster-level inferences and that cluster-level inferences are
> generally more powerful than peak-level inferences. The price paid
> for this increased sensitivity is reduced localizing power.
> Peak-level tests permit individual maxima to be identified as
> significant features, whereas cluster and set-level inferences only
> allow clusters or sets of clusters to be identified. Typically,
> people use peak-level inferences and a spatial extent threshold of
> zero. This reflects the fact that characterizations of functional
> anatomy are generally more useful when specified with a high degree
> of anatomical precision.
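For comparison, voxel-wise (peak-level) FWE control is conceptually the simplest of the three levels: correct the per-voxel threshold for the number of tests. The sketch below uses Bonferroni, which is valid but conservative for smooth images (SPM's peak-level correction uses random field theory instead); the search-volume size is an assumed round number.

```python
# Bonferroni-corrected voxel-wise threshold for a hypothetical search
# volume. This is NOT what SPM reports (SPM uses RFT, which is less
# conservative for smooth data), but it shows the idea of peak-level FWE.
from scipy import stats

n_voxels = 64000                   # assumed number of in-mask voxels
alpha = 0.05                       # desired family-wise error rate

p_bonf = alpha / n_voxels          # per-voxel p threshold
z_bonf = stats.norm.isf(p_bonf)    # corresponding one-sided z threshold
print(f"Bonferroni: p < {p_bonf:.2e} per voxel, i.e. z > {z_bonf:.2f}")
# z_bonf is roughly 4.80 for these numbers
```

A peak surviving such a threshold is significant at that exact voxel, which is the "high degree of anatomical precision" the quoted passage refers to; cluster-level tests trade that precision for sensitivity.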
Best regards,
Guillaume.
--
Guillaume Flandin, PhD
Wellcome Trust Centre for Neuroimaging
University College London
12 Queen Square
London WC1N 3BG