Dear BettyAnn,
This is a standard problem with cluster-based corrections:
there is always an arbitrary cluster-forming threshold, and
changing it alters the clusters you see in a non-straightforward
way.
We would not, however, recommend trying lots and lots
of different thresholds until you get something you "like" -
that is a bit of a fishing expedition and not statistically sound.
An alternative is to use the other thresholding options in
randomise (i.e. doing your statistics with permutation testing
instead of the standard parametric inference in FEAT).
Because randomise is non-parametric (permutation-based),
it can implement more sophisticated corrections, including
cluster-mass-based thresholding (which takes into account the
magnitude of the voxel-wise statistics above the threshold, not
just the number of voxels) and TFCE (Threshold-Free Cluster
Enhancement).
Have a look at the randomise webpage for more information:
http://www.fmrib.ox.ac.uk/fsl/randomise/index.html
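For example, a randomise run might look like the sketch below (the input, design.mat and design.con names are placeholders for your own files; adjust the forming threshold and permutation count as appropriate):

```shell
# TFCE-corrected inference (no arbitrary cluster-forming threshold needed):
randomise -i filtered_func_data -o grp -d design.mat -t design.con -T -n 5000

# Or cluster-mass-based correction, with a z=2.3 forming threshold:
randomise -i filtered_func_data -o grp -d design.mat -t design.con -C 2.3 -n 5000
```

The `-T` (TFCE) option is usually the recommended one precisely because it removes the need to pick a forming threshold at all; the corrected p-value images come out as 1-p maps (e.g. grp_tfce_corrp_tstat1), so voxels above 0.95 are significant at corrected p < 0.05.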
Given your FDR results, it sounds like your activations
are quite near the edge of statistical significance, so using
one of the more sophisticated/accurate methods above
(Gaussian Random Field Theory - the one used for cluster
correction in FEAT - is known to be approximate) will
hopefully help you out.
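For reference, the GRF-corrected route you describe below (running 'cluster' standalone) needs the smoothness estimates that FEAT normally supplies; a rough sketch, with <DLH> and <VOLUME> standing in for the values printed by smoothest:

```shell
# Estimate smoothness (DLH) and mask volume from the zstat image:
smoothest -z zstat1 -m mask

# Then GRF-corrected cluster thresholding at forming threshold z=2.3,
# cluster-level p<0.05, using the values printed above:
cluster -i zstat1 -t 2.3 -p 0.05 --dlh=<DLH> --volume=<VOLUME> \
  --othresh=thresh_zstat1
```

Without the --dlh/--volume/--pthresh options, 'cluster' just lists clusters above the forming threshold with no multiple-comparisons correction, as you suspected.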
All the best,
Mark
On 23 Jun 2011, at 14:08, bettyann wrote:
> Mark, thanks for your reply; greatly appreciated. I understand the
> scenario you're outlining; I'll check if my dataset follows this.
>
> But this leads to a different sort of confusion for me. How do I pick
> an optimal -- but objective -- threshold value? It did not occur
> to me to *increase* the zthresh to see more significant clusters.
>
> It feels as if I could just adjust the zthresh up and down until I
> get some clusters that I like.
>
> The way I found these clusters in the first place was I used 'cluster'
> without the --dlh --volume --pthresh option; I only used the --zthresh
> option.
>
> Am I correct in thinking that 'cluster' without the --dlh --volume
> --pthresh options gives me uncorrected thresholding, ie, no correction
> for multiple comparisons?
>
> Without these options, I got lots of clusters -- some quite small,
> not surprisingly. But some clusters seemed 'large enough' and,
> importantly, relevant. So when I realized I could do FWE+clustering
> correction with 'cluster', I tried that. And only 1 cluster survived.
> Until I arbitrarily increased the zthresh and then another interesting
> cluster showed up, too. This was nice ... but spooky.
>
> One interpretation of this chain of events is that I can find all
> sorts of clusters (and garbage) by thresholding with no correction.
> I can then try to find the larger/more robust clusters using FWE+clustering
> correction with whatever zthresh makes them appear, and then report
> the p-value given by the FWE+clustering results. Is that legitimate?
> I don't want to appear as if I'm fishing.
>
> Is this just a known characteristic of FWE+clustering? Or is there
> some additional step / theory / calculation I can use to choose a
> zthresh that is both optimal and objective?
>
> I did try to use the FDR algorithm, but our data is much too smooth,
> I think. We apply a low-pass filter to the data.
>
> I followed the FDR webpage (thank you):
> http://www.fmrib.ox.ac.uk/fsl/randomise/fdr.html
>
> I end up with a single voxel:
>
> % ttologp -logpout logp1 varcope1 cope1 `cat dof`
> % fslmaths logp1 -exp p1
> % fdr -i p1 -m ../mask -q 0.05
> Probability Threshold is:
> 4.09127e-08
>
> % fslmaths p1 -mul -1 -add 1 -thr 0.999999959087299994386910384491784 \
> -mas ../mask thresh_1_minus_p1
>
> % fslstats thresh_1_minus_p1 -R
> 0.000000 1.000000
> % fslstats thresh_1_minus_p1 -V
> 1 8.000000
>
> Thanks again,
> - BettyAnn
>