Dear Nora,

If there are no nuisance variables in your model, the permutation can be applied quite straightforwardly by permuting the rows of the design matrix (e.g. regression or > 1 group) or by flipping the signs in a one-sample t-test. However, there is no standard method for how to permute in the presence of nuisance variables. If only the parameters of interest are permuted (while keeping the nuisance variables unpermuted), this method is called Draper-Stoneman, and it may not work best in all cases for designs with nuisance variables. A number of different methods have been introduced to deal with nuisance variables (Freedman-Lane, Smith, ter Braak, Kennedy, Still-White, Manly, Huh-Jhun, ...), and they differ in terms of power and type I error rate. An excellent overview of this and the background of permutation strategies is given in:
  Winkler et al. Permutation inference for the general linear model.
  https://doi.org/10.1016/j.neuroimage.2014.01.060
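
To make the two basic schemes a bit more concrete, here is a minimal NumPy sketch with made-up data (just an illustration, not the toolbox code):

import numpy as np

rng = np.random.default_rng(0)
n = 20
y = rng.normal(size=n)                                   # data vector (e.g. one voxel)

# Designs without nuisance variables (regression or > 1 group):
# permute the rows of the design matrix.
X = np.column_stack([np.ones(n), rng.normal(size=n)])    # intercept + covariate
perm = rng.permutation(n)
X_perm = X[perm, :]                                      # design for one permutation

# One-sample t-test: flip the signs of the data instead.
signs = rng.choice([-1, 1], size=n)
y_flip = signs * y                                       # data for one permutation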

In that paper, the Freedman-Lane and Smith methods showed the best trade-off between power and type I error rate. However, as you and others have noticed, Freedman-Lane sometimes shows a somewhat strange behavior depending on the nuisance variables and their correlation with your parameters of interest. I had thought that by automatically selecting the "better" method (Draper-Stoneman or Freedman-Lane) I could prevent these issues, but I am not really happy with that approach either.
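
For illustration, this is roughly how I would sketch the Freedman-Lane scheme in NumPy (made-up data, not the actual TFCE toolbox code):

import numpy as np

rng = np.random.default_rng(1)
n = 30
X = rng.normal(size=(n, 1))                              # regressor of interest
Z = np.column_stack([np.ones(n), rng.normal(size=n)])    # nuisance regressors
y = rng.normal(size=n)

# Fit the reduced (nuisance-only) model and keep its residuals.
gamma_hat = np.linalg.lstsq(Z, y, rcond=None)[0]
resid_z = y - Z @ gamma_hat

# One permutation: shuffle the nuisance residuals, add the nuisance fit back,
# then refit the full model to the permuted data and take the statistic for X.
perm = rng.permutation(n)
y_star = Z @ gamma_hat + resid_z[perm]
M = np.column_stack([X, Z])
beta_star = np.linalg.lstsq(M, y_star, rcond=None)[0]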

I have now implemented the Smith method in the newest TFCE version; it works more stably and is used by default. If no nuisance parameter is found, Draper-Stoneman is selected automatically. Hopefully this solves your issue. Freedman-Lane is kept for compatibility purposes for a while, but I would not recommend using it in general.
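
As a rough sketch of how this selection works (illustration only, the actual toolbox code differs):

import numpy as np

def permuted_design(X, Z, perm):
    # Smith scheme if nuisance regressors exist, otherwise plain
    # Draper-Stoneman row permutation (sketch only, not the toolbox code).
    if Z is not None and Z.shape[1] > 0:
        # Smith: orthogonalize X with respect to Z, permute the
        # orthogonalized regressors of interest, keep Z unpermuted.
        X_res = X - Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
        return np.column_stack([X_res[perm, :], Z])
    # Draper-Stoneman without nuisance: permute the rows of X only.
    return X[perm, :]

rng = np.random.default_rng(2)
n = 30
X = rng.normal(size=(n, 1))                              # regressor of interest
Z = np.column_stack([np.ones(n), rng.normal(size=n)])    # nuisance regressors
M_perm = permuted_design(X, Z, rng.permutation(n))       # design for one permutation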

Regarding the strange thresholds for uncorrected and corrected results: this can happen because for uncorrected thresholds the whole null distribution is used to find the threshold, whereas for corrected thresholds the null distribution of the maximum statistic (i.e. only the maximum value of each permutation) is used. In any case, I would recommend using TFCE only with correction for multiple comparisons, because that is the real strength of the permutation method. You can also apply a small volume correction (SVC) by defining an external mask.
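
As a toy illustration of this difference with random numbers (not the toolbox code):

import numpy as np

rng = np.random.default_rng(3)
n_perm, n_vox = 1000, 500
null = rng.normal(size=(n_perm, n_vox))      # statistic for each permutation and voxel
observed = rng.normal(size=n_vox)            # observed statistic per voxel

# Uncorrected: each voxel is compared against the whole null distribution.
p_unc = (null >= observed).mean(axis=0)

# FWE-corrected: each voxel is compared against the distribution of the
# maximum statistic across the image (one maximum per permutation).
max_null = null.max(axis=1)
p_fwe = (max_null[:, None] >= observed).mean(axis=0)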

Sorry for the rather long and technical explanation, but this hopefully makes it clearer for you.

Best,

Christian

On Wed, 29 Nov 2017 09:59:13 +0000, Raschle, Nora <[log in to unmask]> wrote:

>Dear experts
>
>
>
>I have run an fMRI group comparison (2T) which resulted in
>
>1. (attached image on left): 2 significant clusters at a cluster-forming p<0.001 and FWE cluster correction of p<0.05.
>
>2. (attached image in middle): Using the TFCE Toolbox, the same clusters and more are reported significant at TFCE p<0.05
>
>3. This week, I downloaded and updated SPM12 and also updated the TFCE toolbox (according to the download link from November 7th; attached image to the right) and ran this version on the same data. When plotting the results, the whole brain is showing up as significant at TFCE p<0.05.
>
>
>
>For both TFCE options, I used the SPM12 GUI to plot the results (select TFCE Toolbox > TFCE > Results > Type of Statistics TFCE > Original contrast > FWE 0.05 correction). If I do the exact same procedure but choose a less stringent correction (TFCE > None adjustment > p<0.001), the resulting statistics show no suprathreshold findings.
>
>
>
>While the second outcome was somewhat in the direction of what I expected from the literature on TFCE, I can't explain the results based on the newest download. Does anyone have an idea of what might be going on and how I should proceed (e.g. check the SPM installation, redo the TFCE install, or look for any updates I missed)?
>
>
>
>And as a side question, does the number of covariates in the SPM fMRI model of the group comparison impact the TFCE permutation in any way?
>
>
>
>Thanks a lot for the help!
>
>Nora
>
>
>