Hi Matthew,

I think I'm missing something about how TFCE and permutations are combined. For example, to evaluate the significance of a decoding accuracy, we usually shuffle the labels of the input data and retrain the classifier a number of times, building a null distribution of accuracies to compare against the actual accuracy. However, I'm not sure how TFCE handles this process, because all I passed to randomise were the 24 balanced-accuracy maps (one per subject). How is the null distribution built in this case? What exactly is shuffled?
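Just so we're on the same page, this is the kind of label-shuffling procedure I have in mind — a minimal sketch with toy random data and a simple nearest-centroid classifier, purely illustrative and not my actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-in for one subject's decoding problem (hypothetical data):
# 80 training trials, 40 test trials, 5 features, 2 classes.
X_train = rng.normal(size=(80, 5))
y_train = rng.integers(0, 2, size=80)
X_test = rng.normal(size=(40, 5))
y_test = rng.integers(0, 2, size=40)

def nearest_centroid_accuracy(X_tr, y_tr, X_te, y_te):
    """Train a nearest-centroid classifier and return test accuracy."""
    centroids = np.stack([X_tr[y_tr == c].mean(axis=0) for c in (0, 1)])
    dists = np.linalg.norm(X_te[:, None, :] - centroids[None, :, :], axis=2)
    pred = dists.argmin(axis=1)
    return (pred == y_te).mean()

true_acc = nearest_centroid_accuracy(X_train, y_train, X_test, y_test)

# Null distribution: shuffle the training labels and retrain each time.
n_perm = 500
null_accs = np.empty(n_perm)
for i in range(n_perm):
    y_shuffled = rng.permutation(y_train)
    null_accs[i] = nearest_centroid_accuracy(X_train, y_shuffled, X_test, y_test)

# One-sided permutation p-value (with the usual +1 correction).
p_value = (1 + np.sum(null_accs >= true_acc)) / (1 + n_perm)
```

That is, the shuffling happens at the level of trial labels within each subject's classification — which is why I don't see what the equivalent shuffled quantity is once only the 24 group-level accuracy maps enter randomise.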
Thanks in advance,