Hi Han-Gyol,

Sorry, I forgot a detail: F-tests with TFCE don't have the usual F-test interpretation, that is, being significant if any of their constituent t-contrasts is significant.

When TFCE is used with F-tests, the support area is entirely positive throughout the map, and the values never cross zero towards the negative side, as they normally do with the t-statistic. This changes things, as the support area is completely different. The consequence is that F-tests can no longer be used (with TFCE) to guard against false positives due to the multiplicity of t-tests (as they can in a one-way ANOVA).

In fact, the best thing to do in your case seems to be to use Bonferroni over the t-tests, as I mentioned earlier, and explain to the reviewer that their request (F-test + TFCE) won't help the manuscript. Surely he/she will be satisfied with Bonferroni, as it's the most conservative, yet valid, way of correcting, and it's absolutely fine in your case (i.e., not excessively conservative, given the contrasts you have).
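
Just to illustrate (a rough sketch only; the file name here follows the -o $OUTPUT prefix from your randomise call below): with 3 t-tests the Bonferroni level is 0.05/3, so instead of thresholding the corrected 1-p maps at 0.95, you would threshold them at 1 - 0.05/3, which is about 0.9833, e.g.:

fslmaths ${OUTPUT}_tfce_corrp_tstat1 -thr 0.9833 -bin ${OUTPUT}_sig_bonferroni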

You can still use TFCE with the t-tests, and/or use F-tests as usual (without TFCE then).
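
For instance (again just a sketch based on your command below), for the F-test without TFCE you could drop the -T and, if I recall the options correctly, ask for voxelwise corrected p-values with -x instead:

randomise -i $SECONDLV/$COPE1.gfeat/cope1.feat/filtered_func_data -o $OUTPUT -d $DESIGN/design.mat -t $DESIGN/design.con -f $DESIGN/design.fts -1 -x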

All the best,

Anderson




On 2 July 2014 19:01, Han-Gyol Yi <[log in to unmask]> wrote:
Hi,

I tried the approach you described earlier. I ran 3 separate 2nd-level analyses, each taking 22 inputs (since we have 22 participants): the cope1.nii.gz, cope3.nii.gz, or cope5.nii.gz files from the individual subjects. I had one contrast [1] and one F-test [C1]. After they finished running, I ran randomise with an identical GLM design:

randomise -i $SECONDLV/$COPE1.gfeat/cope1.feat/filtered_func_data -o $OUTPUT -d $DESIGN/design.mat -t $DESIGN/design.con -f $DESIGN/design.fts -1 -T

This gave me 6 files for each 2nd level:

fstat1
tfce_p_fstat1
tfce_corrp_fstat1
tstat1
tfce_p_tstat1
tfce_corrp_tstat1

All but one of these files have a "normal" range of values, for lack of a better word. However, tfce_corrp_fstat1, the only output I am interested in at the moment, contains all zeros when checked with fslstats (or fslview). This feels pretty unnatural, since, again, the raw F-stat and the p-values for the F-stat have normal ranges. I would have at least expected a distribution of very low (1-p) values.

So my question is: is this kind of behavior expected from randomise when it's working with F-stats?

Thank you,
Han-Gyol Yi