Dear all,
Apologies for bothering you again with F-test questions, but I'm still puzzled (see my previous posts, "randomise interaction F-test"), this time by a different issue.
After F-tests on 3 group comparisons gave me very different results from what I expected given previously performed planned t-contrasts, I got a little suspicious. So I ran both an F-test and a t-test on a single contrast using randomise with TFCE (on TBSS data), and the corrected and uncorrected p-values are completely different. The t-test gives uncorrected p-values as low as 0.0014 and corrected p-values of 0.0772, while the F-test gives a minimum uncorrected p of 0.0088 and a minimum corrected p of 1 (fslstats -R on this image returns 0.000000 for both minimum and maximum). This is despite the fact that squaring the raw tstat image with fslmaths -sqr confirmed that the raw t and F statistics agree in every voxel (F = t²). And since I'm testing only 1 contrast with the F-test, the two tests should also have the same degrees of freedom.
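For anyone who wants to reproduce the equivalence I'm relying on: with a single 1-df contrast, the F statistic from the model comparison should equal the squared t statistic of the contrast. A minimal sketch (synthetic numpy data standing in for the actual tstat/fstat images, not my real design):

```python
import numpy as np

# Hypothetical two-group design: intercept + group indicator.
rng = np.random.default_rng(1)
n = 40
group = np.repeat([0.0, 1.0], n // 2)
X = np.column_stack([np.ones(n), group])
y = 0.5 * group + rng.normal(size=n)

# t statistic for the group coefficient (contrast c = [0, 1]).
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
dof = n - X.shape[1]
sigma2 = resid @ resid / dof
c = np.array([0.0, 1.0])
var_c = sigma2 * c @ np.linalg.inv(X.T @ X) @ c
t = (c @ beta) / np.sqrt(var_c)

# F statistic from comparing the full model against the
# reduced (intercept-only) model; 1 numerator df here.
resid0 = y - y.mean()
rss0, rss1 = resid0 @ resid0, resid @ resid
F = (rss0 - rss1) / (rss1 / dof)

# For a single contrast the two agree exactly: F = t^2.
print(F, t ** 2)
```

So the raw statistics matching voxelwise is exactly what one would expect; the puzzle is only in the permutation-corrected p-values.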
So I'm wondering what is going on... Am I consistently doing something wrong? I wouldn't know how, since both analyses were run with the same design files and a single command each. Could it be that, because F-stats are not directionally specific, the random permutations produce larger clusters and thus higher thresholds? Or does TFCE behave differently because of this non-directionality of F-stats? I'm not sure how to compare the TFCE statistics between the two methods, but it would be informative if I could test their correspondence in the same way as I did with the raw statistics. Finally, could it be the version of randomise I'm using? It's randomise v2.1 in FSL 4.1.1.
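To illustrate the first hypothesis (non-directionality inflating the null): for a 1-df contrast, the F statistic is effectively |t|², so the max-statistic permutation null for the F-test behaves like a two-sided max-|t| null, which is stochastically larger than the one-sided max-t null and therefore yields higher thresholds and larger corrected p-values. A toy simulation (purely synthetic, no real images or TFCE):

```python
import numpy as np

# Pretend each row is a t-statistic map from one null permutation.
rng = np.random.default_rng(2)
n_perm, n_vox = 1000, 500
null_t = rng.normal(size=(n_perm, n_vox))

max_t = null_t.max(axis=1)            # one-sided max statistic (t-test)
max_abs = np.abs(null_t).max(axis=1)  # two-sided, F-like max statistic

# |t| >= t everywhere, so the two-sided null dominates in every
# permutation and its 95th-percentile threshold is at least as high.
print(np.quantile(max_t, 0.95), np.quantile(max_abs, 0.95))
```

This doesn't by itself explain a jump from corrected p = 0.08 to p = 1, but it shows the direction of the effect I'm asking about.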
Best wishes,
Emma