Hi Kumar,

If the statistical comparisons are made on the FA maps, the residuals used for the smoothness estimation need to come from that same model, i.e., the GLM fitted to the FA data (see the command smoothest).
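For reference, a minimal sketch of how smoothest is typically invoked on the GLM residuals (the filenames res4d.nii.gz and mask.nii.gz, and the degrees of freedom, are assumptions for illustration; substitute your own):

```shell
# Estimate smoothness (FWHM, resels) from the 4D residuals of the GLM
# fitted to the FA data, within the analysis mask.
# -d : error degrees of freedom of the model (hypothetical value here)
# -r : 4D residuals image from the GLM (assumed filename)
# -m : brain/analysis mask (assumed filename)
smoothest -d 38 -r res4d.nii.gz -m mask.nii.gz
```

The output gives the smoothness estimates (DLH, VOLUME, RESELS) that RFT-based cluster inference depends on.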

However, even if you compute the smoothness correctly, there are still issues:

1) If you are doing a VBM-style analysis, this has two problems:
- VBM-style analyses of FA data have a number of issues, and these were among the motivations for the development of TBSS;
- Although I have never checked this myself, there are reasons to suspect that there is a considerable degree of non-stationarity in FA maps, which would call for non-stationarity correction, especially since you are interested in cluster-level results (see Hayasaka et al (2004), doi:10.1016/j.neuroimage.2004.01.041 -- there is a Matlab toolbox called "ns" based on that paper).

2) If you are doing a TBSS analysis, RFT is inappropriate, as the projection of the data onto the skeleton breaks the assumption that the map (under the null) is a good lattice representation of a continuous underlying random field. The solution is to use permutation methods, as available in randomise.
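A sketch of the standard permutation-based inference on the TBSS skeleton (the filenames all_FA_skeletonised.nii.gz, mean_FA_skeleton_mask.nii.gz, design.mat and design.con follow the usual TBSS conventions but are assumptions here; adjust to your own study):

```shell
# Permutation inference on the skeletonised FA data with randomise,
# avoiding RFT assumptions entirely.
# -i  : 4D skeletonised FA data (one volume per subject)
# -m  : skeleton mask
# -d/-t : GLM design and contrast files
# -n  : number of permutations
# --T2 : TFCE optimised for the 2D-like skeleton (recommended for TBSS)
randomise -i all_FA_skeletonised.nii.gz -o tbss \
          -m mean_FA_skeleton_mask.nii.gz \
          -d design.mat -t design.con \
          -n 5000 --T2
```

The resulting *_tfce_corrp_tstat* images contain (1 - p) values, corrected for multiple comparisons across the skeleton.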

So, my suggestions are: (1) follow the guidelines for TBSS, and/or (2) don't use RFT unless you do non-stationarity correction.

All the best,

Anderson



2013/4/11 M Kumar <[log in to unmask]>
Dear FSL Experts,

I would be much obliged if you could provide me with leads on the following questions:

1. I'm trying to use Random Field Theory to correct for multiple comparisons in a DTI dataset. I believe that the smoothness estimates (FWHM) would need to be computed from the residuals volume (likely dti_sse.nii.gz output from dtifit).

2. However, my statistical comparisons are on the FA (Fractional Anisotropy) volume. There appears to be a tremendous difference between the FWHM values computed on the FA volume versus the residuals, even when restricted to the white matter tracts.

Perhaps one should compute the FWHM on the FA volume only?

Any leads would be most welcome and appreciated.

Thank you,

Kumar.