Dear Chen,

> I have 2 questions concerning randomise inference and DTI analysis. In 
> their paper, Jones et al. (NeuroImage 2005;26:546-554) mention the 
> problem of different smoothing parameters and parametric statistical 
> inference.

The problem that Derek describes isn't really related to the issue of
variance smoothing. In his paper he simply shows (I think; I've only read
his poster on the same topic) that depending on what filter width you
choose (for smoothing the data, not just the variance) you will be
differentially sensitive to different sizes (extents) of
activations/differences. So, for example, if you choose a 20mm FWHM
filter you will be more sensitive to large, spatially extended
differences in FA than if you use 5mm.

> My questions are:
> 1. If I use a non-parametric approach and use variance smoothing under 
> randomise, is it still valid to overcome the problem of non-normally 
> distributed residuals? What is the effect of variance smoothing?

It is valid. The effect of variance smoothing is to make the data
distributed a little more like a normal distribution (rather than a low-df
t distribution), thereby increasing sensitivity for low-df data. There is
also a potential risk of losing some of the fine spatial detail of the
changes/differences, but in practice I hardly think that is an issue.
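As a sketch of how this is requested in practice (all file names below are placeholders, not from your analysis), variance smoothing in randomise is controlled with the -v flag:

```shell
# Hypothetical two-group FA comparison; file names are placeholders.
# -v 5 applies 5mm variance smoothing (helpful for low-df designs);
# -n 5000 sets the number of permutations.
randomise -i all_FA.nii.gz -o grp_diff \
          -d design.mat -t design.con \
          -n 5000 -v 5
```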
 
> 2. In randomise, if I have calculated a cluster-based threshold once, 
> should I redo the analysis if I change the c_thresh?

Yes.
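The cluster-forming threshold enters the permutation procedure itself (the null distribution of maximal cluster size is built for that specific threshold), so the whole run must be repeated. For example (placeholder file names, as above):

```shell
# Re-run the full permutation analysis with the new cluster-forming
# threshold; -c 2.3 and -c 3.1 yield different null distributions of
# maximal cluster size, so results from one run cannot be reused.
randomise -i all_FA.nii.gz -o grp_diff_c31 \
          -d design.mat -t design.con \
          -n 5000 -c 3.1
```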

Good luck,
Jesper