Dear Anna-Maria,

Thanks, Tom, for the help. You're right: now I get the FEAT-esque results I had been missing for my individual group analyses, to a degree at least. For controls (N=22), I get exactly what I'd expect. For patients, though, I have a smaller N (=17) and much less robust activation, even compared to my FEAT analysis. I'm wondering whether this is really down to the small N, and I have been experimenting with smoothing as noted in the randomise tutorial page. Is there any way, other than trial and error, of identifying an appropriate smoothing factor? Or are there other things I need to take into account to maximize signal in an N<20 sample?

Are you referring to variance smoothing, set with the -v option? My experience is that setting this equal to the smoothing applied in the analysis (e.g. 5mm FWHM) is all you need to try. (Note that randomise expects this in units of sigma (standard deviation), so you have to divide the FWHM by 2.35 to get sigma; e.g. 5mm FWHM = 2.1mm sigma.)
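
As a rough sketch of that conversion, and of how the converted value would plug into a randomise call (the input, mask, and design file names, the output basename, and the 5000 permutations below are just placeholders, not from your analysis), something like this in Python:

    # Sketch: convert an applied smoothing FWHM (mm) to the sigma (mm) that
    # randomise's -v variance-smoothing option expects, then build an
    # illustrative randomise command. File names and -n 5000 are placeholders.
    import math

    def fwhm_to_sigma(fwhm_mm: float) -> float:
        """sigma = FWHM / (2 * sqrt(2 * ln 2)) ~= FWHM / 2.35"""
        return fwhm_mm / (2.0 * math.sqrt(2.0 * math.log(2.0)))

    applied_fwhm = 5.0                               # smoothing applied in the analysis (mm FWHM)
    variance_sigma = fwhm_to_sigma(applied_fwhm)
    print(f"{applied_fwhm:g} mm FWHM -> {variance_sigma:.1f} mm sigma")  # ~2.1 mm

    # Illustrative call with variance smoothing set to match the applied smoothing:
    cmd = ("randomise -i patients_copes_4D -o patients_vsmooth "
           "-d design.mat -t design.con -m mask "
           f"-n 5000 -v {variance_sigma:.1f}")
    print(cmd)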

Smoothing much less will give you results similar to no smoothing. Smoothing much more doesn't seem to help, and eventually you're making too-strong assumptions about the variance (i.e. assuming that it is very, very smooth).

So... no fishing needed.  Hope this helps!

-Tom
 

thanks,

a-m



--
__________________________________________________________
Thomas Nichols, PhD
Principal Research Fellow, Head of Neuroimaging Statistics
Department of Statistics & Warwick Manufacturing Group
University of Warwick, Coventry  CV4 7AL, United Kingdom

Web: http://go.warwick.ac.uk/tenichols
Email: [log in to unmask]
Phone, Stats: +44 24761 51086, WMG: +44 24761 50752
Fax:  +44 24 7652 4532