Hi Betty Ann,
Matthew already answered (1) in the other email. Regarding (2), how
many subjects do you have? Unless you have a tiny sample, say fewer
than about 18-20, you may not need variance smoothing at all. Regarding
the "best" kernel, I'm not sure whether this has been checked, but I
suspect the "best" choice depends on the number of subjects and possibly
on the imaging modality. I'd be parsimonious and use not much more than
the resolution after the smoothing you already applied, i.e., 6 mm.
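If it helps, here is a rough sketch of the corresponding commands (the
filenames are placeholders, not your actual files): merge the per-subject
3D images into a 4D file with fslmerge, then pass the variance smoothing
kernel to randomise via -v:

```shell
# Placeholder filenames; substitute your own per-subject stats images.
fslmerge -t all_subjects subj01.nii.gz subj02.nii.gz subj03.nii.gz
# One-sample t-test (-1) with 6 mm variance smoothing (-v 6), TFCE (-T):
randomise -i all_subjects -o onesamp -1 -v 6 -T
```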
Also, note that the <std> (sigma) isn't the same as half of the FWHM.
The relationship between them is FWHM = sigma*sqrt(8*ln(2)) ≈ 2.35*sigma.
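To make the conversion concrete, a small Python check (the function
names are just for illustration):

```python
import math

def fwhm_to_sigma(fwhm):
    """Convert a Gaussian kernel's FWHM to its standard deviation (sigma)."""
    return fwhm / math.sqrt(8 * math.log(2))

def sigma_to_fwhm(sigma):
    """Convert sigma to FWHM: FWHM = sigma * sqrt(8*ln(2)) ≈ 2.35 * sigma."""
    return sigma * math.sqrt(8 * math.log(2))

# A 6 mm FWHM kernel corresponds to sigma ≈ 2.548 mm, not 3 mm
# (i.e., not simply half of the FWHM).
print(round(fwhm_to_sigma(6.0), 3))
```

So if you want 6 mm FWHM of variance smoothing, the sigma would be
about 2.55 mm, not 3 mm.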
All the best,
Anderson
On 10/01/2014 04:15, bettyann wrote:
> Dear all,
>
> (1) The input to randomise is a merged 4D volume. But what are the 3D volumes that go into that 4D volume? cope? pe? tstat? zstat? The inputs to randomise will be the results from my GLM fixed effects analysis per subject. But which file from the 'stats' directory do I use?
>
> (2) Also, how best to determine the variance smoothing kernel size? I have 16 subjects in my group. Just to be sure, the -v <value> is the *HALF* width at half max? I have already applied a spatial smooth of 6 mm FWHM in the preprocessing step. The input data to randomise have a voxel size of 2x2x2 mm3 (MNI space).
>
> Thanks,
> - BettyAnn