> I used New Segment (1 channel, on T1 IR images), then 'create
> templates', then normalise to MNI. I'm doing this for functional MRI
> transforms to MNI.
>
> I was surprised that my rc1 images did not have the same voxel and
> matrix dimensions as the TPM templates (2x2x2; 91x109x91,
> respectively). So I thought that they must have retained the original
> dimensions of the raw T1 image (1x1x1; 256x256x162). Apparently not
> though - my rc1 images have dimensions 1.5x1.5x1.5; 121x145x121.
>
> Subsequently my DARTEL template has these dimensions. Where do they
> come from?
The dimensions are usually the same as the TPM.nii files in the
spm8/toolbox/Seg directory, which are 121x145x121. These are at 1.5mm
resolution, which seems to be a reasonable resolution for DARTEL to work
with. The older tissue probability maps in SPM are only at 2mm
resolution.
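As an aside, the bounding box implied by any image's dimensions can be checked by mapping its corner voxels through the voxel-to-world matrix in its header (V.mat, as returned by spm_vol). Here is a rough sketch in Python; the affine used below is made up for illustration and is NOT the actual TPM.nii header, so read the real values from the file yourself:

```python
import numpy as np

def bounding_box(affine, dims):
    """Return a (2, 3) array of min/max mm coordinates of the image corners.
    affine: 4x4 voxel-to-world matrix (1-based voxel indices, SPM style)."""
    i, j, k = dims
    # All eight corner voxels, in homogeneous coordinates (SPM counts from 1)
    corners = np.array([[x, y, z, 1]
                        for x in (1, i) for y in (1, j) for z in (1, k)]).T
    mm = affine @ corners  # map voxel corners into world (mm) space
    return np.vstack([mm[:3].min(axis=1), mm[:3].max(axis=1)])

# Hypothetical affine for a 1.5 mm, 121x145x121 volume centred near the
# MNI origin (illustrative values only -- not the TPM.nii header):
aff = np.array([[-1.5, 0.0, 0.0,   91.5],
                [ 0.0, 1.5, 0.0, -127.5],
                [ 0.0, 0.0, 1.5,  -73.5],
                [ 0.0, 0.0, 0.0,    1.0]])
print(bounding_box(aff, (121, 145, 121)))
```

The same two-row min/max form is what the "bounding box" field in the normalise options expects.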
>
> So, I used the normalise-to-MNI function expecting that my functional
> images (in this case I normalised my Con_* images) would be
> resampled with the same dimensions as the TPM template when leaving
> the default NaNs in voxel dimensions and bounding box. Instead they
> were the same as the DARTEL template (1.5x1.5x1.5; 121x145x121). I
> re-read the manual and it says this would happen.
>
> So... what I also wish to know is: a) why doesn't it take the
> dimensions of the TPM template as it would in other normalise
> algorithms? I thought this 2x2x2 was some kind of standard space for
> MNI images but it may be arbitrary; b) given my functional data was at
> 2.5x2.5x3, will resampling at 1.5x1.5x1.5 affect my statistics in my
> random effects model? Or will smoothing wash this out? c) If it could
> affect my stats, can I simply change the voxel sizes and the bounding
> box? (I don't quite get bounding boxes or how to find out what the
> bounding box of the TPM templates is.) And d) as I asked before, where
> did these dimensions come from during the process of segmentation?
> (Seg8 doesn't give options for this.)
The corrections for multiple comparisons, using random field theory,
assume that the data are a good lattice approximation of a smooth
continuous function. Therefore, if your spatially normalised data are
at a higher resolution than the original data, then it should make the
corrections more accurate (at the expense of disk space and computing
time).
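On whether smoothing washes out the effect of the resampling: if the point spread is roughly Gaussian, the variances of successive blurs add, so FWHMs combine in quadrature, and a typical applied kernel dominates the intrinsic resolution of the data. A back-of-envelope check (the numbers below are illustrative, not a recommendation):

```python
import math

def combined_fwhm(fwhm_intrinsic, fwhm_applied):
    """Effective smoothness after smoothing data that already has some
    intrinsic smoothness: Gaussian variances add, so FWHMs add in
    quadrature. Units are mm."""
    return math.sqrt(fwhm_intrinsic**2 + fwhm_applied**2)

# e.g. ~3 mm intrinsic resolution, 8 mm applied kernel
print(combined_fwhm(3.0, 8.0))  # approx 8.54 mm -- the kernel dominates
```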
In the DARTEL toolbox, I've chosen to combine spatial normalisation and
smoothing (so that the smoothing is weighted according to how many
original voxels contributed to each voxel in the normalised data). If
no smoothing is used, then gaps can appear in the spatially normalised
images if their resolution is higher than that of the original images.
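The gap problem can be seen with a toy 1-D push-forward resampler (my own illustration, not the DARTEL code): each original voxel deposits its value into the target bin it lands in, along with a count of contributions, and bins that receive nothing are the gaps that the weighted smoothing has to fill.

```python
import numpy as np

def push_forward(values, target_coords, grid_size):
    """Toy 1-D analogue of push-forward resampling: accumulate each
    source sample into its nearest target bin, and keep a count of how
    many source voxels contributed to each bin."""
    total = np.zeros(grid_size)
    count = np.zeros(grid_size)
    idx = np.round(target_coords).astype(int)
    np.add.at(total, idx, values)  # unbuffered accumulation per bin
    np.add.at(count, idx, 1)
    return total, count

# 10 source voxels mapped onto a finer 25-bin target grid:
vals = np.ones(10)
coords = np.linspace(0, 24, 10)  # toy "deformation" to a finer grid
total, count = push_forward(vals, coords, 25)
print(count)  # some bins receive no contribution -> gaps unless smoothed
```

Dividing the smoothed totals by the smoothed counts is what makes the smoothing weighted by how many original voxels contributed to each normalised voxel.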
Best regards,
-John
--
John Ashburner <[log in to unmask]>