Dear Jason and Laura,
> We have a few questions regarding left/right image handedness and SPM2.
>
> 1) Our images are right handed; therefore, we have set
>
> defaults.analyze.flip = 0;
>
> We would like to normalize our images using the sbrain_avg152T1.img
> template created by Matthew Brett
> (http://www.mrc-cbu.cam.ac.uk/Imaging/Common/no_skull_norm.shtml)
>
> The template appears to be left-handed, so we reoriented it using a -1 X
> flip in Display. When we normalize, will SPM2 now properly interpret the
> right handedness of these images (e.g., will it take account of the .mat
> file)?
The handedness of the template will depend on whether there is a .mat file or
not. If there is a .mat file, then the handedness is read from this. If
there is no .mat file, then the defaults.analyze.flip value is used to decide
whether flipping should occur.
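As an illustration (this is a minimal numpy sketch of the idea, not SPM code): the handedness implied by a .mat file's voxel-to-world affine can be read off from the sign of the determinant of its upper-left 3x3 part, since world space is taken to be right-handed. A negative determinant means the stored voxel axes form a left-handed set.

```python
import numpy as np

def storage_handedness(vox2world):
    """Return 'right' or 'left' according to the sign of the
    determinant of the 3x3 rotation/zoom part of a voxel-to-world
    affine (world space itself is right-handed by convention)."""
    det = np.linalg.det(np.asarray(vox2world)[:3, :3])
    return 'right' if det > 0 else 'left'

# Identity affine: voxel axes align with right-handed world axes.
M_right = np.eye(4)

# The same affine with the first (x) axis negated, as a -1 x-flip
# in Display would produce: the stored ordering is now left-handed.
M_left = np.diag([-1.0, 1.0, 1.0, 1.0])

print(storage_handedness(M_right))  # right
print(storage_handedness(M_left))   # left
```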
>
> 2) Do the templates used by SPM2 Segment take account of the defaults
> handedness selection?
These templates are in MINC format, which has fewer ambiguities about right-
and left-handed orientations. The defaults.analyze.flip value only applies to
Analyze images, because there is ambiguity about their handedness.
>
> 3) .mnc files appear to display images with the same handedness,
> regardless of the defaults.analyze.flip setting; we understand from
> previous discussion on this list that this orientation is right handed.
Correct.
> If this is true, does SPM2 flip these images when performing
> normalization on left-handed images (defaults.analyze.flip setting=1),
> but retain their right-handed orientation when performing normalization
> on right-handed images (defaults.analyze.flip setting=0)? If SPM2 does
> normalize in this way, wouldn't it be problematic that one cannot Check
> Reg the .mnc image and the normalized analyze image with both images
> displaying the proper handedness, if one is using left-handed
> orientation?
Data stored as right- or left-handed are both interpreted in a right-handed
coordinate system through a "voxel-to-world transformation". This is simply
a 4x4 affine transformation that encodes orientations, origins, voxel sizes,
etc. All the computations are performed on co-ordinates within the
right-handed system. If you want to read more in order to understand this,
then take a look at: http://www.fil.ion.ucl.ac.uk/~john/thesis/chapter2.pdf
When you display an image, the voxel-to-world transform is taken into account,
such that the handedness is automatically taken care of. This means that
Analyze images stored as left handed can be displayed with the same
handedness as the MINC images used as templates.
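To make this concrete, here is a small numpy sketch (hypothetical dimensions, not SPM code) showing why left- and right-stored copies of the same data line up: an x-flipped copy gets a voxel-to-world affine that negates the x axis and shifts the origin, so corresponding voxels in the two copies map to the same world coordinate.

```python
import numpy as np

n = 5          # voxels along x in this toy image (hypothetical)
A = np.eye(4)  # voxel-to-world affine of the right-handed copy, 1 mm voxels

# Affine of the x-flipped copy: voxel index i along x maps to the world
# position that voxel (n-1-i) had in the original.
F = np.array([[-1, 0, 0, n - 1],
              [ 0, 1, 0, 0],
              [ 0, 0, 1, 0],
              [ 0, 0, 0, 1.0]])
B = A @ F

# Voxel (4,2,1) in the original and voxel (0,2,1) in the flipped copy
# land on the same world coordinate, so displays such as Check Reg
# can show both with the same handedness.
v_orig = np.array([4, 2, 1, 1.0])
v_flip = np.array([0, 2, 1, 1.0])
print(A @ v_orig)  # [4. 2. 1. 1.]
print(B @ v_flip)  # [4. 2. 1. 1.]
```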
I hope this helps.
Best regards,
-John