Hi all,
I am transforming a diffusion-space image (an FA image) into standard (MNI) space with the following steps:
1. Create an affine transformation matrix with FLIRT that maps diffusion space to T1/MPRAGE space, using 6 degrees of freedom and the mutual information cost function.
2. Create a nonlinear warpfield with FNIRT that maps T1 space to standard MNI space.
3. Use applywarp to apply the warpfield to the diffusion image and project it into standard MNI space.
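Concretely, the commands I run look roughly like this (filenames are simplified placeholders; FNIRT is seeded with an initial 12-dof T1-to-MNI affine from a separate FLIRT run):

```shell
MNI=${FSLDIR}/data/standard/MNI152_T1_2mm.nii.gz

# Step 1: diffusion -> T1 affine (6 dof, mutual information)
flirt -in fa.nii.gz -ref T1.nii.gz -omat diff2T1.mat -dof 6 -cost mutualinfo

# FNIRT needs an initial affine T1 -> MNI registration (12 dof)
flirt -in T1.nii.gz -ref $MNI -omat T12MNI.mat -dof 12

# Step 2: T1 -> MNI nonlinear warpfield
fnirt --in=T1.nii.gz --ref=$MNI --aff=T12MNI.mat \
      --cout=T12MNI_warp --config=T1_2_MNI152_2mm

# Combine the affine and the warp into a single
# diffusion -> standard warpfield
convertwarp --ref=$MNI --premat=diff2T1.mat --warp1=T12MNI_warp \
            --out=diff2MNI_warp

# Step 3: resample the FA image into MNI space in one shot,
# so the data are interpolated only once
applywarp --in=fa.nii.gz --ref=$MNI --warp=diff2MNI_warp --out=fa_in_MNI
```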
To sanity-check my FLIRT/FNIRT transformations, I then ran invwarp on my diffusion2standard warpfield to create a standard2diffusion warpfield, and applied that to the warped image (now in standard space) to project it back into diffusion space.
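The round-trip check itself looks roughly like this (simplified filenames again):

```shell
# Invert the combined diffusion -> standard warpfield
invwarp --warp=diff2MNI_warp --ref=fa.nii.gz --out=MNI2diff_warp

# Project the standard-space image back into diffusion space
applywarp --in=fa_in_MNI --ref=fa.nii.gz --warp=MNI2diff_warp \
          --out=fa_roundtrip
```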
I then compared the original diffusion image with the image that had been transformed to standard space and back into diffusion space. Viewed in FSLeyes, the two appear perfectly aligned, but their intensity values differ: the original image has an intensity range of 0 to 1.224745, while the round-tripped image has a range of 0 to 1.198769.
I am wondering whether this can be considered an acceptable margin of error (a ~2% change in peak intensity). Is there an explanation for the difference in intensity values? In theory, shouldn't they be identical?
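To illustrate what I suspect is happening (a hypothetical toy example, not my actual data): trilinear interpolation is a weighted average of neighbouring voxels, so every resampling step smooths the image slightly, and a warp followed by its inverse interpolates the data twice. A sharp peak therefore comes back lower than it started:

```python
import numpy as np

# Toy 1-D "image" with a single sharp peak (hypothetical values)
x = np.arange(10, dtype=float)
signal = np.zeros(10)
signal[5] = 1.224745

def shift_linear(s, d):
    """Resample s at positions x + d using linear interpolation."""
    return np.interp(x + d, x, s)

# Shift half a voxel off-grid and back again: two interpolations,
# analogous to warping to standard space and inverse-warping back.
roundtrip = shift_linear(shift_linear(signal, 0.5), -0.5)
print(signal.max(), roundtrip.max())  # the peak comes back lower
```

The effect is extreme here because the peak is a single voxel wide; on a smooth FA map the loss is much smaller, which would be consistent with a change on the order of the ~2% I am seeing.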
For further context: I am trying to run probtrackx2, and I need to transform standard-space ROI masks into diffusion space to use as seed, waypoint, and termination masks for probabilistic tractography.
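For the masks themselves, I believe the interpolation issue above can be sidestepped by using nearest-neighbour interpolation, which keeps binary masks binary instead of producing fractional values at the edges (simplified filenames again):

```shell
applywarp --in=roi_MNI.nii.gz --ref=fa.nii.gz \
          --warp=MNI2diff_warp --interp=nn --out=roi_diff.nii.gz
```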
Thank you! Any help is very much appreciated.