> Would you please give me a pdf or link, so that I can learn the principles?
It depends on how deep you want to go. I see you are based in a
computer science department, so I'll elaborate a bit more than usual.
The SPM manual chapter on the Deformations Utility may be of some help,
although the recursive nature of the procedure (and the fact that many
pages of the manual are automatically generated) means the sections
and subsections can become a bit repetitive.
There's a whole load of stuff on Wikipedia about differential
geometry, but I have to admit that I don't understand all that much of
it. Here's a starting point though
http://en.wikipedia.org/wiki/Invertible_function
For this particular case, you probably want to understand how the
mapping in the sn3d.mat file is parameterised, which is probably
easiest to figure out from the MATLAB code in spm_defs.m. From around
line 76, the function get_sn2def takes the contents of a sn3d.mat file
and converts it into a deformation field. It begins with a load of
code for figuring out bounding boxes and stuff, which you can probably
skip over rapidly. The main part builds up a displacement field from
a linear combination of cosine transform basis functions, and then
composes this with an affine transform.
Basis functions are generated by:
basX = spm_dctmtx(sn.VG(1).dim(1),st(1),x-1);
basY = spm_dctmtx(sn.VG(1).dim(2),st(2),y-1);
basZ = spm_dctmtx(sn.VG(1).dim(3),st(3),z-1);
Displacement fields are computed from these using the Tr field in the sn3d.mat:
tx = reshape( reshape(sn.Tr(:,:,:,1),st(1)*st(2),st(3)) ...
              *basZ(j,:)', st(1), st(2) );
ty = reshape( reshape(sn.Tr(:,:,:,2),st(1)*st(2),st(3)) ...
              *basZ(j,:)', st(1), st(2) );
tz = reshape( reshape(sn.Tr(:,:,:,3),st(1)*st(2),st(3)) ...
              *basZ(j,:)', st(1), st(2) );
X1 = X + basX*tx*basY';
Y1 = Y + basX*ty*basY';
Z1 = z(j) + basX*tz*basY';
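For readers more comfortable outside MATLAB, here is a rough NumPy translation of the idea: a small orthonormal DCT-II basis is built per axis (mimicking what spm_dctmtx produces), the Tr coefficients are contracted with the z-basis for one plane, and the resulting low-frequency displacement is added to the identity grid. The dimensions, the random stand-in for sn.Tr, and the helper name dctmtx are all illustrative assumptions, not real SPM values.

```python
import numpy as np

def dctmtx(N, K):
    # First K columns of an orthonormal N-point DCT-II basis
    # (column 0 is the constant 1/sqrt(N); the rest are cosines),
    # mimicking what spm_dctmtx returns.
    n = np.arange(N)
    C = np.zeros((N, K))
    C[:, 0] = 1.0 / np.sqrt(N)
    for k in range(1, K):
        C[:, k] = np.sqrt(2.0 / N) * np.cos(np.pi * (2*n + 1) * k / (2.0 * N))
    return C

# Toy grid size and number of DCT coefficients per axis
# (illustrative values only, not from any real sn.mat).
nx, ny, nz = 8, 7, 6
st = (4, 3, 3)
rng = np.random.default_rng(0)
Tr = rng.standard_normal((st[0], st[1], st[2], 3))  # stands in for sn.Tr

basX = dctmtx(nx, st[0])
basY = dctmtx(ny, st[1])
basZ = dctmtx(nz, st[2])

j = 2  # one z-plane, as in the MATLAB loop over slices
X, Y = np.meshgrid(np.arange(1, nx+1), np.arange(1, ny+1), indexing='ij')

# Contract the coefficient array with the z-basis for plane j,
# then expand in x and y: the analogue of basX*tx*basY' above.
tx = (Tr[:, :, :, 0].reshape(st[0]*st[1], st[2]) @ basZ[j, :]).reshape(st[0], st[1])
X1 = X + basX @ tx @ basY.T   # displaced x-coordinates for plane j
```

The key point the sketch shows is the separability: a full 3D displacement field is represented by a tiny st(1) x st(2) x st(3) block of coefficients, expanded through three small 1D basis matrices.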
Then an affine transform is computed using the Affine matrix you saw
previously, and the orientation information from the image that was
warped into alignment with the template:
Mult = sn.VF.mat*sn.Affine;
This is then used to affine transform the deformations (note that the
affine transforms are applied by A*[x y z 1]', rather than [x y z
1]*A):
X2= Mult(1,1)*X1 + Mult(1,2)*Y1 + Mult(1,3)*Z1 + Mult(1,4);
Y2= Mult(2,1)*X1 + Mult(2,2)*Y1 + Mult(2,3)*Z1 + Mult(2,4);
Z2= Mult(3,1)*X1 + Mult(3,2)*Y1 + Mult(3,3)*Z1 + Mult(3,4);
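As a quick illustration of the column-vector convention, here is a minimal NumPy sketch: the element-wise expansion above is exactly a matrix product M @ [x y z 1]'. The matrix values are made up for the example (a 2 mm isotropic grid with a translation), not taken from any real sn.mat.

```python
import numpy as np

# Illustrative 4x4 affine (made-up values): 2 mm isotropic
# voxels plus a translation, in homogeneous coordinates.
Mult = np.array([[2., 0., 0.,  -90.],
                 [0., 2., 0., -126.],
                 [0., 0., 2.,  -72.],
                 [0., 0., 0.,    1.]])

def apply_affine(M, x, y, z):
    # Column-vector convention: M @ [x y z 1]', i.e.
    # X2 = M[0,0]*x + M[0,1]*y + M[0,2]*z + M[0,3], and so on.
    v = M @ np.array([x, y, z, 1.0])
    return v[:3]

print(apply_affine(Mult, 10, 20, 30))  # -> [-70. -86. -12.]
```

Using the row-vector form [x y z 1] @ M with such a matrix would silently give wrong answers, which is why the convention is worth spelling out.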
Inverting the deformations is done as described in the appendix of this paper:
Ashburner J, Andersson JLR & Friston KJ (2000). Image registration
using a symmetric prior - in three dimensions. Human Brain Mapping
9(4):212-225.
Note that the procedure is not so easy to follow - particularly the
bit about how to find all the voxels that fall inside an arbitrary
tetrahedron.
Best regards,
-John
On 25 April 2012 15:25, Zhijiang Wang <[log in to unmask]> wrote:
> Dear John,
>
> Great!
> It works well!
> Great thanks!
>
> Would you please give me a pdf or link, so that I can learn the principles?
>
>
>
> Best wishes,
>
> Ross Wang
>
> --------------------------------------------
>
> PhD Candidate
>
> Room 407, East Segment, Material Science Building,
>
> The International WIC Institute,
>
> Brain Informatics,
>
> College of Computer Science and Technology,
>
> Beijing University of Technology,
>
> Beijing, China.
>
>
> On 2012-4-25 21:43, John Ashburner wrote:
>
> My first answer was a bit quick as I had to go give a talk, and I had
> a feeling there'd be some follow up. Anyway, you can use the
> Deformations utility (via the Batch system, under the SPM
> Utilities options) to set up the following job to generate an inverse
> deformation field.
>
>
> Composition
> . Inverse
> . . Composition
> . . . Imported _sn.mat
> . . . . Parameter File [Your sn.mat file]
> . . . . Voxel sizes [NaN NaN NaN]
> . . . . Bounding box [NaN NaN NaN; NaN NaN NaN]
> . . Image to base inverse on [Your subject's image]
> Save as [some file name]
> Apply to [nothing needed]
> Output destination [up to you]
> Interpolation [not needed]
>
> This will generate a y_*.nii image file that has the same first three
> dimensions as your subject's image. You can then read off where voxel
> i,j,k is mapped to in MNI space by:
>
> Nii = nifti('y_blah.nii');
> Y = Nii.dat;
> [Y(i,j,k,1,1), Y(i,j,k,1,2), Y(i,j,k,1,3)]
>
>
> Best regards,
> -John
>
> On 25 April 2012 11:54, Zhijiang Wang <[log in to unmask]> wrote:
>
> Dear John,
>
> Great thanks for your valuable information.
>
> But would you please explain it in more detail?
>
> Thanks millions!
>
>
> Best wishes,
>
> Ross Wang
>
>
>
> On 2012-4-25 18:25, John Ashburner wrote:
>
> If the transform is nonlinear, then the Affine part will not give you
> the exact location. To map a point in native space to some location
> in MNI space, you'll need the inverse of the mapping from MNI space to
> native space (which is the one used for generating the normalised
> images). You can write out the inverse of the deformation via the
> Deformations Utility, and simply read off the appropriate values from
> the Nx x Ny x Nz x 1 x 3 volume.
>
> Best regards,
> -John
>
> 2012/4/25 Zhijiang Wang <[log in to unmask]>:
>
> Dear all SPMers,
>
> We all know individual images can be normalized to MNI template.
>
> But, there is a question.
> I have a physical coordinate (x1,y1,z1) mm for a voxel; what is its MNI
> coordinate when the individual image it belongs to is normalized to MNI space?
>
> I found there is a variable "Affine" in "*seg_sn.mat",
> is it right to do (x1, y1, z1, 1) * Affine ?
>
> Thanks!
> --
>
> Best wishes,
>
> Ross Wang
>