Dear Baidya,
Let's start with some definitions. There are several coordinate systems
you can work in. The results will be correct as long as vol, sens and
the grid are all in the same coordinate system.
1) Native space is the space defined by the structural image you use.
2) MNI space is the native space of the MNI template image.
3) MNI-aligned space is the native space of an individual image
coregistered to the MNI template.
4) Coregistration space is the coordinate system in which SPM works
after coregistration. For MEG these are head coordinates which can be
defined differently for different MEG systems. See also
http://fieldtrip.fcdonders.nl/faq/how_are_the_different_head_and_mri_coordinate_systems_defined
Coordinates can be transformed between those systems using an affine
transformation matrix. You can combine affine transforms by
multiplying the corresponding matrices and get inverse transforms by
inverting them. Now there are several such matrices that you can find
in different places in the code. Here is what they do in terms of the
above definitions.
fromMNI 2->4
toMNI = inv(fromMNI) 4->2
inv{1}.mesh.Affine 1->2
M1 (in the beamforming code) 4->3
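To make the composition rule concrete, here is an untested sketch
(assuming the usual SPM8 layout, i.e. D.inv{1}.mesh.Affine and
D.inv{1}.datareg(1).fromMNI/toMNI, and some points pos_native given in
native space):

A       = D.inv{1}.mesh.Affine;          % 1 -> 2 (native to MNI)
fromMNI = D.inv{1}.datareg(1).fromMNI;   % 2 -> 4 (MNI to head)
toMNI   = D.inv{1}.datareg(1).toMNI;     % 4 -> 2, equal to inv(fromMNI)

native2head = fromMNI * A;               % compose: 1 -> 2 -> 4
pos_head    = spm_eeg_inv_transform_points(native2head, pos_native);

% The inverse mapping is obtained by inverting the combined matrix:
pos_native2 = spm_eeg_inv_transform_points(inv(native2head), pos_head);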
In my beamforming code it was convenient to work in coordinate system
3 because one needs to preserve the head size, and in some cases it
is convenient to have symmetry around zero on the X axis (for instance
to scan with a symmetric dipole pair).
Now to your questions.
On Fri, Jun 22, 2012 at 8:07 PM, Baidya Nath Saha <[log in to unmask]> wrote:
> Vladimir,
> Some of the beamformer coordinate transformations between MNI and source
> space are confusing:
> In spm_eeg_ft_beamformer_source.m Two different transforms are used:
> pos = spm_eeg_inv_transform_points(M1*datareg.fromMNI, S.sources.pos);
> where S.sources.pos started in MNI coordinates, and M1 is the svd of
> datareg.toMNI with scaling removed
This is transform 2->3: the points are defined in MNI template space
and transformed to MNI-aligned space by combining two known transforms.
> and vol and sens have both had the M1 transform applied:
> M1 = datareg.toMNI;
> [U, L, V] = svd(M1(1:3, 1:3));
> M1(1:3,1:3) =U*V';
> vol = ft_transform_vol(M1, vol);
This is transform 4->3. So now the points, vol and sens are all in
system 3, so everything is OK.
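The reason for the SVD step above is that toMNI generally includes
scaling; replacing its 3x3 part with U*V' keeps only the rotation (the
translation is unchanged), so the head retains its real size. A quick
check, for illustration only:

R = M1(1:3, 1:3);      % the rotation part after the U*V' replacement
norm(R'*R - eye(3))    % ~0: R is orthogonal, so distances are preserved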
> Later in the same code (around line 247), correlation maps seem to be
> generated in MNI space (at least the results are overlaid on an MNI brain),
> but the input grid coordinates are not subjected to any transformations:
> cfg.grid.xgrid = -90:10:90;
> cfg.grid.ygrid = -120:10:100;
> cfg.grid.zgrid = -70:10:110;
Yes, this is a little sloppy since this code is hidden and was not
intended for most users. So the output is aligned to MNI space but
corresponds to the individual head size.
> In the spm_eeg_ft_beamformer_cva.m program (line 403), an mnigrid is created
> directly and transformed using the datareg.fromMNI transform. There is no
> transformation of the vol or sens performed, or M1 transform used:
> mnigrid.xgrid = -100:S.gridstep:100;
> mnigrid.ygrid = -120:S.gridstep:100;
> mnigrid.zgrid = -50:S.gridstep:110;
> mnigrid.dim = [length(mnigrid.xgrid) length(mnigrid.ygrid)
> length(mnigrid.zgrid)];
> [X, Y, Z] = ndgrid(mnigrid.xgrid, mnigrid.ygrid, mnigrid.zgrid);
> mnigrid.pos = [X(:) Y(:) Z(:)];
> cfg.grid.dim = mnigrid.dim;
> cfg.grid.pos = spm_eeg_inv_transform_points(datareg.fromMNI, mnigrid.pos);
This is OK; it's just that this function, written by Gareth Barnes,
works in coordinate system 4, to which the grid created in MNI space
is transformed.
> In the spm_eeg_ft_beamformer_freq.m program (line 403), an mnigrid is also
> created, but this one is transformed using the M1 and datareg.fromMNI
> transforms, with an M1 transform of vol and sens:
> mnigrid.xgrid = -90:S.gridres:90;
> mnigrid.ygrid = -120:S.gridres:100;
> mnigrid.zgrid = -50:S.gridres:110;
> mnigrid.dim = [length(mnigrid.xgrid) length(mnigrid.ygrid)
> length(mnigrid.zgrid)];
> [X, Y, Z] = ndgrid(mnigrid.xgrid, mnigrid.ygrid, mnigrid.zgrid);
> mnigrid.pos = [X(:) Y(:) Z(:)];
> cfg.grid.dim = mnigrid.dim;
> cfg.grid.pos = spm_eeg_inv_transform_points(M1*datareg.fromMNI,
> mnigrid.pos);
> cfg.inwardshift = -10;
Yes, this code works in coordinate system 3, but the grid is created
in system 2 so that when the output is written out in system 2 the
grids correspond exactly between subjects.
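In other words, the source estimates live on grid points originally
defined in MNI space, so they can be written out on that grid directly
and the voxels line up across subjects. Equivalently (illustration
only) you can map the beamformer positions back:

% cfg.grid.pos was obtained from mnigrid.pos with M1*datareg.fromMNI,
% so inverting that matrix recovers the MNI positions:
pos_mni = spm_eeg_inv_transform_points(inv(M1*datareg.fromMNI), cfg.grid.pos);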
> So a couple questions:
> 1) In the spm_eeg_ft_beamformer_source.m routine, why can’t the
> S.sources.pos (in MNI coordinates) be used this way:
> pos = spm_eeg_inv_transform_points(datareg.fromMNI, S.sources.pos);
> without any need for the M1 transform applied to vol and sens?
They can; it was just a matter of convenience when that code was
written to work in the MNI-aligned system.
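So if you prefer to work in head coordinates (system 4) throughout,
the variant you suggest is fine, something like this (untested sketch):

% Map the MNI positions straight to head coordinates; vol and sens are
% already in head coordinates, so no M1 step is needed in this variant.
pos = spm_eeg_inv_transform_points(datareg.fromMNI, S.sources.pos);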
> 2) In the same routine, can vol and sens just be transformed using:
> vol = ft_transform_vol(datareg.toMNI,vol)
> and then use the S.sources.pos coordinates in MNI space directly?
No, because then you would also need to transform the MEG sensor array
and distort its shape, which doesn't make sense since the sensor array
is rigid in the real world.
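You can see the problem by looking at the linear part of toMNI
(illustration only); its singular values are generally not 1, i.e. it
rescales distances, which is appropriate for the head but not for a
rigid sensor array:

S3 = datareg.toMNI(1:3, 1:3);
svd(S3)   % singular values different from 1 mean distances are rescaled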
> 3) What is the correct form of the transform to use with specific MNI
> coordinates or a grid defined in MNI space? It seems all of these different
> forms can’t be correct (or at least the M1 transform seems to add an
> unnecessary layer).
You can work in any coordinate system as long as all the objects are
in the same system and you keep the head and the sensor array the
right size and shape. All of the above-mentioned systems except MNI
template space could fulfill these requirements.
>I suppose the one that seems inconsistent is the
> correlation map in spm_eeg_ft_beamformer_source.m where an mnigrid is
> created (line 247) but not subject to any transformations.
This is sloppy, as I said, but could easily be fixed if necessary.
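If you want to fix it yourself, something along the lines of what
spm_eeg_ft_beamformer_freq.m does should work (untested sketch): build
the grid in MNI space and map it into system 3 before beamforming.

[X, Y, Z]    = ndgrid(-90:10:90, -120:10:100, -70:10:110);
mnigrid.pos  = [X(:) Y(:) Z(:)];
cfg.grid.pos = spm_eeg_inv_transform_points(M1*datareg.fromMNI, mnigrid.pos);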
Best,
Vladimir
> Thanks,
> Baidya