> I have a question about how spatial locations are treated in SPM. The
> context is a subject.img file corresponding to the structural image of a
> single subject, from which I extract information via the commands:
>
>>> volume1=spm_vol('subject1mm.img');
>>> array1=spm_read_vols(volume1);
>
> My understanding is that the 4x4 matrix, call it T1, contained in
> volume1.mat is the transformation that takes array indices as input and
> returns the MNI coordinates of the corresponding entry.
For spatially normalised images, the matrix will map from voxel
indices to MNI space. If the image is not a spatially normalised one,
then the voxels could map to a variety of spaces. For example, if the
image has just been converted from DICOM (via SPM's DICOM import),
then the mapping is to scanner coordinates.
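As a sketch of what this voxel-to-world mapping does (plain NumPy rather than SPM code, using the example matrix values from the message):

```python
import numpy as np

# Voxel-to-world affine, using the example values from the message
# (volume1.mat in SPM terms)
T1 = np.array([[-1, 0, 0,   80],
               [ 0, 1, 0, -100],
               [ 0, 0, 1,  -72],
               [ 0, 0, 0,    1]], dtype=float)

def voxel_to_world(T, ijk):
    """Map voxel indices to world (MNI or scanner) coordinates in mm."""
    ijk1 = np.append(np.asarray(ijk, dtype=float), 1.0)  # homogeneous form
    return T @ ijk1

print(voxel_to_world(T1, [80, 100, 72])[:3].tolist())  # -> [0.0, 0.0, 0.0]
```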
> In particular, if
> T1 is given by, say
>
>>> T1=volume1.mat
>
> T1 =
>
> -1 0 0 80
> 0 1 0 -100
> 0 0 1 -72
> 0 0 0 1
>
> then the array coordinates of the MNI origin is obtained by
>
>>> o_vector = inv(T1)*([0 0 0 1]');
>>> o_coords=o_vector(1:3)'
> o_coords =
> 80 100 72
That all looks correct, except that the voxel may not correspond to
the AC (anterior commissure) if the image is not spatially normalised.
> Assuming my understanding is correct so far, suppose I'm now told that this
> image was "resampled" to the file subject4mm.img, so that
>
>>> volume2=spm_vol('subject4mm.img');
>>> array2=spm_read_vols(volume2);
>
> and the new transformation, T2, is given by
>
>>> T2 = volume2.mat
>
> T2 =
>
> -4 0 0 80
> 0 4 0 -100
> 0 0 4 -72
> 0 0 0 1
>
> and the new array coordinates for the MNI origin are (20, 25, 18).
>
> My question is this: What is the relationship between the intensities
> recorded in array2 and those recorded in array1? For example, is the (20,
> 25, 18) entry of array2 the average of the intensities of the 1x1x1 cubes
> contained in the 4x4x4 cube that sits in the positive MNI orthant with a
> corner at the MNI origin?
If the data have been resampled, the resulting intensities will
usually depend on the actual method used. Ideally, when going from
high-resolution to low-resolution, a bit more care would be taken to
prevent aliasing effects. We don't actually do this by default
though.
Judging from the matrices, the mapping from voxels in subject4mm to
those in subject1mm is given by
% Voxel-to-world matrices of the two images:
T1 = [...
    -1 0 0   80
     0 1 0 -100
     0 0 1  -72
     0 0 0    1];
T2 = [...
    -4 0 0   80
     0 4 0 -100
     0 0 4  -72
     0 0 0    1];
% Voxel indices in subject4mm -> voxel indices in subject1mm:
T = T1\T2
% A few subject4mm voxel indices, including (20,25,18):
c = [1 1 1; 1 1 2; 1 1 3; 1 2 1; 1 3 1; 2 1 1; 3 1 1; 20 25 18]';
T(1:3,:)*[c; ones(1,size(c,2))]
We can see that those indices map to the following ones in subject1mm
4 4 4 4 4 8 12 80
4 4 4 8 12 4 4 100
4 8 12 4 4 4 4 72
Therefore, the values in subject4mm are obtained by reading every 4th
voxel (every 4 mm) of subject1mm, which corresponds to ignoring about
98% (63 out of every 64 voxels) of the original signal.
So the (20, 25, 18) voxel of subject4mm should have the same intensity
as the (80, 100, 72) voxel of the original image.
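The same check can be done in NumPy (a sketch using the matrices quoted above; `np.linalg.solve(T1, T2)` plays the role of MATLAB's `T1\T2`):

```python
import numpy as np

# Voxel-to-world matrices from the message (subject1mm and subject4mm)
T1 = np.array([[-1, 0, 0,   80],
               [ 0, 1, 0, -100],
               [ 0, 0, 1,  -72],
               [ 0, 0, 0,    1]], dtype=float)
T2 = np.array([[-4, 0, 0,   80],
               [ 0, 4, 0, -100],
               [ 0, 0, 4,  -72],
               [ 0, 0, 0,    1]], dtype=float)

# Voxel-to-voxel mapping from subject4mm into subject1mm
# (the NumPy equivalent of MATLAB's T1\T2)
T = np.linalg.solve(T1, T2)

# The subject4mm voxel at the world origin maps to voxel (80, 100, 72)
origin_4mm = np.array([20, 25, 18, 1.0])
print((T @ origin_4mm)[:3].tolist())  # -> [80.0, 100.0, 72.0]
```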
Before downsampling the 1mm image, it may be a good idea to spatially
smooth it first. That way, a bit more of the signal will be used, and
there should be fewer aliasing artifacts.
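To illustrate the difference (plain NumPy, not SPM's actual resampling code; the array shape is made up for the example), compare reading every 4th voxel with averaging each 4x4x4 block, which is one crude way of using all of the signal:

```python
import numpy as np

# Hypothetical 1 mm volume (in practice, the array from spm_read_vols)
vol_1mm = np.arange(160 * 200 * 144, dtype=float).reshape(160, 200, 144)

# Plain decimation, as the resampling above effectively does:
# keep every 4th voxel and discard the other 63/64 of the data.
decimated = vol_1mm[::4, ::4, ::4]

# A crude anti-aliased alternative: average each 4x4x4 block of 1 mm
# voxels, so every original voxel contributes to the 4 mm image.
block_mean = vol_1mm.reshape(40, 4, 50, 4, 36, 4).mean(axis=(1, 3, 5))

print(decimated.shape, block_mean.shape)  # (40, 50, 36) (40, 50, 36)
```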
> Or does this entry correspond instead to the
> 4x4x4 cube whose center is at the MNI origin? Or ...?
See:
http://en.wikipedia.org/wiki/Aliasing
http://en.wikipedia.org/wiki/Decimation_%28signal_processing%29
Best regards,
-John