| 1) The paper makes quite a point about having high resolution MRI scans, and
| yet the methods state that the segmentation is performed on the normalized
| images. Wouldn't it be better to segment the raw images that are in 1x1x1mm
| voxels than to use the images that have been re-sampled into 2x2x2mm voxels?
| Is there a higher resolution template image that can be used for this
| specific application?
Spatial normalisation in SPM does not need to write the data at 2mm resolution.
You can specify any resolution you like.
The advantage of segmenting spatially normalised images is that the
segmentation overlays prior probability images. These could be overlaid
onto the image by an affine transformation, but a better match can
generally be obtained by overlaying them onto spatially normalised images.
This also means that the influence of the prior probability maps on the
segmentation of the spatially normalised images is similar from image to
image.
As the templates are pretty smooth, you probably wouldn't gain that much
by having higher resolution versions.
|
| 2) The smoothing of 12mm seems somewhat high to me when, theoretically, the
| resolution of the data is 1mm FWHM in the raw form (or 2mm FWHM normalized, as
| per the paper). Is the 3x voxel size rule no longer valid for this approach?
Three times the voxel size is the minimum smoothing for the Gaussian random
field theory used by SPM to be valid. There are also other considerations.
Spatial normalisation is far from perfect, as it only accounts for global
shape differences. Spatial normalisation in SPM does not do high resolution
matching, so it is probably not reasonable to expect to see real structural
differences (as opposed to registration errors) at a very high resolution.
Straightforward VBM would not actually show anything if the registration was
perfect, as all spatially normalised images would have the same spatial
distribution of grey matter. In addition, there are other arguments
in favour of more smoothing (it makes the data more normally distributed,
allows better identification of more diffuse differences, etc.).
|
| In the end I guess I have worked out a different pre-processing method and I
| was wondering if I could get some input on its validity.
|
| 1st segment the high resolution MRI scans.
| 2nd smooth the gray matter segments with a 6mm Gaussian kernel to better
| facilitate normalization.
| 3rd normalize the raw image to the template space and bring the smoothed
| gray matter image with it.
| 4th the smoothed gray matter image needs to be smoothed once more in order to
| remove the effects of the deformation of the Gaussian field that occurred in
| the normalization step (6mm again). In the end this would leave the images
| with an 8.5mm resolution.
| continue statistical analysis as per paper.
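On the arithmetic in step 4: successive Gaussian smooths combine in
quadrature, so two 6mm kernels give an effective sqrt(6^2 + 6^2), about
8.5mm, as stated. A quick check (Python, for illustration only):

```python
import math

def combined_fwhm(*fwhms):
    # Convolving two Gaussians gives a Gaussian whose variance is the sum
    # of the individual variances, so FWHMs add in quadrature.
    return math.sqrt(sum(f ** 2 for f in fwhms))

print(combined_fwhm(6.0, 6.0))  # ~8.49mm
```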
Something that we have tried involves spatially normalising by matching
segmented grey matter to the grey matter probability maps. This means that
non-grey matter has an almost insignificant effect on the spatial
normalisation. In order to maximise sensitivity to the statistical tests,
the residual variance should be as low as possible. Spatially normalising
this way directly attempts to minimise this variance.
Smoothing before spatially normalising is conceptually different to
smoothing after spatial normalisation. This is a difficult one to
explain without resorting to pictures, but it is worth pondering on
what is actually being tested. It helps if you think about how a similar
analysis could be done with ROIs. Would you make the ROIs all the
same size for all subjects, or would you make them bigger in some
subjects than in others? Would you base the test on the proportion of
grey matter in each ROI, or the total amount of grey matter?
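To make the ROI analogy concrete, here is a hypothetical sketch (Python,
not SPM; the array names are made up) of the two candidate measures:

```python
import numpy as np

def roi_measures(gm, roi_mask, voxel_vol_mm3):
    """gm: grey-matter probability map; roi_mask: boolean ROI, same shape."""
    in_roi = gm[roi_mask]
    total_mm3 = in_roi.sum() * voxel_vol_mm3  # total amount of grey matter
    proportion = in_roi.mean()                # proportion (concentration)
    return total_mm3, proportion

# Toy example: a 2x2 slice with 2x2x2mm voxels (8 mm^3 each).
gm = np.array([[0.5, 1.0], [0.0, 0.25]])
mask = np.array([[True, True], [False, True]])
total, prop = roi_measures(gm, mask, voxel_vol_mm3=8.0)
```

Testing the total is sensitive to how large the structure is in each
subject, whereas testing the proportion is not; that is the distinction
the ROI questions above are getting at.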
Best regards,
-John