> I have done optimised VBM on AD brains and determined the volumes of
> GM/WM/CSF. The mean volumes are:
> GM = 0.701 l
> WM = 0.459 l
> CSF = 0.745 l
> Total = 1.906 l
>
> I do not trust these volumes because they are too large.
> But in the literature there are also papers that report large
> GM/WM/CSF volumes (e.g. Good et al., 2001, NeuroImage 14, 21-36)
> GM = 0.829 l
> WM = 0.454 l
> CSF = 0.397 l
> Total = 1.68 l
>
> Therefore my questions are:
> 1. any ideas about what went wrong, or what happened with these volumes?
I wouldn't trust the CSF volumes produced by SPM2. The intention was not
really to produce CSF segments, but some people wanted them, so they were
retained. You may need to do some thresholding in order to get a more
accurate guess of the volume (look at the data with an inverse grey scale),
but for many datasets, the CSF and skull intensities are very close, so
accurate segmentation is not possible.
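The thresholding suggested above can be sketched roughly as follows. This is a minimal illustration, not SPM code: the "CSF probability map" is random data standing in for a *_seg*.img volume, and the 1.5 mm voxel size is an assumption.

```python
import numpy as np

# Hypothetical stand-in for a CSF probability map loaded from *_seg*.img,
# with values in [0, 1].
rng = np.random.default_rng(0)
csf_prob = rng.random((4, 4, 4))

voxel_volume_mm3 = 1.5 * 1.5 * 1.5   # assumed 1.5 mm isotropic voxels

# Naive estimate: sum all probabilities. This also counts skull and other
# voxels whose intensity is close to that of CSF.
naive_litres = csf_prob.sum() * voxel_volume_mm3 * 1e-6

# Thresholded estimate: only count voxels classified fairly confidently as CSF.
# The threshold of 0.5 is arbitrary and should be chosen by inspecting the data.
threshold = 0.5
thresh_litres = csf_prob[csf_prob > threshold].sum() * voxel_volume_mm3 * 1e-6

print(thresh_litres <= naive_litres)  # thresholding can only reduce the estimate
```

Since the probabilities are non-negative, discarding low-probability voxels can only shrink the total, which is the point: the naive sum overstates the CSF volume.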
> 2. is it possible that the percentages of GM/WM/CSF are always related to
> the default bounding box, and therefore independent of the customised
> bounding box dimensions? (I worked with a customised bounding box because
> parts of the cerebellum and the apex were cut off with the default bounding
> box --> i.e. I only enlarged the Z-coordinates.)
If there are bits missing in the data, then they won't be included in the
calculations.
> 3. can the large volumes be explained by the larger bounding box, which
> caused additional structures such as the brain stem and cerebellum also to
> be segmented, and therefore enlarged the segment volumes?
I'm not sure what a typical brain volume should be, but I think that volumes
of 1.16 and 1.283 litres don't seem excessively large. The volume computed
by SPM usually includes cerebellum and parts of the brainstem. In the
segmented data, would you say that there are many misclassified voxels? If
you want volumes without cerebellum and brainstem, then you'll need to use a
different approach (e.g. editing out the bits of the image you don't want to
measure).
> 4. is it also possible that the volume enlargement comes from the nonlinear
> normalisation?
If the measurements were done on spatially normalised tissue maps, then these
would need to be modulated in order to get more accurate volume measurements.
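The effect of modulation can be seen in a toy 1-D illustration (not SPM code; the factor-of-2 stretch is an invented example). Normalisation that expands a brain to match the template inflates the summed tissue map; multiplying by the Jacobian determinant of the template-to-native mapping (here 0.5 everywhere) restores the native total.

```python
import numpy as np

# Native-space tissue map (toy 1-D example).
native = np.array([0.0, 1.0, 1.0, 0.0])

# "Spatial normalisation" that stretches the map by a factor of 2,
# modelled here as nearest-neighbour duplication of each voxel.
warped = np.repeat(native, 2)

# Jacobian determinant of the template-to-native mapping: each template
# voxel covers half a native voxel.
jacobian = 0.5

unmodulated_volume = warped.sum()              # inflated by the stretch
modulated_volume = (warped * jacobian).sum()   # modulation restores the total

print(native.sum(), unmodulated_volume, modulated_volume)  # 2.0 4.0 2.0
```

The unmodulated sum is doubled by the warp, while the modulated sum equals the native total, which is why volumes measured on normalised maps need modulation.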
> 5. In order to control the large volumes I have also tried the following
> function from John Ashburner
> V = spm_vol(spm_get(1,'*_seg*.img'));
> vol = 0;
> for i=1:V.dim(3),
>     img = spm_slice_vol(V,spm_matrix([0 0 i]),V.dim(1:2),0);
>     vol = vol + sum(img(:));
> end;
> fprintf('%g voxels, %g litres\n', vol, vol*det(V.mat(1:3,1:3))*1e-6);
> for GM: 838197 voxels, -0.838197 litres
> for WM: 553626 voxels, -0.553626 litres
> for CSF: 1.01618e+006 voxels, -1.01618 litres
>
> why is there a negative sign?
See my response from yesterday. If you want to figure out the total volume,
then use the script on Tom N's website and modify it to
fprintf('%g voxels, %g litres\n', vol, vol*abs(det(V.mat(1:3,1:3)))*1e-6);
The reason for the negative volumes is that the voxel-to-world mapping flips
from a left- to a right-handed co-ordinate system. This mapping therefore
has a negative determinant. The script was originally written for SPM99,
where things were done slightly differently. It works for SPM2 - providing
you take the absolute value of the determinant.
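The sign issue can be demonstrated numerically. This is an illustrative sketch, not SPM code: the matrix below is a made-up voxel-to-world mapping with 1 mm voxels and a flipped x-axis, and the voxel count is the GM figure quoted above.

```python
import numpy as np

# Hypothetical voxel-to-world matrix: 1 mm isotropic voxels, with the x-axis
# flipped, i.e. the mapping goes from a left- to a right-handed system.
M = np.array([[-1.0, 0.0, 0.0],
              [ 0.0, 1.0, 0.0],
              [ 0.0, 0.0, 1.0]])

n_voxels = 838197.0                  # summed GM voxel count from the script above
det_M = np.linalg.det(M)             # negative, because the mapping flips handedness
litres = n_voxels * abs(det_M) * 1e-6

print(det_M < 0, round(litres, 3))  # True 0.838
```

Taking abs() of the determinant recovers the physical voxel volume regardless of the handedness of the coordinate system, which is exactly the one-character fix to the fprintf line.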
> whole brain volume (GM/WM/CSF) is about 2.397 liters!!!
I make it 1.3918 litres (CSF doesn't really count). The CSF volume includes a
whole load of other stuff, so it is not at all accurate.
Best regards,
-John