> I'm hoping someone else around here has run into a similar problem and has
> a solution to share. We have a series of Time 1 & Time 2 PD
> scans that we would like to interrogate for longitudinal tissue changes,
> but for the time being we are slowed down by issues in the tissue
> segmentation process. Our PD images have decent resolution with visible
> differentiation of the grey/white border, but the CSF component is proving
> troublesome. Standard segmentation (VBM5) yields good grey/white
> dissociation, but the cortical rim is contaminated by CSF "bleed through"
> on the grey segments. Also, the ventricles (not the ventricular rims) are
> lit up and assigned as grey matter. Does anyone have a set of PD tissue
> priors that he/she could share and/or recommendations on segmentation
> settings or the like with images of this sequence type?
If the difference in intensity between CSF and grey matter is not clear,
then the segmentation is unlikely to work well. Assignment to the
different tissue classes is based mostly on the intensities of the voxels,
so if GM and CSF have very similar intensities, the algorithm won't be able
to separate them reliably.
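To illustrate the point, here is a minimal sketch of intensity-based classification with equal priors. This is not SPM code, and the class means and standard deviations are made-up illustrative numbers; in a real segmentation these parameters are estimated from the image itself.

```python
import numpy as np

# Illustrative intensity models (mean, sd) for each tissue class.
# These numbers are made up; real values are estimated from the image.
classes = {"GM": (100.0, 10.0), "CSF": (95.0, 10.0), "WM": (140.0, 10.0)}

def gauss(x, mu, sd):
    """Gaussian likelihood of intensity x under one tissue class."""
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

def posteriors(x):
    """Posterior class probabilities with equal priors:
    likelihoods normalised to sum to one."""
    lik = {k: gauss(x, mu, sd) for k, (mu, sd) in classes.items()}
    z = sum(lik.values())
    return {k: v / z for k, v in lik.items()}

p = posteriors(98.0)  # a voxel intensity lying between the GM and CSF means
# With the GM and CSF distributions this close, the two posteriors come out
# nearly equal, so the assignment is essentially a coin toss - which is
# exactly the "bleed through" behaviour described above.
```

When the WM mean is well separated, the same voxel gets a near-zero WM posterior, so the grey/white boundary can still look fine even while GM/CSF is ambiguous.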
PD tissue priors should be the same as T1 tissue priors. The same tissue
distribution should be expected irrespective of the contrast used to image
it. However, having additional tissue classes may help the segmentation.
For example, if there are tissue probability maps for non-brain tissue, then it
may be possible to segment out the boundary between CSF and skull rather
better. If this leads to a better characterisation of the CSF intensity
distribution, then a better separation between GM and CSF may be obtained.
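The same mixture picture also shows why spatial priors matter for the lit-up ventricles: the posterior is the intensity likelihood weighted by the tissue probability map at that voxel. A hedged sketch (again with made-up illustrative numbers, not SPM code) for a voxel deep inside the ventricles:

```python
import numpy as np

def gauss(x, mu, sd):
    """Gaussian likelihood of intensity x under one tissue class."""
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

# Ambiguous intensity: GM and CSF likelihoods are nearly equal
# (illustrative parameter values).
classes = {"GM": (100.0, 10.0), "CSF": (95.0, 10.0)}
x = 98.0
lik = {k: gauss(x, mu, sd) for k, (mu, sd) in classes.items()}

# Spatial priors from a tissue probability map for a voxel deep inside
# the ventricles: CSF is far more probable there a priori.
prior = {"GM": 0.05, "CSF": 0.95}

post_unnorm = {k: lik[k] * prior[k] for k in classes}
z = sum(post_unnorm.values())
post = {k: v / z for k, v in post_unnorm.items()}
# post["CSF"] now dominates even though the intensities barely differ,
# so the ventricle interior is no longer assigned to grey matter.
```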
> We have high-res T1 scans at Time 1, but not at Time 2. My thought for
> these (at least for Time 1) was to coregister and reslice the PD to fit the
> T1 scan; segment the T1 scan; and apply the grey matter segment as a mask
> on the PD Time 1 data. But, this still leaves me lost for Time 2, hence
> some interest in seeing if we can get the PD segments for Time 1 and Time 2
> without this complicated (if not questionable) masking scenario.
It's not very elegant, but you may be able to generate subject-specific priors
from the T1 by producing spatially normalised GM, WM and CSF maps and
smoothing them by about 4mm. Then, when you segment the PD images, you could
use these, rather than the images in the spm\tpm directory. I don't know
what the side effects of doing this may be, or whether you'd be able to get
the results of the work past the reviewers - but a proper generative model
that uses all three images together would be a bit too tricky.
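The smoothing step might look something like the following. This is only a sketch in Python: in practice you would use SPM's smoothing with a 4mm FWHM kernel on the normalised segment images, and the function and variable names here are hypothetical. The one real detail is the FWHM-to-sigma conversion, since `gaussian_filter` takes a sigma in voxels rather than an FWHM in mm.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_prior(seg, voxel_size_mm, fwhm_mm=4.0):
    """Smooth a spatially normalised tissue segment (a 3-D probability
    map with values in [0, 1]) so it can serve as a subject-specific
    prior.  FWHM in mm is converted to the sigma (in voxels) expected
    by gaussian_filter: sigma = FWHM / (2 * sqrt(2 * ln 2))."""
    sigma_vox = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0))) / voxel_size_mm
    return gaussian_filter(seg, sigma=sigma_vox)

# Toy example: a single-voxel "segment" spreads out under smoothing
# but keeps its total probability mass.
seg = np.zeros((21, 21, 21))
seg[10, 10, 10] = 1.0
prior = smooth_prior(seg, voxel_size_mm=2.0)
```

The smoothed GM, WM and CSF maps would then stand in for the images in spm\tpm when segmenting the PD scans; you would probably also want to rescale the maps (together with a non-brain class) so they sum to one at every voxel.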