The algorithm is essentially the same as that described in the 2005
paper, but with a few changes that make it more robust and hopefully
more accurate.

1) The tissue probability maps include a few new tissues (bone, soft
tissue and air), so they should encode a better model of the head.
These tissue probability maps have also been smoothed in a slightly
different way, to ensure that there are no zeros anywhere and that the
logarithms of the values are reasonably smooth (a rough sketch of this
idea is given after the list below).  Some differences in the results
will arise simply from using a different set of tissue priors.  The
extra tissues in the model give a number of advantages:

a) The initial affine registration is more robust because it expects
bone and scalp outside the brain.  The older implementation didn't have
this information, so the initial affine registration often caused
problems.

b) By knowing about the existence of bone, the model is better able to
separate it from CSF (giving better estimates of total intracranial
volume, TIV).  With a more accurate idea about CSF, it should also be
able to separate GM from CSF more accurately.  Similarly, it expects
there to be some soft tissue close to the brain, which may have
intensities similar to GM.  This may help to prevent some of that soft
tissue from being misclassified as GM.

c) Sometimes it is helpful to identify other tissue types.  For example,
knowledge of the scalp surface and bone may be useful for M/EEG source
localisation.  Another aim was to have a better chance of separating
tissue from air in the head (e.g. the sinuses), which we hope could lead
to improvements in EPI distortion correction.

d) The algorithm now has more chance of identifying CSF from high
quality CT images.  This does not work so well for CT with thick slices,
because the CSF around the brain has its intensity dominated by partial
volume between soft tissue and bone.  However, it may work for data with
thinner slices.
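
To make the point about floors and smooth logarithms concrete, here is
a minimal Python/NumPy sketch of the smoothing idea from point 1 above.
It is illustrative only (not the actual SPM code), and the fwhm_vox and
floor values are invented:

import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_tpm(priors, fwhm_vox=2.0, floor=1e-4):
    # Smooth each class map, impose a small floor so that no voxel is
    # exactly zero, then renormalise so the classes sum to one.
    sigma = fwhm_vox / np.sqrt(8.0 * np.log(2.0))  # FWHM -> Gaussian sigma
    smoothed = [gaussian_filter(p, sigma) + floor for p in priors]
    total = np.sum(smoothed, axis=0)
    return [s / total for s in smoothed]

# Toy example: three "classes" on a small grid, one with hard zeros.
rng = np.random.default_rng(0)
raw = [rng.random((16, 16, 16)) for _ in range(2)]
raw.append(np.zeros((16, 16, 16)))      # a class that is zero everywhere
tpm = smooth_tpm(raw)
assert all((p > 0).all() for p in tpm)  # so the logarithms are finite
log_tpm = [np.log(p) for p in tpm]      # and reasonably smooth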

2) The deformation model is now more flexible than the old one.
Previously, only about 1,000 parameters were used to model the shape of
the head, which is nowhere near enough.  Some of the technology that
went into Dartel has been used as a framework for much more detailed
deformation modelling (typically with about 700,000 parameters).
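
As a back-of-the-envelope illustration of where a number like 700,000
comes from (the grid size below is my assumption, not the one actually
used): three displacement components are estimated at every node of a
reasonably fine grid.

# Rough parameter counts for the two deformation models (illustrative
# arithmetic only; the grid size is an assumed round number).
old_params = 1000            # low-dimensional warp, as in the old model

grid = (62, 62, 62)          # an assumed deformation grid
new_params = 3 * grid[0] * grid[1] * grid[2]  # x, y and z per grid node
print(new_params)            # 714984 -- the order of the 700,000 quoted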

3) The way that the mixing proportions are used has been changed
slightly.  This may bias some aspects of the segmentation more towards
the information in the template, which may be a good or a bad thing.
Mostly good, I hope.

4) There is now the possibility of modelling multi-spectral images,
rather than being limited to images of a single modality.
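
As a toy sketch of what multi-spectral modelling involves (the means,
covariances and mixing proportions below are invented numbers, and this
is not SPM's implementation): each class gets a mean vector and a
covariance matrix over all channels, and a voxel's class memberships
are computed from its joint intensities.

import numpy as np
from scipy.stats import multivariate_normal

# Two channels (e.g. T1w and T2w), two classes; each class is a
# multivariate Gaussian over the joint channel intensities.
means = [np.array([0.8, 0.3]), np.array([0.5, 0.7])]
covs  = [0.01 * np.eye(2), 0.02 * np.eye(2)]
mix   = np.array([0.6, 0.4])          # mixing proportions

x = np.array([0.75, 0.35])            # one voxel's two intensities
lik = np.array([multivariate_normal.pdf(x, m, c)
                for m, c in zip(means, covs)])
resp = mix * lik / np.sum(mix * lik)  # posterior class memberships
print(resp)                           # heavily favours the first class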

5) For single-modality images, there is the option to use a
non-parametric (histogram) representation of the intensity distributions
of the different classes.  This avoids some of the local optima that a
mixture of Gaussians model can fall into.  The non-parametric option can
also be used for multi-channel data, but it does not work well there,
because the histograms are only 1D: outer products of 1D histograms are
used to represent the multi-spectral intensity distributions, which
implicitly treats the channels as independent within each class and is
not an especially good approximation.
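
The weakness is easy to demonstrate with a toy Python example
(illustrative only, not SPM's actual representation): when two channels
are strongly correlated, the outer product of their 1D histograms
treats them as independent and misplaces most of the probability mass.

import numpy as np

rng = np.random.default_rng(1)
ch1 = rng.random(10000)
ch2 = ch1                      # a perfectly correlated second channel

bins = np.linspace(0.0, 1.0, 11)
h1, _ = np.histogram(ch1, bins)
h2, _ = np.histogram(ch2, bins)
joint, _, _ = np.histogram2d(ch1, ch2, bins=[bins, bins])

outer = np.outer(h1, h2) / h1.sum()   # the independence approximation
# The true joint mass sits on the diagonal; the outer product spreads
# it across the whole plane.  Prints roughly 1.8, out of a maximum
# possible of 2, so most of the mass is misplaced.
print(np.abs(joint - outer).sum() / joint.sum())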

6) The strategy used by the initial affine registration is now closer to
that used by the registration component of the main segmentation
routine.  This provides additional robustness to poor starting
estimates.

7) The UI is more flexible, so the TPMs may be refined further to
include additional classes.  Treating the brain as only GM and WM is
really a bit too simplistic.  To have any chance of achieving accurate
segmentation of the thalamus or striatum, additional types of GM need to
be included in the brain model.  An eyeball tissue class would also help
a lot
( https://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=ind0908&L=SPM&P=R13646 ).
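
As a toy illustration of how a TPM set might be refined with an extra
class (a pure sketch: split_class and the mask are invented, and the
only real constraint it relies on is that the tissue priors must sum to
one at every voxel):

import numpy as np

def split_class(priors, idx, mask):
    # Split class idx into two classes, with the mask (values in [0,1])
    # giving the fraction assigned to the new class.  The voxelwise sum
    # over all classes remains one.
    new = priors[idx] * mask
    old = priors[idx] * (1.0 - mask)
    out = priors[:idx] + [old, new] + priors[idx + 1:]
    assert np.allclose(np.sum(out, axis=0), 1.0)
    return out

# e.g. split a GM prior into "cortical" and "deep" GM, with a made-up mask.
gm, wm, rest = (np.full((8, 8, 8), v) for v in (0.5, 0.3, 0.2))
deep = np.zeros((8, 8, 8))
deep[2:6, 2:6, 2:6] = 0.8
tpms = split_class([gm, wm, rest], 0, deep)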


That's about it.

Best regards,
-John

On Fri, 2010-02-12 at 13:50 +0000, João Duarte wrote:
> Dear SPMers,
> 
> in SPM8, what's the difference between the "Segment" button and the
> "New Segmentation"?
> 
> Thanks
> 
> JD


-- 
John Ashburner <[log in to unmask]>