Dear John, Koen, and all-
I was also wondering how SPM's method of inhomogeneity correction (by the
preliminary spm_flatten.m) would compare to the method provided by the EMS
(Expectation Maximization Segmentation) tool
(http://bilbo.esat.kuleuven.ac.be/web-pages/downloads/ems/ems.html)? And how
does the latter compare to the others tested by Arnold et al. in NeuroImage
13 (5), 931-943, 2001?
In addition, I would most certainly appreciate any comments on the
segmentation offered by EMS. As far as I can see, it produces equivalent
*seg*.img's (but beware: seg1 is WM, seg2 is GM!) which may benefit from
being further "cleaned up" and could then be entered into VBM analyses.
However, the procedure runs rather slowly, in particular when making use of
the Markov random field. I have also noticed more "contaminations" of
cleaned-up GM partitions obtained by EMS, SPM's brain extraction, and ImCalc
functionality with, for instance, material in the dural sinuses (e.g., the
superior sagittal sinus) than I have seen in cleaned-up GM partitions
obtained by SPM's intrinsic segmentation, brain extraction, and ImCalc
functionality. Maybe there would be a way to make use of EMS' seg4-6.img's
to clean up the segments? It doesn't look like it, though...
TIA-
Andreas
****************************************************************************
Dr. Andreas J. Bartsch           phone: +49 (0)931-201-0
Division of Neuroradiology       ecr.:  -34791
BJMU Wuerzburg                   pager: #5325
Josef-Schneider-Str. 11          fax:   +49 (0)931-201-34685
97080 Wuerzburg                  email: [log in to unmask]
Germany                                 [log in to unmask]
****************************************************************************
> Has anybody any comments on SPM's vs. Gary Glover's method of inhomogeneity
> correction (the latter has been kindly posted by Kalina on her website at
> http://www-psych.stanford.edu/~kalina/SPM99/Tools/vol_homocor.html)? Both
> seem to work quite well, but they do not (of course) produce identical
> results, which becomes obvious when subtracting their results from each
> other. Maybe someone has evaluated the two in a more detailed comparison
> with each other.
> In general, I have found inhomogeneity corrections quite useful for VBM
> data, even when acquired at 1.5 T. Basically, I am wondering about the
> (dis)advantages of the above two algorithms and whether it would make sense
> to run them both consecutively over the data. If yes, which one first? If
> not, what would be the danger of "double" correcting by the two methods?
The SPM99 bias correction algorithm is documented in the appendix of:
J. Ashburner and K. J. Friston. "Voxel-Based Morphometry - The Methods"
NeuroImage 11:805-821, 2000.
Unfortunately, it is not without its problems, as pointed out in:
Arnold, J. B., Liow, J. S., Schaper, K. A., Stern, J. J., Sled, J. G.,
Shattuck, D. W., Worth, A. J., Cohen, M. S., Leahy, R. M., Mazziotta,
J. C. and Rottenberg, D. A.. "Qualitative and quantitative evaluation
of six algorithms for correcting intensity nonuniformity effect."
NeuroImage 13 (5), 931-943, 2001.
The reason for this is that the SPM99 algorithm effectively attempts to
minimise the entropy of the intensity distribution. This is a successful
strategy for log-transformed intensities, but not so good for the original
intensities when modelling a multiplicative bias. The reason is that
scaling an image uniformly by zero results in the sharpest possible peak
in the intensity distribution. A couple of people use a strategy that
involves constraining the average intensity of the bias-corrected image to
remain constant. This appears to give reasonable results.
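To make that degeneracy concrete, here is a small NumPy sketch (purely illustrative, not SPM code): with fixed histogram bins over the raw intensities, uniformly scaling the image towards zero crams all voxels into a few bins and so lowers the histogram entropy, which is exactly the trivial minimum a multiplicative-bias model can exploit. After a log transform the same scale factor becomes an additive shift, which leaves the shape of the distribution unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
# toy "image": two tissue classes with different mean intensities
img = np.concatenate([rng.normal(100.0, 10.0, 5000),
                      rng.normal(60.0, 8.0, 5000)])

def hist_entropy(x, edges):
    """Shannon entropy of a histogram with FIXED bin edges."""
    counts, _ = np.histogram(x, bins=edges)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

edges = np.linspace(0.0, 150.0, 65)        # fixed bins over raw intensities
e_raw = hist_entropy(img, edges)
e_scaled = hist_entropy(0.1 * img, edges)  # a "bias" that shrinks the image

# the down-scaled image has the sharper (lower-entropy) histogram, so
# entropy minimisation on raw intensities rewards shrinking the image
assert e_scaled < e_raw

# in log space the scaling is just a shift, log(0.1*img) = log(0.1) + log(img),
# so the distribution's shape is unchanged and the degeneracy disappears
```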
The approach used by SPM99 is to constrain the bias field to average
to one. This has the unfortunate effect of introducing a bowl shape into
the estimated bias, as the algorithm attempts to reduce the intensity
within the head, and compensates for this by scaling up the intensity of
the background. This effect is particularly apparent in data with very
little bias. Although the bias correction in SPM99 is slightly flawed, it
still usually allows a better segmentation than would be obtained without
it.
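For what it's worth, the alternative constraint mentioned above (keeping the mean of the corrected image fixed, rather than forcing the bias field itself to average to one) amounts to a simple rescaling after dividing out the field. A hypothetical NumPy sketch, not the SPM implementation; the helper name and the toy bias estimate are assumptions:

```python
import numpy as np

def correct_bias(img, bias, preserve_mean=True):
    """Divide out an estimated multiplicative bias field.

    With preserve_mean=True the result is rescaled so the corrected image
    keeps the original mean intensity, removing the global-scale degree of
    freedom without pushing the estimated field towards a mean-one "bowl".
    (Illustrative helper only; estimating `bias` is the hard part and is
    not shown here.)
    """
    out = img / bias
    if preserve_mean:
        out = out * (img.mean() / out.mean())
    return out

# toy 1-D example: flat object with a smooth dip masquerading as bias
x = np.linspace(-1.0, 1.0, 201)
bias = 1.0 - 0.2 * np.exp(-(x / 0.4) ** 2)  # estimated multiplicative field
img = 100.0 * bias                           # biased "image" of a flat object

corrected = correct_bias(img, bias)
# the object is flattened and the overall intensity level is preserved
assert np.allclose(corrected.mean(), img.mean())
assert np.std(corrected) < 1e-6
```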
Another issue relates to how much intensity non-uniformity should be
removed. If you remove all of it, then there is not much brain left in the
image. This can be thought of as the number of degrees of freedom used to
represent the bias field, and is a similar issue to determining the optimum
amount of regularisation for spatial normalisation. The same algorithm will
produce very different results with different amounts of regularisation.
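The degrees-of-freedom point can be illustrated with a toy 1-D profile (my own sketch, not SPM's bias model): a low-order fit removes only the smooth trend, while a model with many degrees of freedom soaks up the anatomical variation as well, leaving "not much brain" in the residual.

```python
import numpy as np
from numpy.polynomial import Chebyshev

x = np.linspace(-1.0, 1.0, 401)
bias = 0.3 * x                           # smooth "bias" (log-intensity units)
anatomy = 0.5 * np.sin(8.0 * np.pi * x)  # faster anatomical variation
signal = anatomy + bias

residual_std = {}
for deg in (1, 45):
    fit = Chebyshev.fit(x, signal, deg)(x)  # bias model with deg+1 DOF
    residual_std[deg] = float(np.std(signal - fit))

# the 2-DOF model removes the linear bias and leaves the anatomy intact;
# the 46-DOF model fits the anatomy too, leaving almost nothing behind
assert residual_std[45] < 0.1 * residual_std[1]
```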
There is an early prototype of the SPM2 bias correction algorithm available
from:
ftp://ftp.fil.ion.ucl.ac.uk/spm/flatten
This algorithm should avoid some of the pitfalls of the SPM99 approach. It
is not the final version that will be available with SPM2 (which will use a
non-parametric rather than a parametric representation of the intensity
distribution), but it did work with the datasets I tried it with.
Best regards,
-John
--
Dr John Ashburner.
Functional Imaging Lab., 12 Queen Square, London WC1N 3BG, UK.
tel: +44 (0)20 78337491 or +44 (0)20 78373611 x4381
fax: +44 (0)20 78131420 http://www.fil.ion.ucl.ac.uk/~john