A number of coregistration methods have been objectively compared at:
        http://www.vuse.vanderbilt.edu:80/~jayw/
You may find this useful.

Other coregistration validation work that I know about is:
L. Barnden, R. Kwiatek, Y. Lau, B. Hutton, L. Thurfjell, K. Pile and C. Rowe.
"Validation of Fully Automatic Brain SPECT to MR Co-registration"
European Journal of Nuclear Medicine 27:147--154 (2000).

If you want to evaluate the methods with your own data, then I guess you
could devise statistical tests based on a number of observers assigning
some kind of score to how well coregistered the data are.
Alternatively, you may want to use some measure of consistency, i.e.,
register A to B, B to C, C to D etc., and see how well A and D end up being
registered.
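As a rough sketch of that consistency check, assuming each registration is
available as a 4x4 affine matrix in homogeneous coordinates (the matrix and
point-set names here are hypothetical, not from any particular package):

```python
import numpy as np

def compose(*mats):
    """Compose 4x4 affine transforms; the first argument is applied first."""
    out = np.eye(4)
    for m in mats:
        out = m @ out
    return out

def consistency_error(T_ab, T_bc, T_cd, T_ad, points):
    """Mean distance between points mapped from A to D directly (T_ad)
    and via the chain A->B->C->D.  Zero means perfect consistency."""
    pts = np.c_[points, np.ones(len(points))]        # homogeneous coords
    via = (compose(T_ab, T_bc, T_cd) @ pts.T).T[:, :3]
    direct = (T_ad @ pts.T).T[:, :3]
    return np.linalg.norm(via - direct, axis=1).mean()
```

Note that consistency is necessary but not sufficient: a method that applies
the identity transform everywhere is perfectly consistent but useless.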

A good evaluation for spatial normalisation is rather more tricky, as there
are so many different approaches out there.  The simplest is to identify points
in the brain images of several subjects, normalise them, and see how close
together all the points are after normalisation.  Alternatively, you could do
it with lines or surfaces.
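The point-based version can be quantified very simply.  A minimal sketch,
assuming you have corresponding anatomical landmarks identified in each
subject's normalised image (the array layout is an assumption of mine):

```python
import numpy as np

def landmark_spread(landmarks):
    """landmarks: (n_subjects, n_points, 3) array of corresponding
    landmark coordinates after spatial normalisation.
    Returns, per landmark, the RMS distance of each subject's point
    from the across-subject mean position.  Smaller is better."""
    mean_pos = landmarks.mean(axis=0)                # (n_points, 3)
    dists = np.linalg.norm(landmarks - mean_pos, axis=2)
    return np.sqrt((dists ** 2).mean(axis=0))        # (n_points,)
```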

Another approach is to segment structures in the different brain images and
look at the overlap of the structures.  I am not that keen on this approach
though (for cortical structures anyway).  It may be slightly better if the
structures were smoothed first so the results were more of a distance measure.
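For what it's worth, the overlap is usually summarised as a Dice coefficient,
and the smoothed variant I have in mind might look something like this (a
sketch only, assuming binary masks stored as numpy arrays; the default FWHM
is an arbitrary choice of mine):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dice(a, b):
    """Dice overlap of two binary masks: 2|A&B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def soft_dice(a, b, fwhm_vox=2.0):
    """Dice-like overlap on Gaussian-smoothed masks, so that near
    misses are penalised less harshly than with hard binary overlap."""
    sd = fwhm_vox / np.sqrt(8.0 * np.log(2.0))   # FWHM -> std. deviation
    sa = gaussian_filter(a.astype(float), sd)
    sb = gaussian_filter(b.astype(float), sd)
    return 2.0 * (sa * sb).sum() / ((sa ** 2).sum() + (sb ** 2).sum())
```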

Another method is to take an image of a subject, warp it in some way, and
then try to automatically unwarp it by registering it with the original.
This method is not so good.  If a researcher uses their own warping model to
deform the images, then (surprise, surprise) their own registration method
will probably be found to work best.

The above approaches are based on matching structure, but the objective of
spatial normalisation for functional imaging is to match function.  Also, it
is not just about getting the right bits of the brain as close together as
possible.  Another factor is how much distortion occurs in the process.
Methods that distort a lot usually introduce a lot of local volumetric
changes.  This means that signal from some subjects will be reduced (due to
shrinkage), whereas others will have increased activation signal after
warping (increase in volume).  A more appropriate approach is to spatially
normalise a lot of functional data from different subjects and see how well
aligned the activations are.  This could be done by seeing which methods give
the most significant results in group studies.  If this is done though, it is
probably worth trying out different amounts of smoothing.  What works best at
one smoothness need not work the best at another.

There are other people out there with different views, so I hope they chip in
with their comments.

Best regards,
-John

On Friday 13 July 2001 23:36, Frank Hillary wrote:
> SPMers,
>
> Is there an objective way to compare separate coregistration methods?  That
> is, after using separate coregistration methods, how can they be compared
> empirically to determine which provides the optimal fit? Similarly, I would
> like to know which normalization procedures are best suited for my sample
> (brain injury).  This issue of having a "gold standard" with which to
> compare arises for normalization as well.
>
> Thank you for any suggestions.
>
> Frank Hillary

--
Dr John Ashburner.
Wellcome Department of Cognitive Neurology.
12 Queen Square, London WC1N 3BG, UK.
tel: +44 (0)20 78337491 or +44 (0)20 78373611 x4381
fax: +44 (0)20 78131420
http://www.fil.ion.ucl.ac.uk/~john
mail: [log in to unmask]