> I'd like to preface my questions by saying I really like the concept
> of DARTEL and the results presented in the Neuroimage paper. My
> questions are driven by my desire to learn more about how the steps in
> DARTEL are implemented and how they might compare to optimized VBM.
>
> I have several questions about how DARTEL operates:
>
> (1) From the user guide, it seems that the initial import only uses
> the affine portion of the seg_sn.mat file.... is this correct?
It actually only uses the rigid-body component of this. In the Segmentation
code there is a hidden feature that I had hoped to use for analysing
deformation fields: it decomposes the deformations into a rigid-body
component and a small-deformation model parameterised by only about 1000
basis functions. It is this rigid-body part that is used during import, so
that the flow fields parameterise all of the shape differences.
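To make the idea concrete, here is a minimal sketch (my illustration, not the SPM code) of one way to pull a rigid-body component out of a full affine transform: a polar decomposition of the 3x3 part gives the closest rotation, which together with the translation forms a rigid-body approximation.

```python
import numpy as np

def rigid_body_part(affine):
    """Return the rigid-body (rotation + translation) component of a
    4x4 affine matrix, via an SVD-based polar decomposition."""
    A = affine[:3, :3]
    U, _, Vt = np.linalg.svd(A)
    R = U @ Vt
    if np.linalg.det(R) < 0:          # flip a column to avoid reflections
        U[:, -1] *= -1
        R = U @ Vt
    rigid = np.eye(4)
    rigid[:3, :3] = R                 # pure rotation
    rigid[:3, 3] = affine[:3, 3]      # keep the translation
    return rigid

# Example: an affine with anisotropic scaling; its rigid part is a rotation.
aff = np.diag([1.2, 0.9, 1.1, 1.0])
R = rigid_body_part(aff)
```

The exact decomposition used inside the Segmentation code may differ; this just shows the general idea of separating the rigid part from the rest.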
>
> (2) Has anyone investigated the effect of linear versus non-linear
> segmentation seg_sn.mat files on the initial import (e.g. in unified
> segmentation in SPM5, does including non-linear terms alter the affine
> component)? Can one simply use the normalization routine to get an
> Affine transformation (and insert the prior list matrix into the mat
> file), if not why?
The affine component will be slightly influenced by including the nonlinear
component in the generative model. This is a bit like covarying out
confounding effects in a GLM: if you don't do it, then the parameter
estimates that are of interest will be less accurate.
Also, the hope is that the nonlinear part allows the tissue probability maps
to be overlaid more accurately.
>
> (3) How is DARTEL performing the segmentations? Are the outputs
> posterior probabilities? Does the procedure include bias correction?
> Can you used customized priors? Are the priors determined from the
> seg_sn.mat?
DARTEL doesn't actually estimate the segmentations itself. It uses the
parameters estimated by the Segmentation (stored in the seg_sn.mat file) to
generate tissue class images, but tissue class images that are in rigid
alignment. Note that there are two affine transform matrices stored in the
image headers, and the import actually makes use of both of them (so that
there is a mapping back to the un-imported data).
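A short sketch of how two voxel-to-world matrices give that mapping back (the matrix values below are hypothetical, and this is not SPM's actual header-reading code): composing one image's matrix with the inverse of the other's sends voxel indices in the imported image to the corresponding voxels of the original.

```python
import numpy as np

def voxel_map(M_src, M_dst):
    """Map voxel coordinates of a source image to voxel coordinates of a
    destination image, given their 4x4 voxel-to-world matrices."""
    return np.linalg.inv(M_dst) @ M_src

# Toy voxel-to-world matrices (made-up 2 mm and 1 mm grids).
M_import = np.array([[2., 0., 0.,  -90.],
                     [0., 2., 0., -126.],
                     [0., 0., 2.,  -72.],
                     [0., 0., 0.,    1.]])
M_orig = np.array([[1., 0., 0.,  -80.],
                   [0., 1., 0., -110.],
                   [0., 0., 1.,  -60.],
                   [0., 0., 0.,    1.]])

T = voxel_map(M_import, M_orig)       # imported voxel -> original voxel
vox = np.array([10., 20., 30., 1.])   # homogeneous voxel coordinate
print(T @ vox)                        # same point in the original's grid
```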
>
> (4) Does the segmentation use Hidden Markov Random Fields?
No. This may come with a later SPM version.
>
> (5) When using previous segments, did you mean to say use the segments
> in native space that are unmodulated?
??
>
> (6) I'm interested in looking at the CSF segments, particularly around
> the ventricles; you mentioned that including CSF in the processing is
> not a good idea because CSF segmentation is variable (especially
> around the edge of the brain). Can one apply another flow field to the
> CSF segments?
There are no tissue class images available for bone, fat, eyeballs, air etc,
which means that the outside border of the CSF is not properly modelled (I
think it is probably a legacy of skull-stripping algorithms being introduced
in processing pipelines in order to use registration algorithms such as AIR).
If good generative models of head anatomy are to be devised, then such
non-brain tissue probability maps would be very useful.
Best regards,
-John