> 1. Could you correct this? Depending on the kind of deformations required
> in a specific *df*, it would be necessary that *v* were not constant in
> order to achieve the optimal shape. So the influence of constant
> velocities on deformations is that when a deformation over a *dt* occurs,
> it is constrained and cannot reach the optimum shape. Then, if the *df*
> were smaller differentials, that influence would be smaller. So a very
> large number of iterations (as a theoretical idea; perhaps very
> time-consuming and not feasible in practice) would lead to better
> deformations, achieving almost the same deformation as we would obtain
> from a flow field in LDDMM, which matches images by modelling the
> evolution in time, with a smooth velocity vector field controlling the
> evolution.
I would need a bit more context to say more.
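If I follow the idea, though, it resembles integrating a stationary (time-constant) velocity field in many small steps. A minimal 1-D sketch of why more, smaller steps approach the exact flow (the field `v`, the step counts and the toy exact solution are illustrative assumptions, not DARTEL's actual scaling-and-squaring implementation):

```python
import math

def integrate_stationary(v, x0, t=1.0, n_steps=8):
    """Integrate dx/dt = v(x) for a stationary (time-constant) velocity
    field, using n_steps small Euler steps."""
    x, dt = x0, t / n_steps
    for _ in range(n_steps):
        x += dt * v(x)  # the velocity depends on position only, not on time
    return x

# Toy field v(x) = x, whose exact flow over unit time from x0 = 1 is e.
v = lambda x: x
exact = math.exp(1.0)
coarse = integrate_stationary(v, 1.0, n_steps=2)     # few, large steps
fine = integrate_stationary(v, 1.0, n_steps=1024)    # many small steps
# The fine integration approaches the exact flow; the coarse one is well off.
```

The same principle underlies taking many small *dt* steps in the question above: the discretisation error shrinks as the steps do.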
>
> 2. When the DARTEL model is applied to a set of images, we obtain their
> flow fields after the create_template step. But if we translated some of
> our images by a few pixels and replicated that study, would the
> deformations not be the same?
Not quite the same. The further that points need to travel, the less
accurate the resulting warps are.
> Does not the previous rigid-body transform, plus the voxel-size
> information encoded in the headers, translate the images so that the same
> deformation fields are obtained afterwards?
This should be the case, but it is not strictly a part of the DARTEL algorithm
that was described in the paper. It is something that is done to the data to
make things a bit better behaved.
> Is an effect of the fixed velocity?
Yes. The fixed velocity framework assigns velocities to the points in space
through which the deforming brain passes. If points move further, then they
pass through a greater range of velocity values. Each change in value
changes the speed at which each part of the brain moves.
Ideally, the parameterisation of shape should correspond with points in the
evolving brain, rather than points in the background space.
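To make this concrete, here is a toy 1-D sketch (the field and numbers are made up for illustration). Because velocities are attached to locations in the background space, translating a starting point changes which velocity values it passes through, so its displacement differs rather than simply translating along with it:

```python
def flow(v, x0, t=1.0, n=100):
    """Euler-integrate a point through a fixed (Eulerian) velocity field:
    the moving point picks up whatever velocity sits at its current location."""
    x, dt = x0, t / n
    for _ in range(n):
        x += dt * v(x)
    return x

# A spatially varying field: how fast you move depends on where you are.
v = lambda x: 1.0 + x * x

a = flow(v, 0.0)   # displacement of a point starting at 0.0
b = flow(v, 0.3)   # the same point, translated before integration
# (b - 0.3) differs from (a - 0.0): the shifted point travelled through
# different velocity values, so its displacement is not just translated.
```

This is essentially what happens when images are translated before DARTEL is run: the same anatomy ends up passing through different parts of the velocity field.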
In contrast, LDDMM uses a variable velocity framework. With a bit of creative
imagination (or some hard maths), you could envisage this as being a
variational approach that finds a shortest distance (geodesic) between the
initial and final configuration of the brain. Given a differential equation
with start and end points (boundary conditions) specified, it is possible to
figure out the nonlinear function that joins them together. This is what
LDDMM tries to do. The registration minimises the difference between an
individual brain and a deformed template, while also minimising a measure of
distance that the template needs to travel.
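In the usual notation (my paraphrase, not a quote from any particular paper), the objective described above is often written as minimising

```latex
E(v) \;=\; \int_0^1 \| v_t \|_V^2 \, dt \;+\; \frac{1}{\sigma^2}\,
\bigl\| I_0 \circ \varphi_1^{-1} - I_1 \bigr\|_2^2 ,
```

where the first term measures the length of the path taken by the deforming template \(I_0\) (and hence the geodesic distance), and the second measures the mismatch with the individual image \(I_1\) after warping by \(\varphi_1\), the map obtained by integrating the time-varying velocities \(v_t\) over unit time.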
An alternative approach would be to estimate the initial velocities needed to
shoot the template to its final configuration. Given the initial
configuration of points and the initial velocities associated with each
point, it is also possible to integrate the differential equation in order to
compute the same(ish) geodesic that would be derived by connecting the
initial and final configuration of points. Such an approach conserves the
"momentum" associated with each point in the brain.
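A toy sketch of the shooting idea, for a handful of 1-D landmarks rather than images (the Gaussian kernel, its width, and the function names are my illustrative assumptions): positions and momenta are integrated forward together from their initial values, and the total momentum is conserved along the way.

```python
import math

def shoot(points, momenta, sigma=1.0, t=1.0, n=100):
    """Minimal geodesic-shooting sketch for 1-D landmarks: given initial
    positions q and initial momenta p, Euler-integrate the Hamiltonian
    equations forward in time.  The velocity at each point is a
    kernel-weighted sum of the momenta, and the momenta evolve so that
    their total is conserved."""
    K = lambda a, b: math.exp(-(a - b) ** 2 / (2 * sigma ** 2))
    dK = lambda a, b: -(a - b) / sigma ** 2 * K(a, b)   # derivative of K in a
    q, p = list(points), list(momenta)
    dt = t / n
    for _ in range(n):
        # velocity of each point: kernel-smoothed momenta
        vq = [sum(K(qi, qj) * pj for qj, pj in zip(q, p)) for qi in q]
        # evolution of each momentum (Hamilton's equations)
        vp = [-pi * sum(dK(qi, qj) * pj for qj, pj in zip(q, p))
              for qi, pi in zip(q, p)]
        q = [qi + dt * vi for qi, vi in zip(q, vq)]
        p = [pi + dt * vpi for pi, vpi in zip(p, vp)]
    return q, p

# Two landmarks shot towards each other; their total momentum stays zero.
q, p = shoot([0.0, 2.0], [1.0, -1.0])
```

This is only the landmark analogue of the idea, not the image-based algorithm, but it shows the essential point: the initial momenta alone determine the whole trajectory.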
Geodesic distances between pairs of brains could be used for modelling their
relative shapes (see a recent paper by M I Miller et al in HBM).
Alternatively, the initial velocities required to shoot the same template
into alignment with each of the subjects in the study can also be used to
model the relative shapes of brains (see eg Lei Wang's IEEE TMI paper).
> This is something I understood from the paper *A fast
> diffeomorphic image registration algorithm* (Ashburner, 2007). Have I
> understood correctly? And, depending on the answer, does it have any
> influence on the results?
In practice, it may not actually make all that much difference for the kinds
of mapping studies that most people do. However, for other applications, it
may make more of a difference. I have an implementation of a geodesic
shooting approach, and intend to test it out properly fairly soon.
All the best,
-John