I think what you want to do is not totally straightforward, and I can't think of a workaround that would allow it to be done easily (i.e., without writing any extra MATLAB code).

If the average does not segment so well, this could indicate that the alignment is producing a slightly blurred average.  The average could be made crisper by decreasing the registration regularisation that is used.  You could rescale the default deformation regularisation settings by (say) a factor of 0.1.  This would give the registration more flexibility to make the warped images visually more similar to each other, which should give a crisper average that segments better.
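
As a rough sketch only, assuming SPM12's batch interface for serial longitudinal registration, where I believe the warping regularisation field is wparam (default [0 0 100 25 100]), the rescaling might look like the following.  File names are hypothetical:

    % Serial longitudinal registration with rescaled warping regularisation
    matlabbatch{1}.spm.tools.longit{1}.series.vols      = {
        'sub01_time1.nii,1'
        'sub01_time2.nii,1'};                        % hypothetical scans
    matlabbatch{1}.spm.tools.longit{1}.series.times     = [0 2/12];  % years from baseline
    matlabbatch{1}.spm.tools.longit{1}.series.noise     = NaN;       % estimate noise from the data
    matlabbatch{1}.spm.tools.longit{1}.series.wparam    = 0.1*[0 0 100 25 100];  % rescaled by 0.1
    matlabbatch{1}.spm.tools.longit{1}.series.bparam    = 1e6;       % default bias regularisation
    matlabbatch{1}.spm.tools.longit{1}.series.write_avg = 1;         % write the average image
    matlabbatch{1}.spm.tools.longit{1}.series.write_jac = 1;         % write Jacobian maps
    spm_jobman('run', matlabbatch);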

Any nonlinear image registration algorithm involves some sort of tradeoff between keeping the warps smooth and making the images more similar.  The longitudinal registration in SPM is no exception.  The default amount of regularisation worked well for the dataset I used when developing the algorithm, but may be set a bit too high for some other datasets.

Best regards,
-John


On 23 April 2018 at 12:22, Marina Papoutsi <[log in to unmask]> wrote:
Hi John,

Thank you very much for the quick response. It's greatly appreciated.

I have been having problems with partial volume effects in a longitudinal VBM analysis of Huntington's disease patients. When I compare two timepoints (~2 months apart) across all my participants, I get a large periventricular band. At ~2 months, I am not expecting to see any changes due to disease progression.

A colleague from the FIL suggested that instead of segmenting the average image (output from the longitudinal registration), I should segment each timepoint separately and then create an average from those segmentations, which I could use for Dartel.

The meanrc* images (created by averaging the separate rc* images) look a bit cleaner than the rc_avg* images (created by segmenting the average T1w volume), with lower GM probability assigned to periventricular voxels. However, for the meanrc* images the qform matrix is the same as the sform matrix (since they are just an average), so normalisation of the c1 images at a later stage is not correct.
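
(For reference, the averaging itself can be done with something like spm_imcalc; this is a sketch only, assuming two timepoints, and the file names are hypothetical:

    % Average the two rc1* segmentations into a meanrc1* image
    spm_imcalc(char('rc1sub01_time1.nii', 'rc1sub01_time2.nii'), ...
               'meanrc1sub01.nii', '(i1+i2)/2');
)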

From your reply, I understand that each rc* image (for each timepoint) will have a different qform matrix. Is it correct to simply copy the qform matrix from each rc* image to the u_meanrc* Template when normalising each timepoint?
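
For concreteness, the kind of copy I mean might look like the following, assuming SPM's nifti class (where, as I understand it, mat0 holds the qform and mat the sform). Whether this is valid is exactly my question, and the file names are hypothetical:

    Nsrc = nifti('rc1sub01_time1.nii');  % rc* image for this timepoint
    Ndst = nifti('u_meanrc1sub01.nii');  % image whose qform would be replaced
    Ndst.mat0 = Nsrc.mat0;               % copy the qform only; the sform (mat) is untouched
    create(Ndst);                        % rewrite the header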

If this is not a correct approach, is there anything you can suggest to improve the segmentation of the average images? I have previously read about masking the c1* images with the c3* (CSF) images, but I was hoping there would be a better option available.
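
(The masking idea could presumably be sketched with spm_imcalc along these lines; the 0.5 CSF threshold is an arbitrary assumption, not a recommended value, and the file names are hypothetical:

    % Zero out c1 voxels wherever the CSF probability (c3) exceeds 0.5
    spm_imcalc(char('c1avg_sub01.nii', 'c3avg_sub01.nii'), ...
               'mc1avg_sub01.nii', 'i1.*(i2<0.5)');
)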

Many thanks again for the help,
Marina




--
Prof John Ashburner
Professor of Imaging Science
UCL Institute of Neurology
Queen Square
Wellcome Trust Centre for Neuroimaging
University College London
12 Queen Square, London, WC1N 3BG
E: [log in to unmask]  T: +44 (0)20 3448 4365
http://www.fil.ion.ucl.ac.uk/