Dear Robert,
the new longitudinal pipeline (which is in fact three different pipelines) is explained in our CAT12 paper:
https://www.biorxiv.org/content/10.1101/2022.06.11.495736v1.full
There is a good overview in the supplements (Suppl. Fig. 2).
The other answers are below...
On Tue, 30 Aug 2022 04:10:22 +0000, Robert Cadman <[log in to unmask]> wrote:
>Dear Christian and CAT12 experts,
>
>I am trying to work with the longitudinal processing module in CAT12.
>
>Within each MATLAB invocation I am only processing data for a single subject, so please assume my description refers to 3-4 images for a single subject at different timepoints.
>
>It appears that the first step, or an early step, is to perform a rigid registration on the input images so that they more or less overlap. These have the prefix 'r' on the input file name. I think of these r-prefixed images as being in 'realigned timepoint space', although you may have a better name for it.
>
>Then there is some bias correction and averaging to produce an image that is approximately the average of the 3-4 realigned images. This image has the prefix 'avg_' on the filename of the first input image. I think of this image as 'subject average space'.
>
>My first questions are about the deformation fields. For each timepoint I have a deformation field with the prefix 'y_r' on the name of the input file. I also have a field with prefix 'avg_y_r' on the name of the first input image. I was unable to find an explicit statement of what these deformations represent in the CAT12 manual.
These are the average deformations, which are finally applied to all time points.
>
>I believe that if I run SPM pushforward with the 'avg_y_r' field I can transform SPM space into subject average space. I also believe that SPM pushforward with the 'y_r' fields transforms SPM space into realigned timepoint space. Is that correct?
The avg_y images can be used directly for normalization with the normalization function in SPM12.
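For example, this could be sketched with SPM12's "Normalise: Write" batch module (the batch fields are standard SPM12 ones; the filenames below are placeholders, not actual output names):

```matlab
% Hedged sketch: apply an avg_y_r deformation to a coregistered image
% using the SPM12 "Normalise: Write" batch. Filenames are placeholders.
matlabbatch{1}.spm.spatial.normalise.write.subj(1).def      = {'avg_y_rSubj01_T1.nii'};
matlabbatch{1}.spm.spatial.normalise.write.subj(1).resample = {'p1rSubj01_T1.nii'};
matlabbatch{1}.spm.spatial.normalise.write.woptions.bb      = [-78 -112 -70; 78 76 85];
matlabbatch{1}.spm.spatial.normalise.write.woptions.vox     = [1.5 1.5 1.5];
matlabbatch{1}.spm.spatial.normalise.write.woptions.interp  = 1;  % trilinear
spm_jobman('run', matlabbatch);
```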
>
>If I have that correct, is there any way to write out the deformation field between the subject average space and a timepoint space?
No. Only the ageing pipeline additionally uses low-dimensional deformations, but these are not meaningful for other purposes and are not saved.
>
>A related question is about the deformation field output. Although I have cat.output.warps = [1 1]; in cat_defaults.m, I only get 'y_' deformation fields and never 'iy_' deformation fields. Is that expected? Is there any way to write out inverse deformation fields?
This would be too complicated to support. If you need something like that, simply use the average image, estimate the deformations, and save the inverse.
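Inverting a saved y_ field yourself can be sketched with SPM12's Deformations utility (spm.util.defs is the standard batch module; the filenames and the output name are placeholders):

```matlab
% Hedged sketch: invert a forward deformation field with the SPM12
% Deformations utility and save the result. Filenames are placeholders.
matlabbatch{1}.spm.util.defs.comp{1}.inv.comp{1}.def = {'y_avg_Subj01_T1.nii'};
matlabbatch{1}.spm.util.defs.comp{1}.inv.space       = {'avg_Subj01_T1.nii'};
matlabbatch{1}.spm.util.defs.out{1}.savedef.ofname   = 'inv_avg_Subj01';
matlabbatch{1}.spm.util.defs.out{1}.savedef.savedir.saveusr = {pwd};
spm_jobman('run', matlabbatch);
```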
>
>My other questions are about the tissue probability maps. It seems that with the default cat_defaults.m I get only TPMs for GM and WM, not CSF, and I only get them in realigned timepoint space, so prefixes 'p1r' and 'p2r'. If I modify cat_defaults.m to write out TPMs for CSF (and the tissue classes outside the brain) I also get them in subject average space, so I have prefixes 'p3r' and 'p3avg_'. So is there a way to get 'p1avg_' and 'p2avg_'? (A colleague suggests I should run the cross-sectional segmentation module on the average image to get the TPMs for GM and WM in the subject average space.)
This is also not prepared; the subject-specific TPMs would not be helpful and are only used internally. And yes, you could use the segmentation of the average image to get them.
>
>I also see files with prefixes 'p0r' and 'p0avg_', and I might be able to use 'p0avg_' instead of the TPMs. It looks like the points in that file should be 0 for background, 1 for CSF, 2 for GM, and 3 for WM. I see some voxels have fractional values, as if the map was transformed as something other than categorical data. Is this a bug, or something I did wrong?
No, please don't use the p0avg images as a TPM. And yes, this is not a simple label image with just 3 values; it considers partial-volume segmentation, which is reflected in the label.
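To illustrate reading such a label (a sketch only: spm_vol/spm_read_vols are standard SPM12 readers, the filename is a placeholder, and the triangular weighting is one plausible way to recover a GM fraction from the PVE label, not a CAT12 function):

```matlab
% Hedged sketch: read a p0 PVE label (0=BG, 1=CSF, 2=GM, 3=WM, with
% fractional values at tissue borders) and derive approximate fractions.
V    = spm_vol('p0avg_Subj01_T1.nii');   % placeholder filename
p0   = spm_read_vols(V);
gm   = max(0, 1 - abs(p0 - 2));          % e.g. p0 = 2.3 -> GM fraction ~0.7
hard = round(p0);                        % nearest hard label, if needed
```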
Best,
Christian
>
>I am using version 2043, although I upgraded very recently.
>
>Thanks for your help.
>
>Best,
>
>Robert
>