You are likely to encounter problems if you try to use SPM's longitudinal registration on images that have been skull-stripped.  To balance the smoothness of the nonlinear warps against the fit to the images, the longitudinal registration uses an estimate of the image noise.  By default, this noise is estimated from the air in the background, so if the images are skull-stripped, the estimate will be nonsense.  I'd suggest not skull-stripping.
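Conceptually (a minimal sketch, not SPM's actual estimator), you can see why a background-based noise estimate collapses on skull-stripped data:

```python
import numpy as np

# Illustrative only: simulate "tissue" and "air" voxel intensities, then
# estimate noise from the low-intensity background, as a crude stand-in for
# what a background-based noise estimate does.
rng = np.random.default_rng(0)

brain = rng.normal(1000.0, 50.0, size=5000)      # hypothetical tissue intensities
air = np.abs(rng.normal(0.0, 20.0, size=5000))   # hypothetical background noise

image = np.concatenate([brain, air])
stripped = np.concatenate([brain, np.zeros_like(air)])  # skull stripping zeros the background

def background_noise_sd(img, thresh=100.0):
    """Crude noise estimate: SD of low-intensity (background) voxels."""
    return img[img < thresh].std()

print(background_noise_sd(image))     # a sensible, non-zero noise estimate
print(background_noise_sd(stripped))  # 0.0 - the smoothness/fit balance breaks
```

With the background zeroed, the noise estimate is zero, and the trade-off between warp smoothness and data fit that depends on it becomes meaningless.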

I assume that your image volumes have good in-plane resolution, but with only 34 slices, I'd guess your through-plane resolution might be relatively poor.  If this is the case, be cautious about interpreting findings obtained from such data.

I have no idea why the images end up so strongly downsampled.  The rc*.nii images that DARTEL (and Shoot) work with should - by default - have the same voxel sizes and dimensions as the tissue priors used for the segmentation.  Maybe check what values you entered for the bounding box and voxel sizes of the normalised images, as these determine the dimensions of the normalised images.
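As a rough sketch of why those settings matter (the exact rounding SPM applies may differ, and the bounding-box values below are illustrative human-brain defaults, not anything appropriate for cats), the written-image dimensions follow from the bounding box and voxel size as roughly dim = (bb_max - bb_min)/vox + 1:

```python
import numpy as np

def normalised_dims(bb_min, bb_max, vox):
    """Approximate output dimensions implied by a bounding box (mm) and voxel size (mm)."""
    bb_min, bb_max, vox = (np.asarray(v, dtype=float) for v in (bb_min, bb_max, vox))
    return (np.round((bb_max - bb_min) / vox).astype(int) + 1).tolist()

# A human-sized bounding box with 3 mm voxels yields small matrix dimensions,
# which could explain an unexpectedly coarse output on animal data.
print(normalised_dims([-78, -112, -70], [78, 76, 85], [3, 3, 3]))  # [53, 64, 53]
```

If a bounding box and voxel size meant for human brains is applied to cat images, the resulting matrix can easily come out far smaller than the native resolution.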

best regards,
-John




From: SPM (Statistical Parametric Mapping) <[log in to unmask]> on behalf of Marek <[log in to unmask]>
Sent: 28 June 2019 15:16
To: [log in to unmask]
Subject: [SPM] VBM in SPM12 on animal sample (cats)
 

Hello.

I am working on my first voxel-based morphometry (VBM) analysis in SPM12, and I would like some advice on the pipeline I am following.

The subjects are 11 cats in a longitudinal study with five time-points (tp; the first time-point is the baseline). We would like to compare grey matter over time.

Steps performed:

1- conversion of the images from DICOM to NIfTI;

2- skull stripping (using FSL);

3- Serial Longitudinal Registration. Outputs: an average image "avg" (for each cat), Jacobians "j_" (every tp for each cat), divergence of velocity "dv_" (every tp for each cat), and deformation fields "y_" (every tp for each cat);

Time points are set taking into account the distance between the examinations.

4- Segmentation (SPM12 Segment). Images obtained: c1, c2, c3, c4 (grey matter, white matter, CSF, and the fourth tissue class), and also rc1, rc2, rc3 (the images required for DARTEL);

5- DARTEL (Create Templates). Output: six iterations of the common DARTEL template, plus "u_rc1" flow fields (one for each cat);

6- Jacobian files multiplied by the "c1" files (using ImCalc).

Expression: (i1.*i2)

The output files were "c1_j" images (every tp for each cat).

7- Warping to DARTEL space (with modulation!). Output: "mwc1_j" images.

8- Smoothing, taking into account the voxel size of the images after DARTEL: I set the FWHM to double the voxel size. Output: "smwc1_j" images.
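The Jacobian modulation in step 6 is just a voxelwise product; in NumPy terms (an illustrative analogue of ImCalc's (i1.*i2) expression, not SPM code, with made-up values):

```python
import numpy as np

# Hypothetical 2x2 slices: Jacobian determinants and c1 GM probabilities.
j = np.array([[1.1, 0.9],
              [1.0, 1.2]])    # local volume change (expansion > 1, contraction < 1)
c1 = np.array([[0.8, 0.5],
               [0.0, 0.9]])   # grey-matter probability per voxel

c1_j = j * c1  # elementwise product, like MATLAB's i1.*i2
print(c1_j)
```

Each output voxel is the GM probability scaled by the local volume change, which is what preserves tissue amounts under the deformation.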

Issue: when I checked the resolution of the images after the DARTEL computation, I noticed that they had been strongly downsampled (from 279x280x34 to 50x65x33). I looked for an explanation on the internet, but I couldn't find anything about animal samples.


To avoid this downsampling, I proceeded as follows:

steps 1, 2, 3, 4, as before, and

5- manual reorientation of the images to the atlas. This step was needed because no reorientation is performed in the following step;

6- Normalise (Estimate & Write). Output: "w_c1_j" images (every tp for each cat);

7- Smoothing, taking into account the initial voxel size. Output: "sw_c1_j" images (every tp for each cat).

Issue solved: the images obtained are no longer downsampled (from 279x280x34 to 505x607x70).
However, this second pipeline results in images without modulation.

Finally, a question about the downsampling of the images: do you know why I obtain these low-resolution images after DARTEL? Is it because of the nature of the sample (animals)?

I will proceed with the analyses, but for now I'd like some feedback on this pipeline.

I thank you all in advance for your replies. 
Marco