Dear Rezwan,

The approach of Sepulcre et al. (2006) is known as "optimized VBM": you segment the images once, create your own tissue probability maps (TPMs)/templates from the results, and then rely on those in a second segmentation step. This has been rather outdated/redundant since the unified segmentation algorithm was introduced with SPM5, see e.g. http://dbm.neuro.uni-jena.de/vbm/vbm5-for-spm5/ . In any case I would NOT turn to SPM2; go with SPM12 "Segment" instead. Whether to create your own TPMs or not is up to you; for that purpose you would probably need a rather large sample. For the group comparison, however, you might want to use the Dartel toolbox, or maybe better the Shooting toolbox, to warp together the GM, WM and CSF images obtained from unified segmentation; this should be more sensitive. For the longitudinal comparison I would rely on the longitudinal toolbox, as it seems to be the most sophisticated algorithm in SPM at the moment.
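For illustration, a rough matlabbatch sketch of such a pipeline could look like the following (all file paths/names are placeholders, omitted options fall back to the batch defaults, and you may prefer to set everything up via the Batch GUI instead):

% Minimal SPM12 "Segment" sketch for one subject, then Dartel template creation.
spm('defaults', 'fmri');
spm_jobman('initcfg');

tpm = fullfile(spm('Dir'), 'tpm', 'TPM.nii');
matlabbatch = {};
matlabbatch{1}.spm.spatial.preproc.channel.vols = {'/data/sub01/T1.nii,1'};
for t = 1:6
    matlabbatch{1}.spm.spatial.preproc.tissue(t).tpm = {sprintf('%s,%d', tpm, t)};
    if t <= 3
        % GM/WM/CSF: write native (c*) and Dartel-imported (rc*) segments
        matlabbatch{1}.spm.spatial.preproc.tissue(t).native = [1 1];
    else
        matlabbatch{1}.spm.spatial.preproc.tissue(t).native = [0 0];
    end
end
spm_jobman('run', matlabbatch);   % repeat for every subject

% Once all subjects are segmented, warp the rc1/rc2 images together with
% Dartel (the Shooting toolbox is analogous, under spm.tools.shoot.warp):
rc1 = cellstr(spm_select('FPListRec', '/data', '^rc1.*\.nii$'));
rc2 = cellstr(spm_select('FPListRec', '/data', '^rc2.*\.nii$'));
dartel = {};
dartel{1}.spm.tools.dartel.warp.images = {rc1, rc2};
spm_jobman('run', dartel);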

Sepulcre et al. (2006) ran two different segmentations/normalisations: for analysis 1 they created TPMs based on both patients and controls, whereas for analysis 2 the TPMs were based on the patients only. This is a valid approach.

For the longitudinal comparison they just used rigid-body transformations, as they assumed these to be sufficient (they actually claim this strategy to be more sensitive). Whether this holds true for your data is up to you to decide, but note that they cite Cardenas et al. (2003, Neurobiol Aging), whose findings are based on a small group of subjects and a particular algorithm (AIR 3.08). If you want to go with rigid-body transformations only, you can adjust the settings of the longitudinal toolbox accordingly (you should use this toolbox in any case because of features such as its inhomogeneity correction, see Ashburner & Ridgway, 2013, Front Neurosci for details); you could then compare the results to those obtained with the default settings including non-linear registrations.
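Something along these lines (again only a sketch: paths are placeholders, the parameter values shown are the usual defaults, and the exact field nesting has differed between SPM12 releases, so it is safest to regenerate the job via the Batch GUI):

% Pairwise longitudinal registration for one subject with two time points.
spm('defaults', 'fmri');
spm_jobman('initcfg');

matlabbatch{1}.spm.tools.longit.pairwise.vols1 = {'/data/sub01/T1_time1.nii,1'};
matlabbatch{1}.spm.tools.longit.pairwise.vols2 = {'/data/sub01/T1_time2.nii,1'};
matlabbatch{1}.spm.tools.longit.pairwise.tdif  = 1;    % time difference in years
matlabbatch{1}.spm.tools.longit.pairwise.noise = NaN;  % estimate noise from the data
% Default warping regularisation; increasing these penalties strongly
% suppresses the non-linear part, pushing the result towards rigid-only:
matlabbatch{1}.spm.tools.longit.pairwise.wparam = [0 0 100 25 100];
matlabbatch{1}.spm.tools.longit.pairwise.bparam = 1e6;  % bias/inhomogeneity regularisation
matlabbatch{1}.spm.tools.longit.pairwise.write_avg = 1; % mid-point average image
matlabbatch{1}.spm.tools.longit.pairwise.write_jac = 1; % Jacobian difference map
matlabbatch{1}.spm.tools.longit.pairwise.write_div = 1; % divergence (expansion/contraction rate)
matlabbatch{1}.spm.tools.longit.pairwise.write_def = 0;
spm_jobman('run', matlabbatch);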

Concerning the lack of effects: maybe you have to adjust for sex, age, total brain volume, ..., or maybe your sample size is still too small. I would suggest running ROI analyses to get a better impression of the data. Maybe some of the volumes are corrupted; this would of course affect the analyses.
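Covariates can be entered directly in the second-level design, e.g. a two-sample t-test on the smoothed, modulated, warped GM images (all values and paths below are placeholders for your own data; covariate vectors are ordered as the scans, group 1 first, then group 2):

% Two-sample t-test with nuisance covariates in SPM12.
spm('defaults', 'fmri');
spm_jobman('initcfg');

matlabbatch{1}.spm.stats.factorial_design.dir = {'/data/stats'};
matlabbatch{1}.spm.stats.factorial_design.des.t2.scans1 = cellstr(spm_select('FPList', '/data/group1', '^smwc1.*\.nii$'));
matlabbatch{1}.spm.stats.factorial_design.des.t2.scans2 = cellstr(spm_select('FPList', '/data/group2', '^smwc1.*\.nii$'));

age = [31; 28; 44; 39];          % placeholder values, one per scan
sex = [ 0;  1;  0;  1];          % e.g. 0 = male, 1 = female
tbv = [1210; 1095; 1180; 1110];  % total brain volume (ml), from the segmentations
covs  = {age, sex, tbv};
names = {'age', 'sex', 'TBV'};
for k = 1:numel(covs)
    matlabbatch{1}.spm.stats.factorial_design.cov(k).c     = covs{k};
    matlabbatch{1}.spm.stats.factorial_design.cov(k).cname = names{k};
    matlabbatch{1}.spm.stats.factorial_design.cov(k).iCFI  = 1;  % no interaction with group
    matlabbatch{1}.spm.stats.factorial_design.cov(k).iCC   = 1;  % overall mean centring
end
spm_jobman('run', matlabbatch);  % then run Model Estimation and your contrasts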

Hope this helps a little,

Helmut