Dear Luca,
Relying on T values is actually quite common, e.g. see Della Rosa et al. (2014, https://dx.doi.org/10.1007/s12021-014-9235-4) in the context of two different PET templates. Ideally you would transform the differently preprocessed data into a common space, so that you can not only conclude that some statistic such as a peak T is larger for one preprocessing pipeline, but also that it is significantly larger than for the other pipeline.
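To illustrate the "significantly larger" part: once the pipelines' outputs live in a common space, per-subject statistics can be compared directly, e.g. with a paired sign-flip permutation test. A minimal sketch, with made-up peak T values (the numbers and the choice of test are assumptions for illustration, not from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-subject peak T values from two preprocessing
# pipelines, resampled into a common space (made-up numbers).
peak_t_a = np.array([4.1, 3.8, 5.0, 4.6, 3.9, 4.4, 5.2, 4.0])
peak_t_b = np.array([3.7, 3.9, 4.5, 4.2, 3.6, 4.1, 4.8, 3.8])

def paired_permutation_test(a, b, n_perm=10000):
    """Two-sided sign-flip permutation test on the paired differences."""
    diff = a - b
    observed = diff.mean()
    # Randomly flip the sign of each subject's difference to build the null.
    signs = rng.choice([-1.0, 1.0], size=(n_perm, diff.size))
    null = (signs * diff).mean(axis=1)
    return float((np.abs(null) >= np.abs(observed)).mean())

p = paired_permutation_test(peak_t_a, peak_t_b)
print(f"mean difference: {(peak_t_a - peak_t_b).mean():.3f}, p = {p:.4f}")
```

A parametric paired t-test would do the same job; the permutation version just avoids distributional assumptions about the peak statistics.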
> This, however, isn’t really looking at the spatial aspect of it, only whether it affects the following analysis.
Correct. The ground truth is usually unknown, and while a better registration should reduce anatomical mismatch, it remains unclear how much of this transfers to functionally defined regions or to the corresponding statistics.
To evaluate the spatial accuracy of different normalisation strategies, it would be reasonable to work with prominent landmarks defined manually in the raw images, e.g. as a coordinate (or as a region defined by a set of voxels), since these make it easier to infer something about the overall registration quality.
In one step you could normalise the T1 images based on, e.g., the segmentation routine; in another, you could go with "Old normalise". Then determine the distance between the landmarks in the normalised images of different subjects (or the overlap of the normalised regions across subjects); ideally the distance is 0 (or the overlap is perfect).
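The landmark-distance summary could be sketched like this, with made-up normalised-space coordinates (one row per subject, same anatomical landmark); a smaller mean distance to the group centroid would indicate better spatial agreement:

```python
import numpy as np

# Hypothetical coordinates (in mm, normalised space) of the same
# manually defined landmark in several subjects, one row per subject.
# A perfect normalisation would put them all at the same coordinate.
landmarks = np.array([
    [10.2, -52.1, 8.3],
    [ 9.8, -51.6, 8.9],
    [10.5, -52.8, 7.7],
    [ 9.9, -51.9, 8.1],
])

# Distance of each subject's landmark to the group mean position;
# the mean of these is a simple spatial-accuracy summary (0 = ideal).
centre = landmarks.mean(axis=0)
dists = np.linalg.norm(landmarks - centre, axis=1)
print("mean distance to centroid (mm):", dists.mean())
```

Running the same calculation on the landmarks from each pipeline gives one number per pipeline to compare.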
In principle you could also compare direct T1 normalisation (via "Segment") with indirect T1 normalisation, i.e. applying the parameters obtained from "Old normalise" of the coregistered PET image normalised onto your PET template. This no longer contrasts just "Segment" and "Old normalise": it additionally involves a coregistration step in one case, and the normalisation parameters are derived from different images, with the PET normalisation possibly being less accurate due to the larger voxel size, and the templates / tissue priors being of different quality and/or derived from a different set of subjects (the latter would also hold when staying within the T1 domain and working with the default files) ... Still, it might serve the purpose of contrasting the "standard" approach (whatever that is) with another.
One could also think of applying deformations of known extent to the images and then forwarding these altered images through the pipelines to see to what extent the known mismatch is reduced, though this might be difficult to implement.
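The idea can be illustrated on a toy 1-D signal: apply a known shift, let a stand-in "registration" (here a simple cross-correlation search, not any SPM routine) try to recover it, and report the residual. Everything here is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D "image": a smooth bump plus a little noise.
x = np.arange(200)
image = np.exp(-((x - 80) ** 2) / 50.0) + 0.01 * rng.standard_normal(200)

applied_shift = 7                      # known deformation (in voxels)
shifted = np.roll(image, applied_shift)

# Stand-in "registration": recover the shift via cross-correlation search.
lags = np.arange(-20, 21)
corr = np.array([np.dot(np.roll(shifted, -s), image) for s in lags])
recovered = int(lags[np.argmax(corr)])

residual = applied_shift - recovered   # 0 means the mismatch is fully removed
print("applied:", applied_shift, "recovered:", recovered, "residual:", residual)
```

With real 3-D nonlinear deformations the bookkeeping (composing the known deformation with the estimated one) is what makes this hard in practice, as noted above.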
> I could take the segmented gray matter class from the MRI normalization
Working with GM images for that purpose would probably have to be considered a bias towards the segmentation routine. Aside from that, you might have very good overlap across the GM files, and yet the overlapping GM voxels could all be misclassifications.
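For completeness, the across-subject overlap itself is easy to quantify, e.g. with a Dice coefficient on binarised GM masks; just keep in mind the caveat above that a high Dice can reflect consistent misclassification rather than accurate segmentation. A minimal sketch on a toy grid (the masks are made up):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice coefficient between two binary masks (1.0 = perfect overlap)."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

# Two hypothetical normalised GM masks on a tiny 2-D grid.
a = np.zeros((10, 10), dtype=bool); a[2:7, 2:7] = True   # 25 voxels
b = np.zeros((10, 10), dtype=bool); b[3:8, 3:8] = True   # 25 voxels

print("Dice:", dice(a, b))   # 2 * 16 / (25 + 25) = 0.64
```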
Best regards
Helmut