Dear experts,
I have some images of human brains which I process to obtain two (rather noisy) output images: one representing the probability that each voxel is gray matter, and another representing the probability that each voxel is white matter. (These aren't real probabilities, but let's assume they correlate strongly enough with something you could reasonably put as the argument of a Gaussian.)
I was thinking of feeding these two images as two channels into the "New Segment" algorithm in SPM12, both to get an improved segmentation compared to the one I can naively perform (which isn't bad to begin with!) and to transform these maps into MNI space. As tissue classes in the Segment algorithm I used three tissues: TPM,1; TPM,2; and the sum of the four remaining TPM images as "everything else".
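For reference, the "everything else" class above is just a voxel-wise sum of the remaining tissue priors. A minimal numpy sketch, using synthetic arrays in place of the real TPM,1 .. TPM,6 volumes (which would normally be read from SPM's TPM.nii with a tool such as nibabel):

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (4, 4, 3)

# Synthetic stand-ins for the six SPM tissue priors (TPM,1 .. TPM,6),
# normalised so the six classes sum to 1 at every voxel, as real TPMs do.
raw = rng.random((6,) + shape)
tpm = raw / raw.sum(axis=0)

gm, wm = tpm[0], tpm[1]       # TPM,1 (gray) and TPM,2 (white), kept as-is
other = tpm[2:].sum(axis=0)   # TPM,3 .. TPM,6 merged into "everything else"

# The three resulting classes still form a valid prior: they sum to 1.
assert np.allclose(gm + wm + other, 1.0)
```

This keeps the priors properly normalised, so the three-class setup is still a valid probability map at every voxel.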
In my first trial, my maps had very large values outside of GM and WM and values close to zero in the tissues of interest. That gave rise to ridiculous results (e.g. nothing in the GM map, everything in the WM map, or other huge errors). I then multiplied these images by -1 and shifted them, so that everything outside is now zero and the values become larger (and positive) in the tissues of interest. And... it works kind of OK.
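The flip-and-shift described above is a single linear transform; a toy numpy sketch (with made-up intensity values standing in for the real maps):

```python
import numpy as np

# Toy "probability" map: large values outside the tissue of interest,
# near-zero values inside it (the problematic first-trial situation).
raw = np.array([5.0, 4.8, 0.2, 0.1, 5.0])

# Multiply by -1 and shift by the maximum, so the background lands at ~0
# and the tissue of interest gets large positive values.
flipped = raw.max() - raw  # same as (-1 * raw) + raw.max()

assert flipped.min() == 0.0   # background is now zero
assert flipped.argmax() == 3  # the tissue voxel now has the largest value
```

Mathematically nothing is lost here (it is an invertible affine map), which is exactly why the change in the GMM's behaviour is surprising.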
Does anyone have any idea why that is? Why would the GMM find it harder to model a distribution than a linear transformation of the same one?
Thank you very much,
Luca
Respect the environment: if it's not necessary, don't print this mail.