Dear Alain,
> I recently worked with PET images of an amyloid tracer.
>
> I was trying to compare two different methods of uptake quantification, one using a PET template, the other using images in native space.
>
> For various reasons (among others a reviewer's comment) I tried both linear and non-linear registration of the PET images to the template.
> I was quite surprised to see that the intensity values are greatly different between the two methods.
> In particular, the images registered non-linearly show higher values than the linearly registered images (say the difference in values is about 500 on a scale that ranges from 0 to 3000). Why does this happen? I mean, leaving the interpolation method unchanged, why should the type of _spatial_ transformation affect the values?
I had a look at the images you sent me, and I don't see what you describe. For any anatomical location that I can identify in both the flirted and fnirted images, I find very similar values. Could you please point me to some specific voxel/region where you see this?
However, the fnirted images look horrible. The T1 configuration file you have used will perform a non-linear registration with a 10 mm warp resolution and a relatively small amount of regularisation. It also uses the full resolution of your images (at the final stages). This doesn't work for noisy PET images; you would need to use a lower warp resolution and more regularisation. You also need to use a different intensity model (global_linear).
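If it helps, something along the following lines might behave better. This is a rough sketch only: the file names are placeholders, and the --warpres/--subsamp/--lambda values are illustrative numbers to be tuned to your data, not a validated PET protocol (--warpres, --subsamp, --lambda and --intmod are standard fnirt options).

    import subprocess

    # fnirt with a coarser warp resolution, heavier regularisation and a
    # global linear intensity model, stopping short of full resolution.
    subprocess.run([
        "fnirt",
        "--in=pet_native.nii.gz",               # placeholder: your PET image
        "--ref=pet_template.nii.gz",            # placeholder: your PET template
        "--aff=pet2template.mat",               # affine from a prior flirt run
        "--warpres=20,20,20",                   # coarser than the 10 mm T1 default
        "--subsamp=4,4,2,2",                    # never go to full resolution
        "--lambda=600,300,150,75",              # stronger regularisation, one value per level
        "--intmod=global_linear",               # global linear intensity model
        "--iout=pet_fnirted.nii.gz",            # resampled output image
        "--cout=pet2template_warpcoef.nii.gz",  # warp coefficients
    ], check=True)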
> And a semi-related question: could the number of degrees of freedom in a linear registration affect the values of the final image, once again leaving the interpolation method unchanged?
It can under specific circumstances. Say, for example, that you have a template with roughly similar intensities in grey and white matter, and that you then attempt to register to it an image with much lower intensity in white matter than in grey matter.
If you use an affine or low-resolution non-linear registration, there isn't all that much the algorithm can do to reconcile those intensity differences; most likely it will end up fitting the brain surfaces, and the intensities will be largely preserved.
If, on the other hand, you use a high-resolution registration method, it will be able to squash the white matter together (thereby getting rid of the low intensities) and expand the grey matter. In that case the average intensity will be higher in the registered image.
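In case it helps to see the mechanism, here is a toy 1-D illustration (plain numpy, nothing FSL-specific, and all the numbers are made up): a two-tissue "image" is resampled through a warp that expands the high-intensity half and squashes the low-intensity half, and the mean intensity goes up even though the interpolation is ordinary linear interpolation throughout.

    import numpy as np

    n = 1000
    x = np.linspace(0.0, 1.0, n)
    # Toy "image": grey matter (high uptake) on [0, 0.5), white matter (low) on [0.5, 1]
    img = np.where(x < 0.5, 100.0, 50.0)

    # Warp: 70% of the output coordinates pull from the grey half, 30% from the
    # white half, i.e. grey matter is expanded and white matter is squashed.
    def phi(y):
        return np.where(y < 0.7, y * (0.5 / 0.7), 0.5 + (y - 0.7) * (0.5 / 0.3))

    identity = np.interp(x, x, img)     # "affine-like" resampling, same geometry
    warped = np.interp(phi(x), x, img)  # resampling through the compressive warp

    print(f"mean original:       {img.mean():.1f}")       # ~75
    print(f"mean after identity: {identity.mean():.1f}")  # ~75
    print(f"mean after warp:     {warped.mean():.1f}")    # ~85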
Jesper
>
> Thank you in advance for any answer or tip
>
> Alain