If you use the modulation, then it should be based on the same deformation
field that you used to write your spatially normalised images.  E.g., if you
used the _sn3d.mat based on the T1 image to write the spatially
normalised images, then you would use the same one to determine how much
local contraction/expansion has occurred.
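To make the idea concrete, here is a minimal NumPy sketch of what modulation does: the Jacobian determinant of the deformation measures local expansion/contraction, and the normalised segment is multiplied by it so that total tissue volume is preserved. This is an illustration of the principle only, not the actual SPM code; the function names are my own.

```python
import numpy as np

def jacobian_determinant(deformation):
    """Per-voxel Jacobian determinant of a deformation field.

    `deformation` has shape (3, X, Y, Z); deformation[i] holds the
    i-th coordinate that each voxel is mapped to.
    """
    # grads[i][j] approximates d(y_i)/d(x_j) via finite differences.
    grads = [np.gradient(deformation[i]) for i in range(3)]
    J = np.empty(deformation.shape[1:] + (3, 3))
    for i in range(3):
        for j in range(3):
            J[..., i, j] = grads[i][j]
    return np.linalg.det(J)

def modulate(segment, deformation):
    # Scale intensities by local volume change so that the total
    # amount of tissue is preserved after warping.
    return segment * jacobian_determinant(deformation)
```

For an identity deformation the determinant is 1 everywhere and the segment is unchanged; a uniform doubling of coordinates gives a determinant of 8, i.e. an eight-fold local volume expansion.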

What processing steps have you used?  It is best to coregister the original
image pair together first.  Then decide which image you want to use to
estimate the spatial normalisation parameters from, and then use the same
parameters to write spatially normalised versions of both images.  In an
ideal world, the spatial normalisation would be able to simultaneously
estimate the warps by matching the T1 image to a T1 template, and the
T2 image to a T2 template.  This is something that I may get around to
implementing at some point.
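The order of operations above can be sketched as follows. The function names here are illustrative stand-ins, not actual SPM calls; the point is only that one set of warps is estimated and then applied to both images.

```python
import numpy as np

# Toy stand-ins for the real processing steps.
def coregister(moving, reference):
    # Rigid-body alignment of `moving` to `reference` (no-op stub here).
    return moving

def estimate_normalisation(image, template):
    # Estimate warps by matching `image` to `template` (stub).
    return {"template": template}

def write_normalised(image, params):
    # Resample `image` into template space using `params` (stub).
    return image

def normalise_pair(t1, t2, template):
    """Coregister the pair first, estimate the warps once (from the
    T1 here), then write BOTH images with the same parameters."""
    t2_in_t1 = coregister(moving=t2, reference=t1)
    params = estimate_normalisation(t1, template)
    return write_normalised(t1, params), write_normalised(t2_in_t1, params)
```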

Best regards,

| I am using SPM for VBM analysis and have segmented a combination of
| T1 and T2 images within the segmentation tool. Now, as I understand it,
| when I run your spm_preserve_quantity m-file on the segmented images I
| can only take one of the (in my case) two *_sn3d.mat files (T1 and T2)
| into consideration when I want to modulate the segmented images. Is
| there any way to use the *_sn3d.mat files from both the T1 and T2
| normalisation in this modulation step on the segmented images?