How robust is the method of doing ratio normalization?
Currently, in SPM99, the method for PET seems to be: divide by the adjusted
mean, where the adjusted mean is the mean over all voxels with intensity
greater than 1/8 of the overall mean.
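As I understand it, that rule can be sketched as follows (a minimal sketch,
not SPM's actual code; the function names are mine):

```python
import numpy as np

def adjusted_mean(volume):
    """Sketch of the SPM99 PET default: the mean over all voxels
    whose intensity exceeds 1/8 of the whole-volume mean."""
    v = np.asarray(volume, dtype=float).ravel()
    threshold = v.mean() / 8.0
    return v[v > threshold].mean()

def ratio_normalize(volume):
    """Ratio normalization: divide every voxel by the adjusted mean."""
    return np.asarray(volume, dtype=float) / adjusted_mean(volume)
```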
A colleague of mine has a different method which restricts consideration to
a subset of (axial) planes and uses a different method for determining which
voxels should be included in the mean.
Of course, the numbers themselves don't matter; two methods give the same
normalization if their *ratio* is fairly constant over many scans. (The
intra- and inter-subject variances will differ, naturally.)
My concern is that, in the 6 scans I looked at, this ratio varied by 10%.
Obviously, this means that SPMs generated following these two methods will
be quite different, at least if one is looking for highly significant
activations.
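One way to quantify this kind of agreement is to compute both globals for
each scan and look at the spread of their ratio. Below is a sketch under my
own assumptions: `global_b` stands in for my colleague's method (the plane
subset and threshold fraction here are arbitrary placeholders, not his
actual choices), and the coefficient of variation of the per-scan ratio is
one possible stability measure:

```python
import numpy as np

def global_a(vol):
    # SPM99-style: mean over voxels above 1/8 of the whole-volume mean
    v = np.asarray(vol, dtype=float).ravel()
    return v[v > v.mean() / 8.0].mean()

def global_b(vol, planes=slice(10, 40), frac=0.5):
    # Hypothetical alternative: restrict to a subset of axial planes and
    # use a different (here, arbitrary) inclusion threshold
    v = np.asarray(vol, dtype=float)[..., planes].ravel()
    return v[v >= frac * v.mean()].mean()

def ratio_spread(scans):
    # If two methods agree up to a scale factor, the per-scan ratio of
    # their globals should be nearly constant; report its coefficient
    # of variation (std/mean) as a stability measure.
    r = np.array([global_a(s) / global_b(s) for s in scans])
    return r.std() / r.mean()
```

A spread near zero would mean the two methods yield essentially the same
SPMs; the ~10% I saw is what worries me.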
Aside from the question of which method is best (I assume a lot of thought
went into the SPM method, though it must by design be very general, and a
method tailored to a particular scanner and slice configuration might be
even more stable), there is the question of how stable any such method can
be, which is almost equivalent to asking how well any adjusted mean can
really reflect gCBF.
Any comments or pointers to the literature would be welcome.
Best wishes,
Stephen Fromm, PhD
NIDCD/NIH