On 5/11/07, Steve Smith <[log in to unmask]> wrote:
> The 4D dataset is scaled to be 10000 on average, but there is
> variability around that (otherwise the timeseries analysis would be
> slightly pointless.....)
>
> So for example voxels at the edge of the brain can have much lower
> values (unless you make the thresholding in the preprocessing a lot
> more aggressive) and hence you can get very high % signal change -
> this is an obvious limitation and danger of quantifying effects via %
> signal change.
Some other possible options are:
i) instead of dividing by baseline voxelwise, compute the mean
baseline value across the whole brain (which would be 10000 if using
FEAT) and use that as the denominator in % signal change calculations.
This is more or less the approach used by SPM, I believe, and will give
more reasonable values in areas of low signal. The problem is that it
is not really a correct measure of voxelwise % signal change, so I
tend not to like this option.
ii) use the mean baseline value for a small region of voxels
surrounding the voxel of interest in calculations of % signal change.
This is a compromise between the voxelwise approach and i) above.
Because most low-signal voxels lie in regions with quite high signal
gradients, this approach will tend to give a % signal change
denominator somewhat higher than that of some of the individual
voxels, while still representing the local baseline signal to a
certain degree.
iii) use z-scores instead of % signal change. This changes the
interpretation somewhat, since z-scores are a measure of contrast to
noise, rather than contrast to baseline signal. But z-score values
will be more consistent across brain regions, and in my experience
they give statistical results highly similar to % signal change in
most cases.
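For concreteness, the three options might be sketched roughly as below
(a hypothetical numpy sketch, not FSL code; the array names, the box
radius in option ii, and the cope/varcope naming in option iii are my
own assumptions):

```python
# Hypothetical sketch of options (i)-(iii); not actual FSL code.
import numpy as np
from itertools import product

def psc_global(beta, baseline, mask):
    # Option (i): a single whole-brain mean baseline as the denominator
    # (this mean would be ~10000 after FEAT grand-mean scaling).
    return 100.0 * beta / baseline[mask].mean()

def psc_local(beta, baseline, radius=1):
    # Option (ii): mean baseline over a (2*radius+1)^ndim box around
    # each voxel (the box size here is an arbitrary assumption).
    pad = np.pad(baseline.astype(float), radius, mode="edge")
    shifted = [pad[tuple(slice(radius + d, radius + d + s)
                         for d, s in zip(off, baseline.shape))]
               for off in product(range(-radius, radius + 1),
                                  repeat=baseline.ndim)]
    return 100.0 * beta / np.mean(shifted, axis=0)

def zstat(cope, varcope):
    # Option (iii): contrast-to-noise -- the contrast estimate divided
    # by its standard error (using FEAT's cope/varcope naming).
    return cope / np.sqrt(varcope)
```

With a flat baseline of 10000 and an effect of 100, options (i) and
(ii) both give 1% signal change; real data will of course differ
between the two wherever the baseline varies locally.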
Tom
--
Tom Johnstone
Waisman Laboratory for Brain Imaging and Behavior
Waisman Center
University of Wisconsin-Madison
Tel. +1 608 263 2743 Fax. +1 608 265 8737
[log in to unmask]
http://brainimaging.waisman.wisc.edu/~tjohnstone