On Tue, 20 Jun 2000, John Ashburner wrote:
> The uncorrected t statistic images should be the same. The differences
> you see should relate to the effects of correcting for multiple dependent
> comparisons. With a bigger bounding box, you do more tests so each test
> needs to pass a higher threshold before being considered significant.
> A bigger bounding box may also give you a slightly lower grey matter
> threshold, therefore including even more voxels.
I thought that the tests are performed only on those voxels that exceed
the grey matter threshold. It is true that a larger bounding box results
in a slightly lower grey-matter threshold.
Two questions:
1) This may be a really idiotic question, but how does one view the
uncorrected t-statistic images? I'm assuming that viewing the t-statistic
images for a given contrast using the default values ("corrected height
threshold = no", "threshold {T or p value} = 0.001", and "extent threshold
{voxels} = 0") still applies a correction based on the smoothness
estimates and consequently the number of resels.
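For what it's worth, my understanding of an *uncorrected* height
threshold is that it depends only on the chosen p-value and the error
degrees of freedom, never on the smoothness or resel count. A minimal
sketch (using a Gaussian approximation to the t-distribution, which is
reasonable for large error df; pure Python, not SPM code):

```python
from statistics import NormalDist

def uncorrected_height_threshold(p):
    """Uncorrected height threshold under a Gaussian approximation:
    a function of the chosen p-value alone -- smoothness and resel
    counts do not enter."""
    return NormalDist().inv_cdf(1.0 - p)

print(round(uncorrected_height_threshold(0.001), 2))  # -> 3.09
```

If the thresholded maps still differ between bounding boxes at such an
uncorrected threshold, the difference would have to come from the data
and model fit rather than from the multiple-comparisons correction.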
2) The size of the bounding box strongly influences the smoothness
estimates, which I assume are then used to generate the significance maps.
Normalization to the SPM99 Default bounding box results in the following
values:
VOL.
S: 21242
R: [2 43.5423 508.3141 1.2528e+03]
FWHM: [2.6842 2.5119 2.1405]
whereas normalization to a bounding box which is the same size as the
original volume results in
VOL.
S: 23350
R: [1 22.7378 110.1009 125.8692]
FWHM: [5.9930 5.6235 4.7065]
In both cases, the voxel size is 3.75 x 3.75 x 5
The larger bounding box yields a smoothness estimate roughly twice that
of the smaller bounding box, and an order of magnitude fewer resels. I
assume this is why the t-statistic images differ so much.
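As a sanity check on the relation between these numbers: to first order,
the resel count is just the search volume measured in units of the
estimated FWHM. A rough sketch (highest-order term only; SPM's reported
R values also include lower-dimensional edge and face terms, which is
presumably why they come out somewhat smaller than this estimate):

```python
import math

def highest_order_resels(n_voxels, fwhm_voxels):
    """Approximate 3D resel count: number of voxels divided by the
    volume of one resolution element (product of per-axis FWHMs,
    both in voxel units). Ignores SPM's lower-dimensional terms."""
    return n_voxels / math.prod(fwhm_voxels)

r_default = highest_order_resels(21242, [2.6842, 2.5119, 2.1405])  # ~1472
r_native  = highest_order_resels(23350, [5.9930, 5.6235, 4.7065])  # ~147
print(round(r_default), round(r_native))
```

The order-of-magnitude drop in resels follows directly from the roughly
doubled FWHM estimate, since the FWHM enters as a product over three
dimensions.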
It is somewhat puzzling that the smoothness estimate would depend so
strongly on the bounding box, unless the smoothness estimate is derived
from a particular spatial-frequency bin, which would correspond to a
lower spatial frequency for a larger bounding box.
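My understanding (possibly wrong) is that SPM estimates smoothness not
from a frequency bin but from the variance of the spatial derivatives of
the standardized residual images: for a Gaussian field,
FWHM = sqrt(4 ln 2 / lambda), where lambda is the derivative variance of
the unit-variance field. A 1-D toy illustration in pure Python, with
synthetic data standing in for residuals:

```python
import math, random

def estimate_fwhm(field):
    """Derivative-based FWHM estimate (in voxels) for a 1-D field:
    standardize to unit variance, take first differences, and apply
    FWHM = sqrt(4 ln 2 / var(derivative))."""
    n = len(field)
    mean = sum(field) / n
    var = sum((x - mean) ** 2 for x in field) / n
    z = [(x - mean) / math.sqrt(var) for x in field]
    d = [z[i + 1] - z[i] for i in range(n - 1)]
    lam = sum(x * x for x in d) / len(d)
    return math.sqrt(4 * math.log(2) / lam)

random.seed(0)
noise = [random.gauss(0, 1) for _ in range(20000)]
k = 5  # simple moving-average smoothing, width 5 voxels
smooth = [sum(noise[i:i + k]) / k for i in range(len(noise) - k + 1)]
print(estimate_fwhm(noise) < estimate_fwhm(smooth))  # smoother -> larger FWHM
```

If this is the mechanism, then anything in the normalization step that
changes the resampled residual fields (such as interpolating into a
different bounding box) could plausibly change the estimated FWHM.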
Thanks.
Petr
> Good luck,
> -John
>
> | We are encountering the somewhat puzzling and annoying problem that the
> | bounding box size we specify for our normalized images determines the
> | extent of significant activations we observe.
> |
> | We have taken the EPI data of a single subject and normalized it using 4
> | different bounding boxes:
> |
> | The SPM99 default: -78:78 -112:76 -50:85
> | The SPM99 template: -90:91 -126:91 -72:109
> | SPM99 option 4: [-95:95 -112:76 -50:95]
> | A bounding box which returns ~the original volume (64x64x27) of our data:
> | [-120:119 -139:100 -64:77]
> |
> | We were careful to make sure that our bounding box did not crop the brain
> | any more than any of the SPM bounding boxes.
> |
> | We made the voxel size of the resulting normalized images the same as the
> | voxel size of the original data: 3.75 x 3.75 x 5.
> |
> | We apply identical smoothing kernels ([6 6 8]) to each of the normalized
> | images, run it through our design matrix, etc., and plot the same
> | contrasts. The t-images differ **substantially** as a function of the
> | bounding box. When we use a bounding box which is the same as our
> | original data, we end up with one (1) significant voxel. When we use the
> | smallest of the bounding boxes (SPM99 default) we end up with 1461
> | significant voxels. The other bounding boxes yield intermediate numbers
> | of significant voxels as a function of bounding box size.
> |
> | Given the same voxel size, same normalization parameters, etc., it
> | seems the volume of the brain (suprathreshold voxels) should be the
> | same independently of bounding box size. Why then does the bounding box
> | size have such a profound effect on the statistics?
> |
> | This is a particular problem for us for the following two reasons:
> |
> | 1) In the interest of performing a statistical analysis over a group of
> | subjects, we normalize each individual's EPI images so that they can be
> | compared across subjects. However, we are also interested in looking at
> | the data of individual subjects on their individual high-res T1 images
> | because we have reason to expect significant topographical differences in
> | activation patterns. However, the bounding box problem prevents us from
> | doing this because the unnormalized image size is 64x64x27 whereas the
> | normalized (with bounding box 'Template') is 49x58x37 and the resulting
> | significance maps cannot be compared.
> |
> | Should we crop our unnormalized images so that they have approximately the
> | same bounding box as the normalized images?
> |
> | 2) As mentioned above, it is disconcerting that the seemingly innocuous
> | and irrelevant bounding box parameter should be one of the strongest
> | determinants of finding an effect in the data.
> |
> | Thanks in advance for clarification!
> |
> | Petr Janata
> | Barbara Tillmann
> |
> | --------------------------------------------
> | Petr Janata, Ph.D.
> | Research Associate
> | Dept. of Psychological and Brain Sciences
> | Dartmouth College
> | 6207 Moore Hall
> | Hanover NH, 03755
> |
> | phone: 603 646 0062
> |
>