Dear Jorge,
...
> I also think that there should be more
> reasons for fitting the same model at each and every voxel in the
> analysis.
Yes, and the main reason is that figuring out and fine-tuning a model
(or more appropriately, a class of models) is not a trivial task. Apart
from the issue of software production and testing, the performance of
your model depends on how well the assumptions it makes survive the
impact of the real world, and how efficient it is in picking up
deviations from the null. A class of models that works is a major
achievement.
> For example, I think that in any other case it should be
> difficult to apply appropriate multiple comparison methods like
> Random Field Theory or permutation testing, since the number of
> degrees of freedom, and even more parameters, of the distribution of
> the test statistic could be different at each voxel.
The issue of multiple comparisons has nothing to do with data
distribution. The basic formalization of the field only specifies
abstract properties of the test statistic used on the family, not its
distribution. Distributional issues only become relevant if you adopt
a parametric model, and this is true irrespective of the issue of
multiple comparisons.
I do not see any reason to think that permutation methods would not be
applicable to the case you mention. Permutation approaches assume
exchangeability of the observations that are permuted, and if this
assumption is satisfied by the data, then they are usually applicable.
Differences in the distribution at each voxel may affect the
sensitivity of your test, but the control of false positives remains
valid nonetheless.
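To make the point concrete, here is a minimal sketch (my own illustration, not anything from your analysis) of a two-sample max-statistic permutation test in numpy. The noise distribution is deliberately different from voxel to voxel, yet familywise error control rests only on the exchangeability of subjects under the null:

```python
import numpy as np

def maxstat_perm_test(group_a, group_b, n_perm=999, rng=None):
    """Two-sample permutation test with max-statistic FWER correction.

    Exchangeability of subjects under the null is the only assumption;
    the per-voxel noise distribution may differ freely across voxels.
    """
    rng = np.random.default_rng(rng)
    data = np.vstack([group_a, group_b])       # (n_a + n_b, n_vox)
    n_a = group_a.shape[0]
    observed = group_a.mean(0) - group_b.mean(0)

    max_null = np.empty(n_perm)
    for i in range(n_perm):
        perm = rng.permutation(data.shape[0])  # relabel subjects at random
        d = data[perm[:n_a]].mean(0) - data[perm[n_a:]].mean(0)
        max_null[i] = np.abs(d).max()          # max statistic over all voxels

    # FWER-corrected p-value per voxel: how often the null max beats it
    return (1 + (max_null[:, None] >= np.abs(observed)).sum(0)) / (n_perm + 1)

# Null data with heterogeneous per-voxel distributions:
# even voxels Gaussian, odd voxels heavy-tailed Student t (df=3).
rng = np.random.default_rng(0)
n_vox = 40
a = np.column_stack([rng.normal(size=10) if v % 2 == 0
                     else rng.standard_t(3, size=10) for v in range(n_vox)])
b = np.column_stack([rng.normal(size=10) if v % 2 == 0
                     else rng.standard_t(3, size=10) for v in range(n_vox)])
p = maxstat_perm_test(a, b, n_perm=999, rng=1)
```

Because the null distribution of the maximum is built from the data themselves, no parametric form needs to hold at any voxel.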
>
> What about sensitivity, that is, having
> different parts of the SPM with different sensitivities?
You'll have to live with that. If the variance in your data (or other
aspects of the distribution) is not uniform across the volume, then
your sensitivity varies. This is not peculiar to voxelwise-varying models.
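A toy simulation (assumed parameters, purely illustrative) of one-sample t-tests at two hypothetical voxels with the same true effect but different noise variance shows the sensitivity difference directly:

```python
import numpy as np

rng = np.random.default_rng(42)
n, n_sim, effect = 20, 2000, 0.8
powers = []
# Same true effect at both voxels; the second has twice the noise SD.
for sd in (1.0, 2.0):
    x = effect + sd * rng.normal(size=(n_sim, n))  # n_sim simulated samples
    t = x.mean(axis=1) / (x.std(axis=1, ddof=1) / np.sqrt(n))
    powers.append(np.mean(t > 1.729))  # one-sided t critical value, df=19, alpha=0.05
print(powers)  # detection rate drops sharply at the noisier voxel
```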
Best wishes,
Roberto Viviani
University of Ulm