On Wed, 29 Aug 2007 Eric Zarahn wrote:
> The question of mathematical validity of the p-value for a given
> method is whether the probability of falsely rejecting the null
> using that method is equal (or, in practice, close enough) to the
> nominal or desired probability. One issue that will affect p-value
> validity is random sampling. This is the relevant assumption to
> consider when thinking about how placing ROIs for SVCs will affect
> results. If the ROI one chooses is contingent on the data, then the
> random sampling assumption has been violated. You now have a
> distribution that is conditioned on the criteria used to choose the
> ROI, but are instead using the marginal distribution to determine
> nominal p-values.
This mathematical explanation really helps a lot. Many thanks.
Thanks to the other posters too.
The anecdotes about number-plates and Feynman are very intuitive
and memorable. I think that Eric's explanation in terms of
random sampling speaks more directly to the question
of what the objective justification for small-volume correction might be.
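For anyone who wants to see the random-sampling point concretely, here is a toy Monte Carlo sketch (not anything from Eric's post — just an illustration under the assumption of independent standard-normal voxel statistics with no true effect anywhere). It compares the false-positive rate when the tested voxel is fixed in advance against the rate when the "ROI" is chosen by peeking at the data, i.e. testing the peak voxel:

```python
import numpy as np

rng = np.random.default_rng(42)
n_sims, n_voxels = 20_000, 50
z_crit = 1.6449  # one-sided 5% critical value of the standard normal

# Null data: every "voxel" statistic is standard normal; no true effect.
z = rng.standard_normal((n_sims, n_voxels))

# (a) ROI fixed before seeing the data: always test voxel 0.
fpr_fixed = np.mean(z[:, 0] > z_crit)

# (b) ROI chosen contingent on the data: test whichever voxel peaks.
# The max is no longer standard normal, but we (wrongly) keep using
# the marginal critical value as if it were.
fpr_peek = np.mean(z.max(axis=1) > z_crit)

print(f"nominal alpha           : 0.050")
print(f"fixed, independent ROI  : {fpr_fixed:.3f}")  # close to 0.05
print(f"data-contingent ROI     : {fpr_peek:.3f}")   # roughly 1 - 0.95**50
```

The fixed-ROI rate lands near the nominal 5%, while the data-contingent rate is far above it, which is exactly the mismatch between the conditional distribution you actually have and the marginal distribution used to assign the p-value.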
I admit that I still find the whole small-volume procedure
worryingly arbitrary in some regards.
E.g. even if you have a gold-standard pre-existing independent hypothesis,
there is usually no well-defined standard for how to draw your volume.
The example provided by Alexander Hammers of having a well-delineated
brain structure, namely the hippocampus, is a noble exception.
Returning to the original question: does anybody know of any
p-value correction procedures, other than small-volume correction,
that avoid unfairly penalising activations believed to arise
from small midbrain nuclei?
If there are none, then I guess that small activation clusters
are condemned to languish with uncorrected p-values.
Raj