Dear Jesper,
>I am trying to get my head around the default choice for gamma when
>calculating PPMs.
>
>The general idea is that we want to estimate the conditional (posterior)
>distribution for some combination of parameter estimates at each voxel.
>For "display purposes" we would subsequently like to threshold our PPM
>such that we see only voxels for which p(b>gamma)>p0, i.e. voxels where
>the likelihood that the parameter (b) is larger than gamma is greater
>than some p0 (typically 0.95).
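[A minimal sketch of this thresholding rule, assuming (hypothetically) a Gaussian posterior for the contrast with mean m and standard deviation s; the voxel values and the function name are illustrative, not SPM code:]

```python
from math import erfc, sqrt

def ppm_survives(m, s, gamma=0.0, p0=0.95):
    """True if P(b > gamma) > p0 under a Gaussian posterior N(m, s**2)."""
    p = 0.5 * erfc((gamma - m) / (s * sqrt(2)))
    return p > p0

# hypothetical voxel: posterior mean 0.8% signal change, posterior sd 0.3%
ppm_survives(0.8, 0.3, gamma=0.0, p0=0.95)
```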
>
>For classical inference we typically use as null-hypothesis b=0, and
>attempt to discard that. From that perspective it would seem to make
>sense to use gamma=0 and consequently threshold our PPM at p(b>0)>p0
>(which would also give a nice conceptual link to FDR thresholded at
>E(FDR)<(1-p0) ).
>
>When reading e.g. the HBF book it is pointed out that gamma gives the
>option of testing if b is greater than some value that has "some
>neurophysiological meaning". I guess no one really knows what that value
>would be, and SPM gives the default suggestion to test if b is greater
>than one standard deviation of the prior distribution of b.
>
>The (empirical) prior distribution is calculated from the observed
>distribution across voxels. As far as I understand this prior
>distribution would therefore depend on the actual (unknown "true")
>distribution of that parameter (or linear combination of parameters).
>This is in accordance with what I have observed empirically where in a
>main-effect contrast (where I expect plenty of activations) I (well, SPM
>actually) observe an std of the prior distribution that is an order of
>magnitude greater than for an interaction contrast in the same data set.
>
>This has the slightly counterintuitive consequence that what I consider
>"neurophysiologically meaningful" is quite different in one contrast
>compared to another.
>
>Conversely, if I (as I would think quite reasonably) use a gamma of 0
>(i.e. look at voxels where the 95% confidence interval is completely
>above zero) I do observe quite widespread "activations". Some in
>slightly surprising locations (e.g. ). I am guessing
>that observations of this kind may have motivated the default (1 std of
>prior) in SPM?
No, not really. The motivation for using a non-zero threshold was
to circumvent the fallacy of classical inference; i.e. that one can always
show something is not zero with sufficient observations (because the
probability of something being exactly zero is, itself, zero).
Although one may be 95% confident that the centrum semiovale shows
a (biologically meaningless) activation of greater than zero, one may
also be 95% confident that it is less than 0.001%. I think the problem here
is that the empirical prior and 'meaningful' are not the same thing. The
meaning has to be informed by an understanding of the system you are
dealing with.
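[This fallacy can be made concrete with a hedged numerical sketch: assume (hypothetically) a tiny true effect and a posterior whose standard deviation shrinks like 1/sqrt(n) with the number of observations. P(b > 0) climbs toward 1 while P(b > a meaningful gamma) stays negligible. The effect size, noise level, and gamma = 0.1% below are illustrative values only:]

```python
from math import erfc, sqrt

# hypothetical tiny effect (0.001% signal change), per-scan noise sd 0.5%;
# the posterior sd shrinks roughly as 1/sqrt(n) with n observations
effect, noise = 0.001, 0.5
for n in (100, 10_000, 4_000_000):
    se = noise / sqrt(n)
    # P(b > gamma) under a Gaussian posterior N(effect, se**2)
    p_zero = 0.5 * erfc((0.0 - effect) / (se * sqrt(2)))
    p_meaningful = 0.5 * erfc((0.1 - effect) / (se * sqrt(2)))
    print(f"n={n}: P(b>0)={p_zero:.3f}, P(b>0.1%)={p_meaningful:.3g}")
```

With enough scans one is arbitrarily confident the effect exceeds zero, yet never confident it exceeds 0.1%.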
I suspect the aim of bringing conditional inference into line with classical
approaches (FWE and FDR control) is misguided. The motivation for PPMs (and
related confidence-interval arguments in classical inference) is that our
inferences should be endowed with a quantitative meaning. As you point out,
this will depend on the contrast. For example, if we assume a robust
physiological activation is about 1%, we might only be interested in
activations of greater than 10% of this, namely 0.1%. If our interaction is
detecting a change in the activation, then we might accept changes of greater
than 10% of the minimal activation, i.e. 0.01%, and so on. In short, gamma
affords latitude that enables quantitatively informed inferences. The
downside is that you have to supply the information!
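[The arithmetic of that rule of thumb, spelled out; the 1% figure and the 10% fractions are the illustrative values from the text, not SPM defaults:]

```python
# hedged sketch of contrast-specific gammas (illustrative values only)
robust_activation = 1.0                    # % signal change, assumed
gamma_main = 0.10 * robust_activation      # gamma for the activation itself
gamma_interaction = 0.10 * gamma_main      # gamma for a change in activation
print(gamma_main, gamma_interaction)
```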
>It would be very interesting to hear people's views on this. In
>particular it would be interesting to know if people are beginning to
>have enough empirical experience of using PPMs such that there is some
>kind of consensus about if/when/how they should be used.
Absolutely.
All the very best - Karl