Hi Paige and Joe,
This is an interesting question and predictably, the answer is:
"It depends"
I think Joe's answer was pretty clear, but perhaps overly harsh on the
"pure Bayesian" perspective.
It depends on what you are trying to do. Multiple comparisons correction
relies on the concept of thresholding/classification. So, as Joe says, if
you want to classify voxels as "active", then, Bayesian or not, if you
have a threshold of p=0.05, you'll get 5% wrong.
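To make that 5% concrete, here's a quick toy simulation (not FSL code, just a sketch with invented data): threshold truly null voxels at p=0.05 and about 5% of them get flagged as "active".

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulate 100,000 truly null voxels (no real signal anywhere)
z = rng.standard_normal(100_000)      # null z-statistics
p = 1 - stats.norm.cdf(z)             # one-sided p-values

# Fraction of null voxels wrongly called "active" at p < 0.05
false_positive_rate = np.mean(p < 0.05)
print(false_positive_rate)            # close to 0.05
```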
Bayesian tests can mimic null-hypothesis tests. That is, we can say that
an active voxel has a signal change which is greater than 0. Then we can
test the posterior distribution on the parameter of interest, P(\beta|Y),
against zero; i.e. we can compute P(\beta > 0|Y). If Bayesians play with
their inference, they can make this test equivalent to a null-hypothesis
test, and therefore threshold/classify/perform multiple comparisons
corrections exactly as would be expected.
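As a toy illustration of that equivalence (invented numbers; assuming a Gaussian posterior, which is what you get from a Gaussian likelihood with a flat prior), thresholding P(\beta > 0|Y) at 0.95 is exactly a one-sided p < 0.05 test:

```python
from scipy import stats

# Hypothetical posterior at one voxel: with a flat prior and Gaussian
# noise, P(beta | Y) is Normal(beta_hat, se^2)
beta_hat, se = 0.8, 0.3               # made-up estimate and standard error

# Posterior probability that the effect is positive, P(beta > 0 | Y)
p_beta_pos = 1 - stats.norm.cdf(0, loc=beta_hat, scale=se)

# One-sided classical p-value for the same voxel
one_sided_p = 1 - stats.norm.cdf(beta_hat / se)

# Under the flat prior these are two views of the same number:
# P(beta > 0 | Y) = 1 - p, so P >= 0.95 is the same rule as p < 0.05
print(p_beta_pos, 1 - one_sided_p)
```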
In this case, the advantage of using Bayesian statistics is that we can
use Bayesian techniques to infer on models for which we cannot write down
the null distribution (such as the hierarchical linear model in FLAME).
This is what we do by default in FLAME and, therefore, after FLAME and
multiple comparisons correction you protect the family-wise error rate as
before.
However, performing these tests in a Bayesian framework gives us much more
flexibility in our inference. We now have posterior distributions on the
parameters, so we have information not just on "how surprised are we to
see this data, given that we know the actual value of the parameter is
zero", but also about the true value of the parameter itself. So we can
now do lots of different kinds of "mapping". We could, for example, plot
the probability that the % signal change is > 5% at each voxel.
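For instance, here's a toy sketch of that kind of map (made-up posterior means and standard deviations on a tiny 4x4 "slice", assuming Gaussian posteriors; not real FLAME output):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical posterior mean and sd of % signal change at each voxel
post_mean = rng.normal(4.0, 2.0, size=(4, 4))
post_sd = np.full((4, 4), 1.5)

# Posterior probability that the signal change exceeds 5% at each voxel:
# P(beta > 5 | Y), one number per voxel -- a map, not a binary decision
p_gt_5 = 1 - stats.norm.cdf(5.0, loc=post_mean, scale=post_sd)
print(np.round(p_gt_5, 2))
```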
The issue is, Bayesians no longer have a binary concept of "active" or
"not active" or, in null-hypothesis speak, "reject" or "accept" the null.
So they should no longer perform this classification. Voxels have
continuous amounts of activity. As I said above, if you then use all this
information to classify (e.g. threshold at 95% chance that there was a
signal change of > 5%) then you will get e.g. 5% wrong, but the true
Bayesian perspective would not have you classify/threshold to perform
inference unless you have explicit representations of two classes (a
binary decision in the model, e.g. mixture modelling).
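A minimal sketch of what such an explicit two-class model looks like (all numbers invented): if the model says voxel statistics come from either a "null" or an "active" class, then the posterior probability of the "active" class follows directly from Bayes' rule, and classification is part of the model rather than bolted on afterwards.

```python
import numpy as np
from scipy import stats

# Toy two-class mixture: "null" voxels ~ N(0,1), "active" voxels ~ N(3,1),
# with an assumed prior activation rate of 20% (all numbers invented)
pi_active = 0.2
stat = np.array([-0.5, 0.8, 2.9, 4.1])   # hypothetical voxel statistics

lik_null = stats.norm.pdf(stat, loc=0, scale=1)
lik_act = stats.norm.pdf(stat, loc=3, scale=1)

# Posterior probability that each voxel belongs to the "active" class --
# the binary decision is explicit in the model, as in mixture modelling
p_active = (pi_active * lik_act
            / (pi_active * lik_act + (1 - pi_active) * lik_null))
print(np.round(p_active, 3))
```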
I don't know exactly what the PPM approach displays as maps at the end of
the day but, at HBM, Karl Friston was pretty clear that thresholding was
for the purposes of ease of visualisation on the glass brain, and not
for the purposes of inference. In this case, he is absolutely right: the
true Bayesian has no multiple comparisons problem.
Right - I've just reread that email and it seems to be a bit of a
whistle-stop tour of conceptual Bayes. I can see that it might be quite
hard to follow, but this is an interesting debate point, so if anyone has
any queries, we're happy to have a go at clarifying.
Cheers
Tim
On Tue, 29 Jun 2004, Joseph Devlin wrote:
> Hi Paige,
>
> The short answer is yes, you need to correct for multiple comparisons if
> you want to characterise your results in terms of "activations". As I
> understand these things, you have the option of simply describing the
> patterns as probably greater than 0 with a given confidence, but in
> practice this is not generally what one wants to say. Instead, we
> normally ask what is "activated" by a given contrast (e.g. more
> activated in condition A than B) and for that you need to control the
> risk of family-wise error.
>
> From what I can tell, this issue of not correcting posterior probability
> maps (PPMs) is a bit of semantic trickery and not generally useful given
> the types of questions one normally tries to answer using fMRI... but I'd
> be interested in hearing others' (more educated!) opinions.
>
> > Since FLAME uses Bayesian inference techniques, is it necessary
> > to correct for multiple comparisons when thresholding
> > for voxel-wise activation? The SPM info pages explicitly state
> > that it is not necessary to do so when using its Bayesian
> > estimation and inference packages; I was wondering if the same
> > held true for the estimation and inference techniques used by
> > FLAME? Also, if you could provide me with a "dummy's" explanation
> > for why this is or is not the case, I'd be most grateful.
>
> Joe
>
> --------------------
> Joseph T. Devlin, Ph. D.
> FMRIB Centre, Dept. of Clinical Neurology
> University of Oxford
> John Radcliffe Hospital
> Headley Way, Headington
> Oxford OX3 9DU
> Phone: 01865 222 738
> Email: [log in to unmask]
>
--
-------------------------------------------------------------------------------
Tim Behrens
Centre for Functional MRI of the Brain
The John Radcliffe Hospital
Headley Way Oxford OX3 9DU
Oxford University
Work 01865 222782
Mobile 07980 884537
-------------------------------------------------------------------------------