Dear Stuart,
> I don't want to labour this point but it is important. There is lots
> of stuff out there at an uncorrected p, some of it mine..!
>
> Mathew wrote:
>
> -------------------------------
> Well, if you think that a threshold that would give you a 0.05
> probability of a false positive is too harsh, then a corrected
> threshold of 0.05 is too harsh. If you do want that level of control
> of false positives, then to say that corrected p values are too harsh
> is simply false. Thresholding at a corrected p value of 0.05, using
> Random Field theory, gives you a false positive rate that is very near
> 1 in 20, exactly as requested. You can show this from theory, from
> random number data (see Worsley 1996 paper and the link from the
> previous mail), and from real data (see the Worsley 1992 paper). With
> an uncorrected p value, you have no idea what the corresponding false
> positive rate is. Because it is a 'p value', it appears to refer to
> the false positive rate in your experiment, but in fact this is not
> the case.
> -----------------------
>
> Two points in response to this. My understanding is that a Bonferroni
> correction within SPM is overly harsh, i.e., it does not give a false
> positive rate of 1 in 20 but it gives a false positive rate
> significantly less than 1 in 20 that varies depending on the precise
> parameters of your study and analysis. I garnered this understanding
> from the SPM course video, specifically Andrew's talk and his
> attempts to provide a better estimation of the false positive rate. I
> know I shouldn't believe everything I see on television so perhaps
> someone else could chip in on this.
When the assumptions underpinning the use of GRF hold, the expected
false positive rate is as specified. Early versions of SPM used GRF for
Z-variate fields after a probability integral transform, but now (for
the past few years) the explicit expressions for the SPM{T} or SPM{F}
are used. Obviously, with very low thresholds the false positive rates
will deviate from their nominal values because the GRF results are only
asymptotically true.
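For concreteness, this is the sort of closed-form expression GRF theory
supplies. Keeping only the three-dimensional term for a Gaussian (Z)
field, and writing R for the number of resels (the search volume in
units of the smoothness), the chance of any suprathreshold peak under
the null hypothesis is approximately the expected Euler characteristic:

    P(max Z > u)  ~=  E[EC(u)]  =  R (4 ln 2)^{3/2} (2 pi)^{-2} (u^2 - 1) exp(-u^2 / 2)

The corrected threshold is simply the u at which this expectation equals
0.05; the SPM{T} and SPM{F} results use the corresponding EC densities
for t and F fields. The approximation is accurate at high thresholds,
which is the asymptotic regime referred to above.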
> I have a slightly more controversial retort, however, which is that
> the p<0.05 test for false positives is without doubt overly harsh
> regardless of whether it gives a 1 in 20 chance of a false positive
> or a more conservative rate. Why is this? Simple: if your
> intervention (stimulus, cognition, affect, whatever) has no effect
> (i.e., the null hypothesis is true) then the only kind of error that
> can be made is a type I error: a false positive, and the rate of that
> error will indeed be constrained by your corrected threshold. But if
> your experimental intervention does have an effect, then a type I
> error is impossible. The errors will be type II: False negatives.
> The type II error rate is rarely as low as 5% for any branch of natural
> science. For us functional imagers the problem is catastrophic.
> Firstly, by the principle of materialism always being correct, it has
> to be the case that our experimental interventions alter activity in
> the brain. The null hypothesis is always wrong, the profile of
> activity has to change. If you are searching for a regional effect
> then the story changes (although there is plenty of BS to be had
> between "changes in the brain" and "changes in regions x,y,z"). If
> you are looking for a particular region or network of regions then it
> would be advisable to calculate error rates so as to assess the
> possibility of a type II error. This is a power analysis and
> everybody I talk to tells me a power analysis is impossible for
> functional imaging... The term "buggered" springs to mind!
The term "Bayesian" should spring to mind: The reason why power
analyses are so difficult in neuroimaging is that the specification of
the alternative hypothesis is complicated. If one knew the prior
distributions of the evoked responses in all brain areas in all
experimental contexts then the power (and Type II error rates) could be
computed under those prior densities. More importantly, if we knew the
prior densities for every experiment then we could proceed with
conditional inferences about the activations given the data that eschew
the multiple comparison problem (there is no categorical declaration
that a voxel has 'activated' and therefore no false positives or
negatives). The conditional Bayesian inference simply says that, given
the data, the probability that the activation in a voxel is greater
than some value is P. This posterior probability does not change with
the number of voxels analysed and completely resolves the difficulties
inherent in classical inference that you allude to above.
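As a minimal numerical sketch of this kind of conditional inference
(nothing to do with the SPM implementation; it assumes a zero-mean
Gaussian prior with Gaussian observation error so the posterior is
available in closed form, and the function name and numbers are purely
illustrative):

    # A minimal sketch of a conditional (posterior) inference for a single
    # voxel -- not the SPM implementation.  Assumes a zero-mean Gaussian
    # prior on the activation and Gaussian observation error.
    import math

    def posterior_prob_activation(b, s2, tau2, gamma):
        """Return P(activation > gamma | data) for one voxel.

        b     : estimated effect (e.g. a contrast of parameter estimates)
        s2    : error variance of that estimate
        tau2  : prior variance of the effect (prior mean taken to be zero)
        gamma : the size of effect we wish to make an inference about
        """
        shrink = tau2 / (tau2 + s2)               # shrinkage towards the prior mean
        post_mean = shrink * b
        post_var = shrink * s2
        z = (gamma - post_mean) / math.sqrt(post_var)
        return 0.5 * math.erfc(z / math.sqrt(2))  # 1 - Phi(z)

    # Purely illustrative numbers: an effect of 2.1 units with unit error
    # variance, a prior variance of 4, and a threshold effect of 1 unit.
    print(posterior_prob_activation(b=2.1, s2=1.0, tau2=4.0, gamma=1.0))

The point made above is visible in the code: nothing in this posterior
probability depends on how many other voxels were analysed.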
The problem is that there is no way of specifying the prior densities
for all experiments. There is, however, an approach that can estimate
the priors in a maximum likelihood sense from the data using the linear
models we usually adopt. This approach is called Parametric Empirical
Bayes (PEB). We have been evaluating PEB methodology in relation to PET
and fMRI over the past year or so and it looks very promising. We
currently have four papers under submission detailing the approach
which will be made available after peer review.
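As a toy illustration of the empirical Bayes idea, and emphatically not
the PEB scheme in those papers: if every voxel's estimate shares a
zero-mean Gaussian prior and has the same known error variance, the
prior variance can be estimated from the marginal distribution of the
estimates and then used to shrink them.

    # A toy parametric empirical Bayes sketch -- not the PEB scheme in the
    # papers mentioned above.  Assumes every voxel's estimate has the same
    # known error variance s2 and shares a zero-mean Gaussian prior whose
    # variance is estimated from the data themselves.
    import numpy as np

    def peb_shrink(b, s2):
        """Shrink voxel-wise effect estimates using an empirically estimated prior."""
        b = np.asarray(b, dtype=float)
        # the marginal of each estimate is N(0, tau2 + s2), so a simple
        # (maximum likelihood) estimate of the prior variance is:
        tau2 = max(float(np.mean(b ** 2)) - s2, 0.0)
        shrink = tau2 / (tau2 + s2) if tau2 > 0 else 0.0
        return shrink * b, tau2

    # Illustrative use with simulated data (true effects unknown in practice):
    rng = np.random.default_rng(0)
    true_effects = rng.normal(0.0, 2.0, size=10000)
    estimates = true_effects + rng.normal(0.0, 1.0, size=10000)
    shrunk, tau2_hat = peb_shrink(estimates, s2=1.0)
    print(tau2_hat)          # close to 4, the true prior variance used above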
> ------------------------
> This was one of the first things I did with SPM, back in 1996. I took
> my own activation PET scan data from 7 subjects, put in the full model
> for the subjects and global counts, and added a fresh column of random
> numbers to the model as a covariate. From this I created an SPM
> looking for an effect of this random number covariate. Over hundreds
> of repetitions I found that the 0.05 corrected height threshold gave
> about 1 in 20 analyses with a false positive peak. Nearly every SPM thus
> generated gave one or more false positive peaks at p<0.001
> uncorrected.
> -------------------------
>
> Well, this was not my experience.
What is your experience? If you have performed Monte-Carlo simulations
and have shown that the family-wise false positive rate is significantly
different from 0.05, using a corrected threshold of 0.05, then you
should disclose your results immediately. This can happen but it is
invariably due to some violation of the assumptions underlying the use
of GRF, which can, in itself, be enlightening.
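By way of a sketch of the sort of Monte-Carlo simulation in question
(the voxels here are independent and the correction is Bonferroni
rather than GRF, so the spatial smoothness is ignored; the numbers of
voxels and simulations are purely illustrative):

    # A sketch of such a Monte-Carlo simulation under the null hypothesis.
    import numpy as np
    from scipy import stats

    n_voxels, n_sims = 50000, 1000
    u_unc = stats.norm.isf(0.001)             # uncorrected p < 0.001 threshold
    u_cor = stats.norm.isf(0.05 / n_voxels)   # Bonferroni-corrected p < 0.05

    rng = np.random.default_rng(0)
    fwe_unc = fwe_cor = 0
    for _ in range(n_sims):
        z = rng.standard_normal(n_voxels)     # null data: no true activations
        fwe_unc += bool(z.max() > u_unc)      # any suprathreshold voxel is a false positive
        fwe_cor += bool(z.max() > u_cor)

    print(fwe_unc / n_sims)   # essentially 1: almost every null analysis 'activates'
    print(fwe_cor / n_sims)   # close to 0.05, as the corrected threshold promises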
> -----------------------------
> > Finally, it is difficult to assess regional involvement across
> > studies when authors only report a few regions at a very high level
> > of significance.
>
> There is a very important point here, which is well raised. It is
> indeed difficult to compare results across studies. This is
> primarily a problem of giving t or Z or p values rather than effect
> size, and again related to the difference between hypothesis testing
> and estimation (see links in my earlier mail). But to return to my
> earlier point, the problem is not resolved by using uncorrected p
> values, because they do not have any meaning in this context. The
> false positive rate for any given uncorrected p value depends on the
> number of voxels analysed, the shape of the volume analysed, and the
> smoothness of the data (Worsley 1996). Thus, your p<0.001 is not
> comparable to that of another study. It is of course reasonable to
> report, as trends, results that do not reach conventional levels of
> significance, but my own view would be that this is best achieved with
> corrected p<0.1 etc, as this will take into account all the above
> variables.
>
> ---------------------------
>
> I agree. Reporting confidence intervals and effect sizes would
> improve the situation and would be advisable for virtually all the
> social sciences. I like your
> suggestion of dropping the corrected threshold rather than using an
> uncorrected value. As it happens I tend to report the corrected
> alongside the uncorrected thresholds in my papers, although reviewers
> give me a hard time and sometimes force me to take out the corrected
> values...
This is remarkable! Could you let us know which journals have advised
you to remove the corrected p value in favour of the uncorrected p
value?
I think there are two themes that emerge from this debate: (i) the
potential of conditional inferences within a Bayesian (PEB) framework,
and (ii) the dangers of not using anatomical constraints when making
classical inferences that are adjusted for the volume analysed.
The facility to report corrected [i.e. adjusted] p values was a vital
step forward in characterising PET data that established a rigour in
the eyes of other disciplines. However, this adjustment can be abused
if used indiscriminately. As a research programme matures one knows in
advance where the activations are likely to be expressed, and a small
volume correction should be employed around the sites in question. It
is clearly ridiculous to adjust for the entire brain volume when making
inferences about activations in the language system, given that the
language system has been defined by almost a decade of careful imaging
neuroscience (and, unlike the visual system, does not encompass most
of the brain).
I would expect results to be reported in terms of (i) the estimated
activation (parameter estimates reported in tabular or graphical
format), (ii) the Z score equivalent for cross comparison and
data-basing and (iii) the corrected p value using an appropriately
small or large VOI. The uncorrected p value is totally redundant
(given the Z equivalent) and has no inferential utility. In short, the
issue is not corrected vs. uncorrected but "what degree of anatomical
constraint can I apply to maximise the sensitivity of my analysis" (in
this context using a very small volume reduces to using the uncorrected
p value).
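To make the role of the search volume explicit, here is a sketch using
only the three-dimensional Gaussian-field term given earlier; the resel
counts are invented for illustration and would in practice follow from
the search volume and the estimated smoothness of the data:

    # A sketch of how the search volume enters the corrected p value, using
    # only the three-dimensional Gaussian-field term given earlier.  The
    # resel counts below are invented for illustration.
    import math

    def corrected_p(z, resels):
        """Approximate corrected p value for a peak of height z (Gaussian field)."""
        ec = (resels * (4 * math.log(2)) ** 1.5 / (2 * math.pi) ** 2
              * (z ** 2 - 1) * math.exp(-z ** 2 / 2))
        return min(1.0, max(ec, 0.0))

    z = 3.7
    print(corrected_p(z, resels=1000))   # whole-brain search: not significant
    print(corrected_p(z, resels=20))     # small pre-specified VOI: survives p < 0.05

The same peak that fails a whole-brain correction survives within a
small, anatomically motivated volume, which is the sense in which the
anatomical constraint maximises sensitivity.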
With very best wishes - Karl