Hello Karl:
------You wrote-------------------------
What is your experience? If you have performed Monte-Carlo
simulations
and have shown that the family-wise false positive rate is
significantly
different from 0.05, using a corrected threshold of 0.05, then you
should disclose your results immediately. This can happen but it is
invariably due to some violation of the assumptions underlying the use
of GRF, which can, in itself, be enlightening.
-----------------------------
Unfortunately I would not go so far as to say I performed a "Monte-
Carlo" simulation. I simply entered a single column of random numbers
in place of my covariate of interest and revealed one false positive
(this is what I believe Mathew also did). Obviously this is not a
systematic test of the real false-positive rate, which would require
many thousands of random-number entries. I don't have the time or
energy to do that manually (!) and don't know of an easy way to do it
computationally.
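[For anyone wanting to automate this: below is a minimal sketch of such a
simulation. It is NOT an SPM analysis -- it assumes independent voxels and a
simple correlation test, whereas real images are spatially smooth (which is
exactly why GRF rather than Bonferroni is used), and all names and parameter
values are illustrative.]

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_subjects = 20   # scans/subjects per simulated study (illustrative)
n_voxels = 500    # "search volume", treated as independent voxels
n_sims = 1000     # Monte-Carlo repetitions
alpha = 0.05 / n_voxels  # Bonferroni-corrected per-voxel threshold

family_wise_hits = 0
for _ in range(n_sims):
    covariate = rng.standard_normal(n_subjects)          # random regressor
    data = rng.standard_normal((n_subjects, n_voxels))   # pure-noise "brain"
    # Pearson correlation of the covariate with every voxel at once
    x = (covariate - covariate.mean()) / covariate.std()
    z = (data - data.mean(axis=0)) / data.std(axis=0)
    r = x @ z / n_subjects
    # t statistic and two-sided p value at each voxel
    t = r * np.sqrt((n_subjects - 2) / (1 - r**2))
    p = 2 * stats.t.sf(np.abs(t), df=n_subjects - 2)
    # any suprathreshold voxel anywhere counts as one family-wise false positive
    if (p < alpha).any():
        family_wise_hits += 1

fwe_rate = family_wise_hits / n_sims
print("empirical family-wise false-positive rate:", fwe_rate)
```

Under these (independence) assumptions the empirical rate should come out
near the nominal 0.05; with smooth data and a GRF threshold the same loop
structure applies, only the per-simulation analysis changes.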
------You wrote-------------------------
This is remarkable! Could you let us know which Journals have advised
you to remove the corrected p value in favour of the uncorrected p
value.
----------------------------------------
Hmmm. I don't really want to tell tales but it has happened three
times, Pain and Psychosomatic Medicine are the two most recent
culprits. It boils down to a combination of reviewers not really
understanding the distinction (which Mathew alluded to) and my
presentation, which on the one hand tries to justify using an
uncorrected threshold and on the other still presents the corrected p-
values. In light of this discussion I will probably have to revise
this strategy.
------You wrote-------------------------
I think there are two themes that emerge from this debate: (i) the
potential of conditional inferences within a Bayesian (PEB) framework
and (ii) the dangers of not using anatomical constraints when making
classical inferences that are adjusted for the volume analysed.
The facility to report corrected [i.e. adjusted] p values was a vital
step forward in characterising PET data that established a rigour in
the eyes of other disciplines. However, this adjustment can be abused
if used indiscriminately. As a research programme matures one knows in
advance where the activations are likely to be expressed, and a small
volume correction should be employed around the sites in question. It
is clearly ridiculous to adjust for the entire brain volume when making
inferences about activations in the language system, given that the
language system has been defined by almost a decade of careful imaging
neuroscience (and, unlike the visual system, does not encompass most
of the brain).
I would expect results to be reported in terms of (i) the estimated
activation (parameter estimates reported in tabular or graphical
format), (ii) the Z score equivalent for cross comparison and
data-basing and (iii) the corrected p value using an appropriately
small or large VOI. The uncorrected p value is totally redundant
(given the Z equivalent) and has no inferential utility. In short the
issue is not corrected vs. uncorrected but "what degree of anatomical
constraint can I apply to maximise the sensitivity of my analysis" (in
this context using a very small volume reduces to using the
uncorrected
p value).
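[Karl's last point -- that a very small volume reduces to the uncorrected p
value -- can be illustrated numerically. The sketch below uses the standard
expected-Euler-characteristic approximation for a 3D Gaussian field, which
underlies the GRF correction; this is the high-threshold approximation
rather than the full expression SPM computes, and the function names and
resel counts are mine, purely for illustration.]

```python
import numpy as np
from scipy import stats

def ec_density_3d(z):
    """Expected Euler characteristic per resel for a 3D Gaussian field
    at threshold z (high-threshold approximation)."""
    return (4 * np.log(2)) ** 1.5 / (2 * np.pi) ** 2 \
        * (z**2 - 1) * np.exp(-z**2 / 2)

def corrected_p(z, resels):
    """Approximate family-wise p value for a search volume of `resels`
    resolution elements; bounded below by the uncorrected p value and
    above by 1, so that a tiny volume reduces to no correction at all."""
    uncorrected = stats.norm.sf(z)
    return float(min(1.0, max(uncorrected, resels * ec_density_3d(z))))

z = 3.5
for resels in (1000, 100, 10, 1):
    print(f"{resels:5d} resels -> corrected p = {corrected_p(z, resels):.5f}")
print(f"uncorrected p = {stats.norm.sf(z):.5f}")
```

As the search volume shrinks the corrected p value falls towards the
uncorrected one, which is the quantitative sense in which a well-motivated
small volume correction maximises sensitivity.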
-------------------------------
This was very useful and I am going to try to adopt the strategy you
outline from now on. I am, however, faced with the issue of
pain "activating the whole brain"; pain does activate an awful lot,
making it tedious to use the Small Volume Correction, which in any
case tends towards a correction for the whole brain. I have some
other questions swirling around that I can't quite get to a point of
articulation at the moment. As I am seeing Tom N. pretty soon I will
pick his brains in exchange for beer and get back to you.
All best wishes,
Stuart.