I apologize for the lengthy e-mail. But this is certainly a juicy topic
for discussion.
It was not easy here in the States to get a copy of Robert Matthews'
article in the Sunday Telegraph, but my librarian did finally prevail.
As Ted Harding and Steve Simon both point out, Matthews does indeed lump
together several distinct problems with statistical tests. P values
confound effect size and random error. Presentation of the point
estimate of the effect size along with confidence intervals nicely
avoids this problem, and thus provides more information, albeit based on
the same distribution and statistical assumptions. Other than inertia,
the main reason P values are still used so much is that they so
readily give us a single number that is always compared to the same
index (0.05) across all fields of research (and throughout our
individual careers). We all crave that simplicity, even if we know
better.
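To see the confounding concretely, consider this minimal sketch in
Python (the effect sizes and standard errors are made-up figures, and
the helper is just an ordinary normal z test): two studies can share
essentially the same P value of about 0.05 while telling very different
stories once the point estimate and confidence interval are displayed.

    from math import erf, sqrt

    def two_sided_p(effect, se):
        # Normal-theory two-sided P value for a statistic effect / se
        z = abs(effect / se)
        return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

    # A tiny effect measured precisely, and a large effect measured
    # noisily, both "significant" at about the same level:
    print(two_sided_p(0.02, 0.0102))  # effect 0.02, 95% CI about (0.00, 0.04)
    print(two_sided_p(2.00, 1.02))    # effect 2.00, 95% CI about (0.00, 4.00)

The P value alone cannot distinguish these two results; the estimate
with its confidence interval does so immediately.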
Another issue Matthews implies and Harding identifies is publication
bias, or the file drawer problem. This problem is dangerous and does
indeed threaten the whole research enterprise, including evidence-based
medicine. But as vexing and intractable as this problem seems, it has a
simple solution. Researchers and journals must simply publish their
negative results. Otherwise adherence to P value hypothesis testing
does indeed provide a worldwide filter for selecting false positive
results.
The Bayesian problem is more complicated, but is certainly
understandable by anyone reading this list. Although at root it only
involves simple arithmetic calculations on simple probabilities, it
dramatically demonstrates some disturbing limitations of statistical
tests. Without going into much detail I think I can demonstrate the
essential Bayesian insight using an analogy that has helped me and that
should be familiar to any well-trained clinician. Any biostatistics
textbook will show how Bayes' theorem explains the way the prevalence of
a disease influences the predictive values given by a diagnostic test.
Once this is understood it is usually disturbing to those who want the
test to always give answers of the same reliability regardless of the
population a patient comes from. A perfect test would. It is only
imperfect tests that are influenced by prevalence. For imperfect tests,
the higher the prevalence (the prior probability of disease), the higher
the positive predictive value (the post test or posterior probability of
disease). Also, the more imperfect the test (the worse the sensitivity
and specificity), the greater the influence of the prior probability on
the posterior probability. This is a mathematical fact that no one has
disputed since Bayes demonstrated it more than 200 years ago (using
basic probabilities, not
medical diagnostic tests). "Frequentists" use Bayes' theorem to
interpret diagnostic tests just as readily as "Bayesians"; they simply
insist that the estimate of the prevalence be based on real frequency
data. In the absence of such an estimate, a frequentist will maintain
that Bayes' theorem is not applicable. But "radical Bayesians" will go
ahead and make their best guess at the prevalence, and thus calculate a
"personal" post test probability. Obviously, people with different
guesses concerning the prior probability will end up with different post
test probabilities.
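To make the prevalence effect concrete, here is a minimal sketch in
Python (the 90% sensitivity and specificity and the two prevalences are
made-up figures, chosen only for illustration):

    def ppv(prevalence, sensitivity, specificity):
        # Bayes' theorem: P(disease | +T) = P(+T | disease) * P(disease) / P(+T)
        true_pos = sensitivity * prevalence
        false_pos = (1 - specificity) * (1 - prevalence)
        return true_pos / (true_pos + false_pos)

    # The same imperfect test applied to two different populations:
    print(ppv(0.50, 0.90, 0.90))   # referral clinic:  PPV = 0.90
    print(ppv(0.01, 0.90, 0.90))   # mass screening:   PPV = 0.08

The identical positive result means something quite different in the
two populations, which is the whole point.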
Now, understand that any statistical hypothesis test is an imperfect
test due to random error, and that statistical hypothesis testing of
research results is exactly analogous to testing for disease with an
imperfect diagnostic test. What the researcher seeks is the post test
probability that the alternative hypothesis is true (or that the null
hypothesis is false). Just as in the case of the imperfect medical
diagnostic test, the statistical hypothesis test will crank out a post
test probability that is dependent, according to Bayes' theorem, on the
prior probability estimate of the truth of the alternative hypothesis.
To paraphrase Niels Bohr, if you are not disturbed by this, you don't
understand the situation. Each person will bring a different degree of
belief in the alternative hypothesis to the experimental test, and using
Bayes' theorem will calculate a different post test probability.
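The analogy can be run through the same arithmetic. In the sketch below
(again Python, again with made-up numbers), a "significant" result plays
the role of a positive test: power stands in for sensitivity and alpha
for the false positive probability. Three readers who bring different
priors to the same experiment reach three different conclusions:

    def posterior(prior, power=0.80, alpha=0.05):
        # P(Ha | +T) by Bayes' theorem, treating statistical
        # significance as a positive result on an imperfect test
        return (power * prior) / (power * prior + alpha * (1 - prior))

    for prior in (0.1, 0.5, 0.9):
        print(prior, round(posterior(prior), 2))
    # prior 0.1 -> posterior 0.64
    # prior 0.5 -> posterior 0.94
    # prior 0.9 -> posterior 0.99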
This problem was well understood in Ronald Fisher's day. His reaction
to it was to invent the P value (the probability of obtaining, by
random chance, data at least as extreme as the experimental data if the
null hypothesis is true).
For various historical reasons the scientific community took the P value
with great relief and never looked back. For Fisher the P value was a
scalar estimate of the weight of the evidence. To his consternation,
Neyman and Pearson dumped the scalar aspect and used a predetermined P
value as a dichotomous test.
The relief was not so long-lived in the theoretical statistics
community. It soon became apparent that Fisher had not done away with
Bayes' theorem. He had simply settled permanently for the special case
in which belief toward the alternative and null hypotheses is completely
"equivocal", in other words, a prior probability of 0.5 (with a broad
distribution for continuous data). This is a one-size-fits-all
approach; and as Robert Matthews correctly points out, this is a lot of
credibility to give to some hypotheses. It is exactly analogous to
incorrectly assuming a high disease prevalence and using it to calculate
an incorrectly high positive predictive value for a medical diagnostic
test result. This is counter to a common informal axiom of science:
outrageous claims require overwhelming evidence, while expected findings
are easily accepted with only moderate amounts of evidence. Bayes'
theorem simply formalizes this axiom and makes the degree of prior
belief explicit.
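In odds form the formalization is a single line: posterior odds = prior
odds x likelihood ratio. To illustrate with the likelihood ratio of 19
discussed below (which assumes an alpha of 0.05 and, purely for
illustration, 95% power): an equivocal prior of 0.5 is prior odds of
1:1, so the posterior odds are 19:1 and the posterior probability is
0.95; but a skeptical prior of 0.1 is prior odds of 1:9, so the
posterior odds are only 19:9 and the posterior probability is about
0.68. The more outrageous the claim, the more evidence it takes.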
A handful of statisticians have been crying in the wilderness about
this problem for years. Unfortunately, lower level statistics textbooks
inadequately address this problem (if at all). The scientific community
has been reluctant to question P values because of the seductive nature
of their simplicity and the complexity and distasteful philosophical
implications of the alternative, a return to the Bayesian world of
subjective estimates of prior probabilities and the resulting personal
posterior probabilities.
The key to understanding all of this is to realize that there have
arisen three parallel terminologies that describe the elements and
relationships used to derive probabilities from test results: Bayesian
terminology, statistical hypothesis testing terminology, and medical
diagnostic testing terminology (derived from signal detection theory in
physics). For example: in Bayesian terminology P(+T | Ha) means the
probability of +T given that Ha is true, which is the probability of a
positive test result if the alternative hypothesis is true, also called
the likelihood, which in diagnostic testing and signal detection is
called sensitivity, which in Neyman-Pearson hypothesis testing is called
the power of the test (1 - beta, where beta is the type II error
probability or the false negative probability). All of these are the
same entity. Also, P(+T | Ha) / P(+T | Ho) is called the likelihood
ratio (for example, with an alpha of 0.05 and power of 0.95, the
likelihood ratio is 0.95/0.05 = 19), which in diagnostic testing is the
ratio of sensitivity to the false positive probability, the two
quantities plotted against each other on the ROC curve derived from
signal detection theory. (I have tabulated all these terminologies in a
Rosetta Stone, but it is a Microsoft Word table that might not work
properly as e-mail.) The different terminologies have too long obscured
the fact that the Bayesian relationships are at the root of all
imperfect tests, including statistical hypothesis tests; and the
influence of the often subjective estimate of the prior probability on
the outcome is inescapable.
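Until the Word table can be distributed, here is a rough plain-text
sketch of it, containing nothing beyond the correspondences already
described above:

    Bayesian                   Hypothesis testing      Diagnostic testing
    -----------------------    --------------------    --------------------------
    P(+T | Ha) (likelihood)    power (1 - beta)        sensitivity
    P(+T | Ho)                 alpha (type I error)    false positive probability
    P(+T | Ha) / P(+T | Ho)    (no standard name)      likelihood ratio (ROC)
    prior probability          --                      prevalence (pre-test)
    posterior probability      --                      positive predictive value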
Again I apologize for the length of this; but I hope this is useful for
those who consider themselves Bayesian-challenged, as I was until
recently. I believe we would all be very interested in hearing the
views of the gurus of evidence-based medicine, as well as of other
troops in the trenches such as myself. My own two cents' worth is that in
addition to publishing the P value, or something like it, that
represents the "equivocal" 0.5 Fisherian prior probability assumption,
researchers should also publish the results for a 0.1 or 0.9 prior
probability assumption, for those who strongly disbelieve or strongly
believe the alternative hypothesis, respectively. Or perhaps they could
simply provide a formula so that the reader could fill in their own
prior probability estimate and crank out their own personal post-test
probability. Of course policy makers, who understandably hate this kind
of ambiguity, are in for a rough ride.
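For instance, for a hypothetical study with 90% power at an alpha of
0.05, the formula would read: post-test probability = (0.90 x prior) /
(0.90 x prior + 0.05 x (1 - prior)), which gives roughly 0.67 for the
skeptic's prior of 0.1 and roughly 0.99 for the believer's prior of 0.9.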
Some further reading on this fascinating problem:
Goodman SN. P-values, hypothesis tests, and likelihood: implications
for epidemiology of a neglected historical debate. Am J Epidemiol 1993;
137:485-96.
Gigerenzer G, et al. The empire of chance. Cambridge University Press,
Cambridge, 1989. (A layperson's history of probability and statistics.)
Berry DA. Statistics: a Bayesian perspective. Duxbury Press, New York,
1996. (An introductory statistics textbook.)
Berry DA, Stangl DK (eds.). Bayesian biostatistics. Marcel Dekker, Inc.,
New York, 1996. (A more advanced book with medical research
applications.)
David L. Doggett, Ph.D., Medical Research Analyst
Health Technology Assessment and Information Service
ECRI, a non-profit health services research organization
5200 Butler Pike, Plymouth Meeting, PA 19462 USA
(610) 825-6000 ext 509, FAX (610) 834-1275