[Best viewed in fixed-width font!]
Following on from the recent discussion (about "negative results" insofar as
this means that they are "not significant" at some level of statistical
significance), I'd like to try to clarify the situation from a more
theoretical point of view.
The principles of this are fairly straightforward, though perhaps they
involve simultaneously considering more issues than most people (maybe
even most editors and reviewers) are in the habit of considering at once.
They also have some uncomfortable implications in the context of usual
practice.
Let's consider just a simple Null Hypothesis (NH), e.g. "no difference"
versus a simple Alternative Hypothesis (AH), e.g. "difference = D" (I know
that in practice one does not often envisage a single value of D, but we
can suppose that this is, say, the smallest difference of practical
clinical significance).
"Rejection" of NH in favour of AH arises when a test statistic T exceeds
a critical value T0, and when NH is true this will occur with frequency
alpha ("size" of test, "probability of Type I error"). When AH is true,
this will occur with probability (1 - beta ) ("Power" of test), where
beta = "probability of Type II error".
A good test statistic gives small beta for any given alpha; the "best"
test statistic T gives the smallest possible beta ("most powerful test"),
and is equivalent to a likelihood-ratio test.
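(For concreteness, here is a minimal sketch of the above in Python, for the
normal case with known sigma; the values of D, sigma and n are pure
inventions for illustration, not taken from any real study.)

    # One-sided z-test of NH: mean = 0 against AH: mean = D, known sigma.
    # This is the likelihood-ratio ("most powerful") test in this setting.
    # D, sigma, n and the choice alpha = 0.05 are illustrative assumptions.
    import numpy as np
    from scipy.stats import norm

    D, sigma, n = 0.5, 1.0, 25          # assumed difference, SD, sample size
    se = sigma / np.sqrt(n)             # standard error of the sample mean T
    T0 = norm.ppf(1 - 0.05) * se        # critical value giving alpha = 0.05

    alpha = 1 - norm.cdf(T0 / se)       # P(T > T0 | NH)  -- size, Type I rate
    beta  = norm.cdf((T0 - D) / se)     # P(T <= T0 | AH) -- Type II rate
    print(alpha, beta, 1 - beta)        # size, beta, power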
At this stage there is no theoretical criterion whatever for the choice
of any particular value of alpha: 0.05, 0.01 or whatever are purely
*conventional* values, and you can choose what you like. Whichever you
choose, there is a corresponding "best" beta, so the performance of the
test, given the trial design, is summed up in a graph that looks like
1 *
|*
| *
| *
+ *
beta | *
| *
| *
+ *
| *
| *
| *
0 +----+----+----+----+----+----*
0 alpha 1
You can set alpha=0 (never reject NH) and you will never make a Type I
error; but then beta=1 and whenever AH is true you will make a Type II
error. Or you can set alpha=1 (reject every time); then beta=0 and you
will never make a Type II error but whenever NH is true you will make a
Type I error. Both of these extremes are usually unrealistic (and amount
to ignoring the data). So you choose alpha *between* 0 and 1, and this
test procedure (T and choice of alpha) determines the (alpha,beta) pair
for a point on the above curve.
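(Continuing the same made-up numerical example -- Python again, with D,
sigma and n purely illustrative -- the whole curve can be traced by letting
alpha run between 0 and 1:)

    # Trace the (alpha, beta) curve for the illustrative normal setting:
    # each alpha fixes a critical value T0, and beta follows from it.
    import numpy as np
    from scipy.stats import norm

    D, sigma, n = 0.5, 1.0, 25
    se = sigma / np.sqrt(n)

    alphas = np.linspace(0.001, 0.999, 999)
    T0s    = norm.ppf(1 - alphas) * se       # critical value for each alpha
    betas  = norm.cdf((T0s - D) / se)        # corresponding Type II rates
    # betas falls from near 1 (alpha near 0) to near 0 (alpha near 1),
    # which is exactly the shape of the graph sketched above.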
Adopting a conventional value of alpha (e.g. 0.05) means that only 1 in 20
studies where NH is true will come through the test and be accepted for
publication in "Journal of Significant Research" (or, in case you choose
alpha = 0.01, in "Journal of Extremely Significant Research"). Conversely,
if AH is true then the chance of publication is (1 - beta), as read off
from the graph. Adopting a conventional alpha means that the criterion
for assessing the study is based solely on control of alpha, the "Type I
error rate"; the value of beta ("Type II error rate") is a consequence of
this choice (but is rarely quoted explicitly ... ).
Clearly, at this stage, you can start to think about a good choice of
alpha (and, therefore, of beta at the same time) by considering the
above graph in relation to the comparative consequences of "Type I" and
"Type II" errors. If a "Type I" has graver consequences than a "Type II"
then you want to make alpha small and you don't mind so much about beta
not being small; therefore you would tend to prefer the left-hand part of
the curve. Conversely, if "Type II" is grave but "Type I" is not, then
you prefer the right-hand end. If you reckon that they are about equally
important, then you might like to get the error rates equal, in which case
you choose alpha (therefore beta=alpha) where the line "beta=alpha" cuts
the curve.
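(In the same made-up Python setting, that "equal error rates" point is just
the root of beta(alpha) = alpha; nothing here is specific to any real study.)

    # Find the alpha at which beta(alpha) = alpha ("equal error rates"),
    # for the same illustrative D, sigma, n as before.
    import numpy as np
    from scipy.stats import norm
    from scipy.optimize import brentq

    D, sigma, n = 0.5, 1.0, 25
    se = sigma / np.sqrt(n)

    def beta_of(a):                          # Type II rate as a function of alpha
        return norm.cdf(norm.ppf(1 - a) - D / se)

    a_eq = brentq(lambda a: beta_of(a) - a, 1e-6, 1 - 1e-6)
    print(a_eq, beta_of(a_eq))               # alpha = beta here (about 0.11)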
However, this does not take account of the likelihood that NH or AH will
in fact be true. Indeed, if you knew for certain that NH could NOT be
true, then you would always reject anyway (alpha=1); likewise, if you
knew that NH must ALWAYS be true then you would never reject (alpha=0,
beta=1). So, the greater your expectation that NH is true, the smaller
you should choose the value of alpha.
These ideas, relating respectively to the consequences and to the
likelihoods of the two cases, are at this stage quantitatively imprecise
and mainly indicate which direction you should be looking in.
You can take it further to a quantitatively precise conclusion, if you are
in a position to assemble all the elements required for a Bayesian
calculation of an optimal decision. For this you need the prior
probability or expectation that NH is true (p1, say) and that AH is true
(p2, = 1-p1); and you also need a measure of the "cost" (c1) of a "Type I"
error and a measure of the "cost" (c2) of a "Type II" error.
The result of the calculation is that the best (least expected cost)
choice of alpha (and the corresponding beta as read from the curve)
occurs at the point on the curve where its slope (which is negative)
has the value
- (c1 x p1)/(c2 x p2)
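(In the same illustrative Python setting -- with p1, p2, c1, c2 also
invented purely for illustration -- the point can be located numerically;
for the normal case the slope of the curve at critical value T0 is minus the
likelihood ratio f_AH(T0)/f_NH(T0), which is why the grid search below works.)

    # Locate the least-expected-cost point: the alpha at which the slope
    # of the (alpha, beta) curve equals -(c1*p1)/(c2*p2).  In this normal
    # setting the slope at critical value T0 is -f_AH(T0)/f_NH(T0), so a
    # grid search over alpha is enough.  All numbers are illustrative.
    import numpy as np
    from scipy.stats import norm

    D, sigma, n = 0.5, 1.0, 25
    se = sigma / np.sqrt(n)
    p1, p2 = 0.7, 0.3                   # assumed prior expectations of NH, AH
    c1, c2 = 4.0, 1.0                   # assumed costs of Type I, Type II errors

    alphas = np.linspace(1e-4, 1 - 1e-4, 100001)
    T0s    = norm.ppf(1 - alphas) * se
    slopes = -norm.pdf(T0s, loc=D, scale=se) / norm.pdf(T0s, loc=0, scale=se)
    i      = np.argmin(np.abs(slopes - (-(c1 * p1) / (c2 * p2))))
    print(alphas[i], norm.cdf((T0s[i] - D) / se))   # optimal alpha and its beta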
Short of being able to reach this point (i.e. you do not have the
required information), there is no single "best" choice of alpha,
and you are free to choose within a range which respects such information
as you do have regarding expectations (of NH vs AH) and consequences
(relative gravity of "Type I" vs "Type II"), subject to the more
qualitative considerations discussed above.
And that's it, really.
The "uncomfortable" consequences of all this flow from the fact that one
almost never sees these considerations made explicit in
practice, even in their vaguer forms. This indicates, to me at least,
that -- insofar as observed "significance levels" get compared with
conventional test sizes (e.g. 0.05 and 0.01) and these influence both
decisions of researchers as to whether to submit for publication or
where to submit ("JSR or JESR?"), and possibly also decisions of editors
and reviewers about acceptability for publication -- the EVIDENTIAL
meaning of published research remains -- to some extent at least --
unquantified and possibly unquantifiable. This does not strike me as
good.
Confidence intervals help to some extent in practice, since they present
more information than the mere "P<0.05" does about the "multiple
alternative" (i.e. where there is not a single value of D, the difference
between NH and AH, but a range of possible values); but they have nothing
to add to the observed significance level ("P-value") in the case of a
simple alternative (single value of D). But in almost all cases they
suffer from the same fundamental flaw: a conventional 95% CI is the set
of values, each of which would not have been rejected by a test with
alpha=0.05 had it been adopted as a NH. A CI with a fixed confidence
level is on the same footing as a test with fixed alpha -- you lose all
the consequences you could draw (as above) by considering different
values of the confidence level (from 0 to 100%).
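(A sketch of that test-inversion view of the CI, using the same made-up
numbers as before; the observed mean xbar is also invented.)

    # A conventional 95% CI obtained by test inversion: the set of values
    # d0 that a two-sided test with alpha = 0.05 would NOT reject when
    # taken as the null hypothesis.  xbar, sigma, n are illustrative.
    import numpy as np
    from scipy.stats import norm

    xbar, sigma, n = 0.42, 1.0, 25
    se = sigma / np.sqrt(n)

    grid = np.linspace(xbar - 1.0, xbar + 1.0, 20001)       # candidate nulls d0
    keep = np.abs(xbar - grid) / se <= norm.ppf(0.975)      # not rejected at 0.05
    print(grid[keep].min(), grid[keep].max())               # ~ xbar -/+ 1.96*se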
Of course, even if you can go all the way to the Bayesian calculation,
your prior expectations (p1,p2) and cost evaluations (c1,c2) may not be
the same as anyone else's; and someone else may not want to make a
*decision* but really want to know what *Information* has been obtained.
In the case considered here, this information is encapsulated in
(a) The above graph
(b) The value of D
(c) Specification of test statistic T
(d) The value of T obtained
and in general is encapsulated in the design of the study along with the
likelihood function. In fact, distinguished people have from time to time
advocated publishing design+likelihood function, on the grounds that once
you have these you can do what you like; but this does not seem to have
had much impact on usual practice.
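(For what it is worth, in the toy normal setting used above the
"design + likelihood" report is very short; here is a sketch, with the
observed mean again invented for illustration.)

    # The likelihood function for the toy setting used throughout: with
    # design (sigma, n) and observed sample mean t, L(mu) is proportional
    # to the normal density of t at mean mu.  Anyone given this can compute
    # likelihood ratios, P-values, or a posterior with their own prior.
    import numpy as np
    from scipy.stats import norm

    sigma, n, t = 1.0, 25, 0.42          # illustrative design and observed mean

    def likelihood(mu):
        return norm.pdf(t, loc=mu, scale=sigma / np.sqrt(n))

    print(likelihood(0.0), likelihood(0.5), likelihood(0.5) / likelihood(0.0))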
Sorry about the length of all this -- but if it is to be presented at all
it has to be presented as a whole!
Best wishes to all,
Ted.
--------------------------------------------------------------------
E-Mail: (Ted Harding) <[log in to unmask]>
Date: 15-Jan-99 Time: 15:22:31
------------------------------ XFMail ------------------------------