At 13/03/2006, Howard Mann wrote:
>When reading a published article I look for the published estimate of the
>treatment effect on the primary outcome measure(s), and the associated 95%
>C.I., and disregard any published p-values.
>I have seen the following comment in another Forum :
>"A major reason for including p-values with confidence intervals (CI)
>arises when sample data is not normally distributed. P-values are based on a
>normal distribution, whereas CIs are calculated from the data itself. In
>certain cases, a CI that does not cross zero (for absolute risk comparisons)
>is actually *not* significant. This is easily determined if a p-value for
>the comparison exceeds the significance (alpha) level."
>What does this mean ? What do you understand by this comment ?
I agree with Ted Harding that this is nonsense.
1. There are many different ways to calculate a p-value (e.g. with and
without continuity corrections in dichotomous data, with and without
transformations, parametric and non-parametric, etc.).
2. The CI and p-value should correspond IF THE SAME underlying method
has been used to calculate both.
3. Correspondingly, if different methods are used you may get the
anomaly of, say, a CI crossing the "null" value but a significant p-value.
(It sounds as though the person who wrote this did just that, e.g.,
used an RR for the CI but then used a difference of proportions, say
with a continuity correction, to calculate the p-value. They should
be close, but occasionally just different enough to produce this
anomaly; a small numerical sketch follows below.)
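
To make point 3 concrete, here is a small Python sketch with purely
made-up counts (7/10 events vs 2/10 events; not from any real trial).
A Wald CI for the risk difference and a Yates continuity-corrected
chi-square p-value are computed from the SAME 2x2 table: the CI
excludes zero, yet the corrected p-value is about 0.07, which is
exactly the sort of discordance being described.

import math

# Hypothetical 2x2 table (NOT real data): 7/10 vs 2/10 events
a, n1 = 7, 10          # events / total in group 1
c, n2 = 2, 10          # events / total in group 2
b, d = n1 - a, n2 - c  # non-events
n = n1 + n2

# Wald 95% CI for the risk difference
p1, p2 = a / n1, c / n2
rd = p1 - p2
se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
lo, hi = rd - 1.96 * se, rd + 1.96 * se
print(f"risk difference = {rd:.2f}, 95% CI ({lo:.3f}, {hi:.3f})")  # CI excludes 0

# Chi-square test WITH Yates continuity correction on the same table
chi2 = n * (abs(a * d - b * c) - n / 2) ** 2 / (
    (a + b) * (c + d) * (a + c) * (b + d))
p = math.erfc(math.sqrt(chi2 / 2))  # upper tail of chi-square, 1 df
print(f"Yates-corrected chi-square = {chi2:.2f}, p = {p:.3f}")     # about 0.07

Had the p-value been computed on the same risk-difference scale
without the correction, the two results would have agreed.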
>Why should I bother with p-values ?
I prefer a CI, but I can also find it hard to distinguish a p of 0.03
from one of 0.001 just by looking at the CI, and I would think
differently about those two results.
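
Purely as an illustration (made-up numbers again), on a
normal-approximation scale you can back out an approximate p-value
from a 95% CI: the SE is roughly (upper - lower)/(2 x 1.96) and
z = estimate/SE. Two intervals that both exclude zero can correspond
to p of about 0.03 or about 0.001, which is why the difference is not
obvious at a glance.

import math

def p_from_ci(estimate, lower, upper):
    """Approximate two-sided p-value from a 95% CI, assuming a normal scale."""
    se = (upper - lower) / (2 * 1.96)
    z = estimate / se
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided tail area

# Hypothetical results: both CIs exclude zero, but the p-values differ a lot
print(p_from_ci(10, 1.0, 19.0))   # about 0.03
print(p_from_ci(10, 4.0, 16.0))   # about 0.001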
>University of Utah School of Medicine
Department of Primary Health Care &
Director, Centre for Evidence-Based Practice, Oxford