When reading a published article I look for the published estimate of the
treatment effect on the primary outcome measure(s) and the associated 95%
C.I., and disregard any published p-values.
I have seen the following comment in another forum:
"A major reason for including p-values with confidence intervals (CI)
arises when sample data is not normally distributed. P-values are based on a
normal distribution, whereas CIs are calculated from the data itself. In
certain cases, a CI that does not cross zero (for absolute risk comparisons)
is actually *not* significant. This is easily determined if a p-value for
the comparison exceeds the significance (alpha) level."
What does this mean? What do you understand by this comment?
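One scenario consistent with the quoted comment is that the interval and the p-value come from different procedures: a normal-approximation (Wald) confidence interval for an absolute risk difference can exclude zero while an exact test on the same 2x2 table gives p > 0.05. A minimal sketch with hypothetical trial numbers (illustrative only, not from any real study), using only the standard library:

```python
from math import comb, sqrt

# Hypothetical small trial: treatment 8/10 events, control 4/10 events.
a, b = 8, 2   # treatment: events, non-events
c, d = 4, 6   # control: events, non-events
n1, n2 = a + b, c + d

# Wald 95% CI for the absolute risk difference (normal approximation).
p1, p2 = a / n1, c / n2
rd = p1 - p2
se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
lo, hi = rd - 1.96 * se, rd + 1.96 * se

# Fisher's exact two-sided p-value: sum the hypergeometric probabilities of
# every table with the same margins that is no more likely than the observed one.
K, N, n = a + c, n1 + n2, n1          # total events, total subjects, row total
def prob(k):
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)
p_obs = prob(a)
p_value = sum(prob(k)
              for k in range(max(0, n - (N - K)), min(K, n) + 1)
              if prob(k) <= p_obs + 1e-12)

print(f"risk difference = {rd:.2f}, 95% CI ({lo:.3f}, {hi:.3f})")
print(f"Fisher exact p  = {p_value:.3f}")
```

Here the Wald interval works out to roughly (0.01, 0.79), excluding zero, while the exact p-value is about 0.17: the interval and the test are answering the same question with different approximations, which is one way a "significant-looking" CI can coexist with a non-significant p-value.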
If the C.I. is wide, I might conclude that a claim (for instance) that the
innovative intervention is superior to the control intervention lacks
credibility, and that the result is better characterized as "indeterminate."
If both ends of the 95% C.I. are on the "benefit" side of a pre-specified
"minimum clinically important difference," I'll conclude that a claim of
evidence of efficacy superiority (in this trial) is credible.
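The decision rule just described can be sketched as a small function (the threshold names and example numbers are illustrative, not from any guideline):

```python
def classify_ci(ci_low: float, ci_high: float, mcid: float) -> str:
    """Hypothetical CI-based reading of a superiority trial, per the rule above:
    both ends of the 95% CI beyond the minimum clinically important
    difference (MCID) on the benefit side -> credible superiority;
    a CI straddling the MCID -> indeterminate."""
    if ci_low >= mcid:
        return "credible superiority"
    if ci_high < mcid:
        return "no clinically important benefit"
    return "indeterminate"

# Both ends of the CI exceed an MCID of 0.5 on the benefit side:
print(classify_ci(0.6, 1.4, mcid=0.5))
# A wide CI straddling the MCID:
print(classify_ci(0.1, 1.4, mcid=0.5))
```

Note the rule never consults a p-value: the interval alone carries the estimate, its precision, and its position relative to the clinically meaningful threshold.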
Why should I bother with p-values?
University of Utah School of Medicine