On 1/7/2011 6:58 PM, Anoop Balachandran wrote:
> What is the best way to interpret the results of a study when the
> confidence intervals are not given and only the p-value is given?
This is a violation of the CONSORT guidelines.
* http://www.consort-statement.org/
There are dozens of references that also criticize the reporting of
p-values without confidence intervals. Here are a few:
* http://www.pmean.com//category/ConfidenceIntervals.html#Borenstein
* http://www.pmean.com//category/ConfidenceIntervals.html#Gardner
* http://www.pmean.com//category/ConfidenceIntervals.html#Savitz
In general, a large p-value, by itself, is ambiguous. It could represent
a negative finding, but it could also be an indication of an inadequate
sample size. Normally, the width of the confidence interval gives you an
indication of the adequacy of the sample size. See a cute joke in my
Monthly Mean newsletter about a confidence interval so wide that it
indicates an inadequate sample size.
* http://www.pmean.com/news/201004.html#12
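To see why the interval width matters, here is a minimal sketch (with entirely hypothetical numbers, not from any study discussed here): two studies can share the same nonsignificant p-value while their confidence intervals tell very different stories about sample size adequacy.

```python
def ci_95(diff, se):
    """Normal-approximation 95% confidence interval for a difference."""
    return (diff - 1.96 * se, diff + 1.96 * se)

# Both hypothetical studies have z = diff/se = 1.25, so each has a
# two-sided p-value of about 0.21 -- identical p-values, but very
# different precision.
small_study = ci_95(5.0, 4.0)    # (-2.84, 12.84): wide, inconclusive
large_study = ci_95(0.5, 0.4)    # (-0.284, 1.284): narrow, informative
```

The wide interval is consistent with both no effect and a large effect, so its large p-value says nothing definitive; the narrow interval actually rules out large effects.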
Since there is no confidence interval, there is one other thing you can
look for. Did the authors report a power calculation conducted prior to
data collection? If so, and if the inputs used in the power calculation
are not totally outrageous, then a large p-value can be taken as an
indication of an adequate sample size and of a definitive negative result.
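For readers who want to check the plausibility of a reported power calculation themselves, here is a rough normal-approximation sketch (my own illustration, with hypothetical inputs, not code from any of the references above) of the per-group sample size for comparing two means:

```python
from statistics import NormalDist

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Approximate per-group sample size for a two-sample comparison
    of means (normal approximation to the t-test)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # about 1.96
    z_beta = NormalDist().inv_cdf(power)           # about 0.84
    return 2 * ((z_alpha + z_beta) * sd / delta) ** 2

# Hypothetical inputs: detect a difference of 5 units when the
# standard deviation is 10, at 80% power and a two-sided 5% alpha.
n = n_per_group(delta=5, sd=10)   # about 63 per group after rounding up
```

If the assumed difference or standard deviation in the paper's power calculation looks wildly unrealistic, the calculation offers little reassurance about a large p-value.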
For a small p-value, things are easier. Power calculations are not as
important here. What is important is the magnitude of the difference you
observed in your sample. You need to watch out for statistical
significance without practical importance.
For a small p-value, define a range of clinical indifference and ask
yourself whether the observed difference lies inside this range. If it
does, then even with statistical significance, your findings are not
large enough to have practical importance.
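That check can be made mechanical. The sketch below (hypothetical numbers and thresholds, chosen only for illustration) compares an observed, statistically significant difference against a pre-specified range of clinical indifference:

```python
def practical_importance(diff, indiff_low, indiff_high):
    """Classify a statistically significant observed difference
    against a pre-specified range of clinical indifference."""
    if indiff_low <= diff <= indiff_high:
        return "not practically important"
    return "potentially practically important"

# Hypothetical: a drug lowers blood pressure by 1 mmHg with p = 0.01,
# but changes smaller than 3 mmHg were declared clinically irrelevant
# in advance. Statistically significant, yet inside the range of
# indifference.
verdict = practical_importance(-1.0, -3.0, 3.0)
```

The key is that the indifference range is defined before looking at the data, so the verdict is not tailored to the result.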
--
Steve Simon, Standard Disclaimer
Sign up for The Monthly Mean, the newsletter that
dares to call itself "average" at www.pmean.com/news