Thank you very much for all the useful responses. Below is the original question posted, followed by the replies. Thank you once again.
Regards.
Ly Mee
===================================================
Original question
===================================================
I have a set of data in which the mean change between the two groups is significantly different (p<0.05). But when I calculate the power it gives only 50%. How should I interpret this?
Also, can someone kindly advise as to whether it is meaningful (or pointless) to calculate the power when the result is statistically significant?
===================================================
Reply
===================================================
You have successfully demonstrated exactly what power is about! Power is
to do with the probability of demonstrating a significant result at a
pre-assigned level, given a particular effect size and variability. So if
your effect size turns out to be just large enough to give significance,
then you must be in the position that the power is 50%.
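As a quick numerical sketch of this point (a two-sided z-test approximation with made-up numbers, not your data): if the observed standardised effect sits exactly at the critical value, the power calculated from that observed effect comes out at about 50%.

```python
# Minimal sketch: observed effect exactly at the two-sided critical value
from scipy.stats import norm

alpha = 0.05
z_crit = norm.ppf(1 - alpha / 2)   # two-sided critical value, ~1.96

z_obs = z_crit                     # effect just large enough to be "significant"
observed_power = norm.sf(z_crit - z_obs) + norm.cdf(-z_crit - z_obs)
print(round(observed_power, 3))    # ~0.5
```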
Usually, when you plan a study to have a given power from prior
information (rather than from the information actually in the study, as in
your calculation), you aim for a higher power such as 80% or 90%. In such
a study, if the estimates of effect size and variance turn out to be
exactly right, then you expect to demonstrate significance at a much
higher level. You would not plan for a power of 50% if you can help it,
because that would give you only a 50:50 chance of demonstrating
significance at the level you have chosen.
I don't see much point in calculating power after the fact, except as an
exercise.
Peter Lane
++++++++++++++++++++++++++++++++++++++++++++++++++++++
Irrespective of the sample size, 1 in 20 experiments will give a significant
result by chance alone (when the significance level is 0.05). This is
the "type 1 error".
Large studies have a better chance of detecting the "true" difference
between people of group1 and group2 types in the target population -- i.e. a
large study has high "power".
Small studies with significant results include a higher proportion of
"false" estimates of the difference, COMPARED to large studies.
This uncertainty is explicitly reflected when you report the 95% CONFIDENCE
INTERVAL.
When reporting hypothesis tests, always interpret using the point estimate
and 95% CI, and summarise using the p-value, i.e. diff=0.74 (0.06 to 1.42,
p=0.03).
Your 95% CI is very wide, showing a lot of uncertainty in your estimate of
effect size. Estimating the effect size with precision is usually more
interesting than just knowing there may be a difference between the groups
(consider clinical significance with respect to the 95% CI).
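As a rough illustration of that reporting style (made-up data and a plain two-sample t-test, not your analysis):

```python
import numpy as np
from scipy import stats

# Made-up measurements for two groups (purely illustrative)
group1 = np.array([5.1, 6.3, 4.8, 7.0, 5.9, 6.4])
group2 = np.array([4.2, 5.0, 5.6, 4.1, 4.9, 5.3])

diff = group1.mean() - group2.mean()
t_res = stats.ttest_ind(group1, group2)            # two-sided, equal-variance t-test

# 95% CI for the difference in means (pooled variance)
n1, n2 = len(group1), len(group2)
sp2 = ((n1 - 1) * group1.var(ddof=1) + (n2 - 1) * group2.var(ddof=1)) / (n1 + n2 - 2)
se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
t_crit = stats.t.ppf(0.975, df=n1 + n2 - 2)
lo, hi = diff - t_crit * se, diff + t_crit * se

print(f"diff = {diff:.2f} ({lo:.2f} to {hi:.2f}, p = {t_res.pvalue:.3f})")
```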
best regards
Tri, Tat
++++++++++++++++++++++++++++++++++++++++++++++++++++++
When one performs a sample size calculation using a specified power, the
expected difference and other parameters are set in advance. The power is the
chance of finding a significant difference if it truly exists. However, this
does NOT mean that the corresponding p-value will be 0.05... it could be
.001 or .023 etc. Post-study power (what you have calculated) has limited
interpretation (and should be avoided whenever possible) - you have either
found a significant difference or not. What you have calculated is the
probability, given the observed difference (& sample size, se etc.), that
another sample will be significant if the conditions and assumptions are
identical. Any probability >= 50% will be sufficient.
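A sketch of what such a planning-stage calculation might look like (the effect size and other inputs below are assumptions chosen for illustration, using statsmodels):

```python
from statsmodels.stats.power import TTestIndPower

# Assumed planning inputs: standardised difference 0.5, alpha 0.05, power 80%
n_per_group = TTestIndPower().solve_power(effect_size=0.5,
                                          alpha=0.05,
                                          power=0.80,
                                          ratio=1.0)
print(round(n_per_group))   # roughly 64 per group
```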
Regards... Val Gebski
++++++++++++++++++++++++++++++++++++++++++++++++++++++
The concept of power really relates to the planning stage of a study,
not to the analysis. Suppose your data gives a p-value which is just
at 0.05. This suggests that if you repeat the study, sampling from
exactly the same population, then on average half the time you'll get
more extreme evidence for a difference, half the time less extreme.
In the former case it'll be significant, in the latter case not. So
a p-value at exactly your chosen alpha level equates with a power of
50%. Assessing power once you have the data really amounts to a
rescaling of the p-value that is confusing rather than enlightening.
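To illustrate that rescaling (a simple two-sided z-test approximation; the p-values below are chosen only as examples), the post-hoc "observed power" can be computed from the p-value alone:

```python
from scipy.stats import norm

alpha = 0.05
z_crit = norm.ppf(1 - alpha / 2)

for p in (0.20, 0.05, 0.01, 0.001):
    z_obs = norm.ppf(1 - p / 2)                   # observed effect implied by this p-value
    obs_power = norm.sf(z_crit - z_obs) + norm.cdf(-z_crit - z_obs)
    print(f"p = {p:<6} -> observed power = {obs_power:.2f}")
```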
Hope this helps.
Robert Newcombe.
++++++++++++++++++++++++++++++++++++++++++++++++++++++
It doesn't really make sense to retrospectively calculate power whatever the
result of the test statistic.
All of the information about the true population parameters is contained in
the data and any test statistics that are calculated or confidence intervals
that are derived.
Power is the probability of getting a value as large as or larger
than some critical value under the alternative hypothesis. The
probability of getting a value as large as or larger than what you have
observed, under the alternative hypothesis that the observed value is the
true value, is 0.5. Hence, your result.
John W. Stevens
++++++++++++++++++++++++++++++++++++++++++++++++++++++
It is not sensible to do retrospective power calculations - see below.
Doug Altman
Goodman SN, Berlin JA.
The use of predicted confidence intervals when planning experiments and the
misuse of power when interpreting results.
Ann Intern Med 1994 Aug 1;121(3):200-6
Although there is a growing understanding of the importance of statistical
power considerations when designing studies and of the value of confidence
intervals when interpreting data, confusion exists about the reverse
arrangement: the role of confidence intervals in study design and of power
in interpretation. Confidence intervals should play an important role when
setting sample size, and power should play no role once the data have been
collected, but exactly the opposite procedure is widely practiced. In this
commentary, we present the reasons why the calculation of power after a
study is over is inappropriate and how confidence intervals can be used
during both study design and study interpretation.