Dear statisticians,
I posted two questions regarding confidence intervals (CIs) a few days
ago. I received three responses (thanks to the people who contributed), which
provided me with some approaches to solving problem 2, but I did not find the
solutions offered to problem 1 adequate. The responses can be found at the end
of this letter.
So I tinkered with the problem during work, and I think there is a
plausible way to determine the CI for an arbitrarily
distributed random variable (and it is quite elegant, if my arguments are
mathematically correct). Since I can't seem to send an attachment to the list,
I have put a copy of the short derivation online for those
interested. It is available at:
http://briefcase.yahoo.com/hendrai (in the folder Docs).
The file name is determine_CI.doc (MS Word).
The main results are:
1. We cannot blindly use the normal-distribution-based CI formula to
approximate the CI for an arbitrary non-normally distributed random variable.
Fortunately, only a simple modification to the formula is needed.
2. A larger number of samples is required to get good estimates of the CI of
the mean for non-normally distributed random variables compared to normally
distributed ones.
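Result 2 can be illustrated with a quick simulation (a minimal sketch, not taken from the derivation in the document above; the distributions, sample size, and trial count are arbitrary choices): the nominal 95% normal-theory interval covers the true mean close to 95% of the time for normal data, but falls short for skewed data at the same n.

```python
import math
import random

def coverage(sampler, true_mean, n, trials=2000, z=1.96):
    # Fraction of nominal 95% normal-theory CIs (mean +/- z*s/sqrt(n))
    # that actually cover the true mean.
    hits = 0
    for _ in range(trials):
        xs = [sampler() for _ in range(n)]
        m = sum(xs) / n
        s = math.sqrt(sum((x - m) ** 2 for x in xs) / (n - 1))
        half = z * s / math.sqrt(n)
        if m - half <= true_mean <= m + half:
            hits += 1
    return hits / trials

random.seed(1)
# Normal(0,1): coverage stays close to the nominal 95% even for small n.
cov_normal = coverage(lambda: random.gauss(0, 1), 0.0, n=10)
# Exponential(1) (skewed, true mean 1): coverage falls short at the same n.
cov_skewed = coverage(lambda: random.expovariate(1.0), 1.0, n=10)
print(cov_normal, cov_skewed)
```

With a skewed parent such as the exponential, coverage at n = 10 typically runs a few points below the nominal 95%, while the normal case stays near it; increasing n closes the gap.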
Any comments would be greatly appreciated. After all, the results could be
erroneous, but if correct, hopefully they will be useful to others too.
Best wishes,
Hendra I. Nurdin
==================================================================
Response from Robert Newcombe ([log in to unmask])
===================================================================
Re question 2, if (L,U) is a CI for ybar, then simply use (g(L),
g(U)) as a CI for g(ybar). This method is termed "substitution" by
Daly (1998), who gives several examples. Usually we would expect
g(.) to be monotonic as well as continuous. If g(.) is monotonic
decreasing, then we use (g(U), g(L)) as the interval, of course.
You are right to take the view that continuity is an important
requirement! The evidence based medicine community often invert the
difference between two proportions and call it the number needed to
treat (for 1 extra case benefitting as a result of using treatment 1
instead of treatment 2). In the non-significant case the interval
for p1-p2 includes 0, i.e. L<0<U. The mathematically correct CI for
the NNT then consists of two half-infinite intervals, from 1/U to
+infinity and from -infinity to 1/L (which is negative). But I can't
accept that this is helpful to clinicians! See Newcombe (1999).
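The substitution method is mechanical enough to sketch in a few lines of Python (a minimal illustration; the example limits are made up). Note that it presumes g is monotone over the whole interval, which is exactly what fails for the NNT when L<0<U, since 1/x is not monotone on an interval containing 0.

```python
import math

def substitution_ci(lo, hi, g, increasing=True):
    # Substitution method (Daly 1998): apply g to both limits of a CI
    # for ybar to get a CI for g(ybar); swap the limits if g is
    # monotone decreasing rather than increasing.
    a, b = g(lo), g(hi)
    return (a, b) if increasing else (b, a)

# Hypothetical 95% CI on the log scale, back-transformed with exp
# (exp is increasing, so no swap is needed).
lo, hi = substitution_ci(-0.5, 0.3, math.exp)
print(lo, hi)
```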
Re question 1, constructing a CI based on an assumed Gaussian
distribution is pretty robust, because of the central limit theorem.
The most serious contraindication is when the distribution is in fact
binary, i.e. for CIs for proportions. Often we can transform to get
something reasonably Gaussian, as you suggest in (2) - the log
transformation is especially useful as many variables are close to
log-Gaussian (again because of the central limit theorem, but
applying in a multiplicative domain). If N is small and there is
seriously non-Gaussian distributional form, resampling methods can be
used. However, we then have to think carefully whether it is really
the mean that we want to estimate - it may be very far from typical
in a grossly skew distribution. Often the median is a more
meaningful measure of central tendency, and CIs for this are
available, for example in Minitab or CIA. In economic applications,
the mean has a special importance, irrespective of distributional
form, and it is probably in this instance that resampling methods are
most important.
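The resampling approach mentioned above can be sketched as a percentile bootstrap (a minimal Python sketch; the data are invented, and the repetition count and percentile method are simple default choices), giving intervals for both the mean and the median of a skewed sample:

```python
import random

def bootstrap_ci(data, stat, reps=5000, alpha=0.05, seed=0):
    # Percentile bootstrap: resample the data with replacement, compute
    # the statistic each time, and read off the alpha/2 and 1-alpha/2
    # quantiles of the bootstrap distribution.
    rng = random.Random(seed)
    n = len(data)
    boots = sorted(
        stat([rng.choice(data) for _ in range(n)]) for _ in range(reps)
    )
    return boots[int(alpha / 2 * reps)], boots[int((1 - alpha / 2) * reps) - 1]

def median(xs):
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

# Invented skewed sample: sample mean 1.84, sample median 0.8.
data = [0.1, 0.3, 0.4, 0.5, 0.7, 0.9, 1.2, 1.8, 3.5, 9.0]
mean_lo, mean_hi = bootstrap_ci(data, lambda xs: sum(xs) / len(xs))
med_lo, med_hi = bootstrap_ci(data, median)
print((mean_lo, mean_hi), (med_lo, med_hi))
```

On a grossly skewed sample like this, the interval for the mean is dragged wide by the outlying value while the interval for the median stays narrow, which illustrates the point about the median being the more typical summary.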
References.
Daly LE. Confidence limits made easy: interval estimation using a
substitution method. Am J Epidemiol 1998; 147: 783-790.
Newcombe RG. Confidence intervals for the number needed to treat -
absolute risk reduction is less likely to be misunderstood. British
Medical Journal 1999, 318, 1765.
Hope this helps.
Robert Newcombe.
..........................................
Robert G. Newcombe, PhD, CStat, Hon MFPHM
Senior Lecturer in Medical Statistics
University of Wales College of Medicine
Heath Park
Cardiff CF14 4XN, UK.
Phone 029 2074 2329 or 2311
Fax 029 2074 3664
Email [log in to unmask]
==============================================================================
Response from Ken Lakhani
==============================================================================
Hi,
1. You have heard of the Central Limit Theorem? The mean of any variate
(with a few pathological exceptions) tends to normality as the sample size
"n" increases -- and this convergence is quite fast even for moderate "n".
So, just use normal theory, provided "n" is not too small. Even for an
extremely skew distribution, if "n" is >20, then the normality assumption
for the mean will be quite good.
2. Suppose the 95% confidence limits for Y are L and U i.e. L=lower and
U=upper conf. limits.
Calculate Z(L)= g(L) and Z(U)=g(U); then Z(L) and Z(U) are the 95% FIDUCIAL
limits for Z.
Best wishes,
Ken
K.H. Lakhani
Statistical Consultant
6 Cranfleet Way
Long Eaton
Nottingham NG10 3RJ
0115 9732250
==============================================================================
Response from Andrew Robinson
==============================================================================
It depends on whether you are willing to apply the Central Limit Theorem, which
states (in essence) that: as the sample size increases, the sampling
distribution of the mean becomes more normal. This is independent of the
population from which the data were sampled, and therefore independent of the
distribution of the sample.
For some simulations, see http://www.ruf.rice.edu/~lane/stat_sim/index.html
If you are willing to apply it then the distribution of the sample is
irrelevant. If not, you might try a bootstrap (see e.g. Efron and
Tibshirani 1993).
2. If we can determine the confidence interval of the mean for a random
variable Y can we say anything about the confidence interval for the mean
of Z=g(Y) if g(.) is a known non-linear continuous function (in
particular g(x)=exp(x))?
There are two approaches: firstly, you can try the delta method, and secondly
the bootstrap again. The delta method is based on a Taylor series expansion
and is used for the transformation of random variables.
Label the transforming function g(x). Take the first and second derivatives,
g'(x) and g"(x). Then if you have a random variable X, mean = mu and variance
= s2, the following is often an acceptable approximation:
E(g(X)) = g(mu) + 0.5 * g"(mu) * s2
and
Var(g(X)) = s2 * (g'(mu))^2
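For g(x) = exp(x), where g' = g'' = exp, the delta-method approximation can be checked against the exact lognormal moments when X is normal (a small sketch; the values of mu and s2 are arbitrary choices):

```python
import math

def delta_moments(mu, s2, g, g1, g2):
    # Delta-method approximations: E[g(X)] ~ g(mu) + 0.5*g''(mu)*s2
    # and Var[g(X)] ~ s2 * g'(mu)^2, from a Taylor expansion about mu.
    return g(mu) + 0.5 * g2(mu) * s2, s2 * g1(mu) ** 2

mu, s2 = 0.0, 0.04
approx_mean, approx_var = delta_moments(mu, s2, math.exp, math.exp, math.exp)

# If X ~ Normal(mu, s2), exp(X) is lognormal with known exact moments.
exact_mean = math.exp(mu + s2 / 2)
exact_var = (math.exp(s2) - 1) * math.exp(2 * mu + s2)
print(approx_mean, exact_mean)
print(approx_var, exact_var)
```

For small s2 the two agree closely; the approximation degrades as the variance grows, which is the sense in which it is "often acceptable" rather than exact.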
Good luck!
Efron, B. and R. J. Tibshirani (1993). An Introduction to the Bootstrap.
Chapman and Hall.
Andrew
Andrew Robinson Phone: 208-885-7115
Department of Forest Resources Fax: 208-885-6226
University of Idaho E: [log in to unmask]
Po Box 441133 WWW: http://www.uidaho.edu/~andrewr
Moscow, ID 83843 and: http://www.biometrics.uidaho.edu/
No statement above necessarily reflects the opinion of my employer