All,
Could anyone provide a proof, or explain why, for a given distribution
(normal or otherwise) whose population parameters have been estimated
from samples, the confidence interval for an estimate of the median
(50th percentile) or mean is narrower than for values towards the
extremes, such as the 90th or 10th percentile? I assume it is because
the mean is usually one of the population parameters we use to describe
the distribution (or can be readily inferred from the parameter(s) we do
calculate), and hence is bound to be our best point estimate, whilst as
we move away from the 50th percentile we cannot be sure that the
calculated variance is the true variance of the population, and
therefore greater uncertainty creeps in.
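
For what it's worth, here is a minimal simulation sketch that reproduces
the effect I am describing; the standard normal, the sample size n, and
the replication count are all just assumptions for illustration:

import numpy as np

# Repeatedly draw samples from a standard normal (assumed example
# distribution) and compare the sampling spread of the estimated
# 10th, 50th and 90th percentiles.
rng = np.random.default_rng(0)
n, reps = 100, 10_000  # sample size and number of replications (assumed)

samples = rng.standard_normal((reps, n))
for p in (10, 50, 90):
    estimates = np.percentile(samples, p, axis=1)   # one estimate per sample
    lo, hi = np.percentile(estimates, [2.5, 97.5])  # empirical 95% spread
    print(f"{p}th percentile: 95% sampling-interval width = {hi - lo:.3f}")

The printed width for the median should come out noticeably narrower
than for the 10th and 90th percentiles, which is the pattern I am
trying to understand.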
Thanks for any help
Alan