I'm not convinced that agonising over a precise technical definition of a
CI is helpful to students in the implied context (John Sorkin is, I
assume, teaching an introduction to statistics for medical students). My
experience is that too many graduates of such courses remember only that
you must always choose "the right test" and will then obtain a "precise
arithmetical answer". Thought experiments about alternative
hypothetical samples may not help, especially when studying a rare
disease or condition.
As a simple example, the emission standard for cars is a constant. Does
brand X meet the standard? Test a single car of that brand and it
passes the test - is that sufficient? I suggest we *all* work within a
quasi-Bayesian framework: making the measurement implies a prior belief
that it will/will not pass, and the evidence persuades us to
endorse or change that view. Test more cars and you get a distribution
of readings for the parameter. Fortunately, under a very wide range of
conditions, a CI constructed from such a sample will contain the true
mean with a given probability - but stress that 90/95/99% is an
arbitrary choice. The CI therefore gives an objective measure to aid
the decision whether to accept that brand X meets the standard
generally, even if a few examples fail.
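The coverage claim above is easy to demonstrate by simulation. A minimal
sketch (all numbers hypothetical - the true mean, spread, and sample size
are invented for illustration, not taken from any real emission data):
repeatedly draw samples of emission readings, build a 95% t-interval for
the mean from each, and count how often the interval contains the true mean.

```python
import random
import statistics
import math

random.seed(1)

TRUE_MEAN = 0.08   # hypothetical true mean emission for brand X
SD = 0.01          # hypothetical car-to-car spread
N = 10             # cars tested per hypothetical study
TRIALS = 5000      # number of repeated hypothetical studies
T_CRIT = 2.262     # t critical value for a 95% CI with N-1 = 9 d.f.

covered = 0
for _ in range(TRIALS):
    # one hypothetical study: test N cars
    sample = [random.gauss(TRUE_MEAN, SD) for _ in range(N)]
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / math.sqrt(N)
    lo, hi = m - T_CRIT * se, m + T_CRIT * se
    # did this study's interval contain the true mean?
    if lo <= TRUE_MEAN <= hi:
        covered += 1

print(f"Empirical coverage: {covered / TRIALS:.3f}")
```

The printed coverage comes out close to the nominal 0.95 - and swapping in
2.262 for, say, the 90% or 99% critical value shifts it accordingly, which
makes the arbitrariness of the 90/95/99% choice concrete for students.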
The calculations do rely upon assumptions: that the sample is
representative of brand X cars, and that the measurement is relevant to
the parameter of interest. There is always a circularity: you accept
the answer because of the assumptions, but accept the assumptions
because you get an answer. Teach students to examine and question the
assumptions.
Posted before the moderator steps in!
Allan