On 31-Mar-05 John Whittington wrote:
> At 12:51 31/03/05 +0100, Macfarlane, Alison wrote:
> [...]
> [para 1]:
> I'm obviously missing something here. As I mentioned,
> my understanding is that there is a big problem in
> relation to attempts at either inference or estimation
> when one is dealing with (as much as one can get of)
> a 'whole population' - since both inference and estimation
> are all about sampling error.
> [...]
> [para 4]:
> Thinking aloud, I imagine that one could in theory use
> Time Series techniques to attempt to separate inherent
> 'noise' (random year-to-year variation) from long-term
> trends, and thereby perhaps get some handle on the
> 'significance' of an actual year-to-year change (e.g.
> from 22.0 to 22.7) in relation to that random 'noise' level.
> However, I suspect that would in practice be a very difficult
> exercise, since the underlying trends are not going to be
> simple and straightforward (being influenced by all sorts
> of factors - medical, sociological, political etc.) and
> therefore might be very difficult to model (i.e. might
> end up being confounded with random variation/'noise').
>
> Maybe someone can help me understand all this?
I think you already do, John! Clearly, the results for
a whole population are simply a statement of fact, with
no sampling to inject the randomness required to underlie
probabilistic statements of uncertainty, just as you state
in your para 1 above.
Again, you're right in para 4 about the theoretical possibility
of using Time Series methods if you have observed the whole
population at several time points. But these can only work
if there is a structural model for the time series with
respect to which deviations, to be assigned to "random noise",
can be derived. The thorny question then is: on what basis
can any specific generic model (e.g. trend + autoregressive)
be assumed? Where is the evidence for it?
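To make the issue concrete, here is a minimal sketch (in
Python, with invented rates - emphatically NOT the bulletin's
figures) of the "trend + noise" decomposition I have in mind:
fit a linear trend by least squares and examine the residuals
for serial correlation. Every choice in it (linear trend, ten
years, the numbers themselves) is an assumption made purely
for illustration.

import numpy as np

# Hypothetical annual Caesarean rates (%), invented for
# illustration -- NOT the published figures under discussion.
years = np.arange(1995, 2005)
rates = np.array([19.7, 20.1, 20.6, 21.0, 21.2,
                  21.6, 21.9, 22.0, 22.0, 22.7])

# Ordinary least-squares linear trend.
slope, intercept = np.polyfit(years, rates, 1)
residuals = rates - (intercept + slope * years)

# Residual SD: the candidate "noise level" for judging a single
# year-to-year change (ddof=2 for the two fitted parameters).
noise_sd = residuals.std(ddof=2)

# Lag-1 autocorrelation of the residuals: if appreciable, the
# "trend + white noise" model is already inadequate.
r1 = np.corrcoef(residuals[:-1], residuals[1:])[0, 1]

print(f"trend {slope:.3f} %/yr, noise SD {noise_sd:.3f}, "
      f"lag-1 r {r1:.3f}")

The residual SD is the "noise level" against which a
year-to-year change of 0.7 would be judged - but only on the
unverifiable assumption that the linear-trend model was right
in the first place.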
If a network of causal mechanisms has been objectively
identified, which would drive the data deterministically in
the absence of noise, then you can start along these lines
(think of the flight of an arrow, subject to aerodynamic
forces and gravity but flying though turbulent air).
Even so, there needs to be an assumption that the perturbations
themselves obey some stability or stationarity so that one
can assume some constancy of statistical properties.
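(By "stationarity" I mean at least weak stationarity:
E[X_t] = mu for all t, and Cov(X_t, X_{t+h}) = gamma(h),
a function of the lag h only. Without at least that much,
the noise level estimated from one stretch of years tells
you nothing about another.)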
But with the sort of data under discussion, what are the
forces? Mostly unobserved, and unlikely themselves to behave
in a way that would justify such assumptions.
Even if you disaggregate the Caesarean data by RHA or PCT
or whatever, there are likely to be enough covariates to
undermine the attempt to identify "noise".
Reading over this, I seem to be saying the same as you!
Since Alison says there is a confidence interval of (0.5,0.9)
for the 0.7, this must have been calculated according to
some procedure for which assumptions can be identified,
so maybe I should now go and read up on what is said about this
(if anything) in Alison's reference for the bulletin in her
first mail.
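One guess at such a procedure (and it is only a guess - the
bulletin may have done something quite different) is the usual
normal approximation for the difference of two binomial
proportions, treating each year's deliveries as a "sample".
In rough Python, with invented round denominators of about the
right order for annual deliveries in England:

from math import sqrt

# Invented denominators (deliveries per year) -- NOT the
# actual maternity counts behind the bulletin's figures.
n1 = n2 = 600_000
p1, p2 = 0.220, 0.227          # rates of 22.0% and 22.7%

diff = p2 - p1
se = sqrt(p1*(1 - p1)/n1 + p2*(1 - p2)/n2)

# 95% interval for the change, in percentage points.
lo, hi = (diff - 1.96*se)*100, (diff + 1.96*se)*100
print(f"change {diff*100:.1f} pp, 95% CI ({lo:.2f}, {hi:.2f})")

Whether a binomial "sampling" model is even meaningful for a
complete enumeration is, of course, precisely the question at
issue.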
Cheers,
Ted.
--------------------------------------------------------------------
E-Mail: (Ted Harding) <[log in to unmask]>
Fax-to-email: +44 (0)870 094 0861
Date: 31-Mar-05 Time: 19:05:47
------------------------------ XFMail ------------------------------