Thank you very much for the response. It seems that this is exactly what I
needed. However, there is one point that is unclear to me. This page says:
From each analysis, one must first calculate and save the estimates and standard
errors. Suppose that Qj is an estimate of a scalar quantity of interest (e.g. a
regression coefficient) obtained from data set j (j=1,2,...,m) and Uj is the
standard error associated with Qj.
My problem is that I'm not sure what exactly the author means by "the
standard error associated with Qj", because, according to
http://mathworld.wolfram.com/StandardError.html :
There appear to be two different definitions of the standard error.
The standard error of a sample of sample size n is the sample's standard
deviation divided by sqrt(n). It therefore estimates the standard deviation of
the sample mean based on the population mean (Press et al. 1992, p. 465). Note
that while this definition makes no reference to a normal distribution, many
uses of this quantity implicitly assume such a distribution.
The standard error of an estimate may also be defined as the square root of the
estimated error variance sigma^2 of the quantity,
s_e = sqrt(sigma^2)
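As a side note (my own illustration, not from that page): for the sample mean, definition 1 is a special case of definition 2, since the estimated variance of the sample mean is s^2/n. A quick numerical check:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=50)
n = len(x)

# Definition 1: the sample's standard deviation divided by sqrt(n)
se1 = x.std(ddof=1) / np.sqrt(n)

# Definition 2: square root of the estimated variance of the estimator;
# for the sample mean that estimated variance is s^2 / n, so the two agree.
se2 = np.sqrt(x.var(ddof=1) / n)
```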
Do you know which of the two definitions your link uses?
Thank you
________________________________
From: Sotiris Adamakis <[log in to unmask]>
To: Jonathan James <[log in to unmask]>
Sent: Wed, July 21, 2010 12:47:14 AM
Subject: Re: Combining variances
Hi Jonathan,
You need to combine the between and the within variances. An example of how to
do that in multiple imputation can be found here:
http://www.stat.psu.edu/~jls/mifaq.html#howto
You need a similar idea.
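For concreteness, a minimal sketch of the combining rule described at that link (Rubin's rules); the function and variable names are my own:

```python
import numpy as np

def combine_rubin(estimates, std_errors):
    """Combine m per-dataset estimates Q_j and standard errors U_j
    using the multiple-imputation combining rules (Rubin's rules)."""
    Q = np.asarray(estimates, dtype=float)
    U = np.asarray(std_errors, dtype=float)
    m = len(Q)
    Q_bar = Q.mean()               # overall point estimate
    U_bar = (U ** 2).mean()        # within-imputation variance
    B = Q.var(ddof=1)              # between-imputation variance
    T = U_bar + (1 + 1 / m) * B    # total variance of the estimate
    return Q_bar, np.sqrt(T)       # combined estimate and its standard error
```

When the per-dataset estimates are identical, B is zero and the combined standard error reduces to the common U_j, as expected.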
Regards,
Sotiris
Dr. Sotiris Adamakis
Senior Statistician - Ipsos MORI
T +44 20 7347 3828
[log in to unmask]
www.ipsos-mori.com
79-81 Borough Road, London, SE1 1FY
If we torture the data long enough, in the end they will confess, and the
confession will usually be wrong.
--- On Mon, 19/07/10, Jonathan James <[log in to unmask]>
wrote:
>From: Jonathan James <[log in to unmask]>
>Subject: Combining variances
>To: [log in to unmask]
>Date: Monday, 19 July 2010, 13:57
>
>
>Sorry if you received this mail without a subject.
>Also, I would like to mention that I originally posted this question on
>http://mathoverflow.net/questions/32365/combining-variances but, due to the lack
>of answers, re-posted it here. I will keep both allstat and mathoverflow.net
>updated with the responses I get.
>
>My original question is
>
>
>>I have a set of N bodies, which is a random sample from a population whose mean
>>and variance I want to estimate.
>>
>>A property of each body is measured m_i times (m_i > 1, and different for each
>>body; the index i identifies which body it is; the property is expected to be
>>distributed around zero). I would like to describe the resulting measurements.
>>
>>Particularly I'm interested in average property value and in the variance.
>
>>
>>The average value is simple. First calculate the mean values for each body and
>>then calculate the mean of means.
>>
>>The variance is more tricky. There are two variances: the variance of
>>measurement and the variance of property values. In order to have an idea of
>>the confidence we have in any single measurement, we need to account for both
>>sources. Unfortunately, I can't think of a good method.
>>
>>It is obvious that putting all the numbers in a single pool and calculating the
>>stdev of this pool isn't a good idea.
>
>Any suggestion?
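One way to make the two components concrete is a one-way random-effects decomposition. A sketch under my own assumptions (a balanced design, i.e. every body measured the same number of times; the names are mine):

```python
import numpy as np

def between_within(measurements):
    """Split repeated measurements into within-body (measurement) variance
    and between-body (property) variance via a one-way random-effects
    decomposition. Expects an array of shape (N, m): N bodies, each
    measured m times (balanced design assumed for simplicity)."""
    X = np.asarray(measurements, dtype=float)
    N, m = X.shape
    body_means = X.mean(axis=1)
    within_var = X.var(axis=1, ddof=1).mean()    # measurement variance
    # The variance of the body means contains a within_var/m component;
    # subtract it to estimate the between-body (property) variance.
    between_var = max(body_means.var(ddof=1) - within_var / m, 0.0)
    grand_mean = body_means.mean()               # the "mean of means"
    return grand_mean, between_var, within_var
```

The max(..., 0.0) guards against a negative variance estimate, which the method-of-moments subtraction can otherwise produce in small samples.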
>
>
>
>You may leave the list at any time by sending the command
>
>SIGNOFF allstat
>
>to [log in to unmask], leaving the subject line blank.
>
>