Dear all,
In the area of statistical process control, I am trying to model
historical in-control data (say 100 observations) that are clearly
multi-modal using a finite mixture of Gaussians. To determine, for
example, a 95% confidence bound for detecting deviations from the
nominal data, I draw (say 10000) Monte Carlo samples from the fitted
mixture distribution and then use the relevant sample quantile to
estimate the bound. Clearly, using the quantile of the original 100
data points to estimate a 95% bound is unstable and sensitive to small
changes in the data.
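To make the procedure concrete, here is a minimal sketch of the Monte Carlo step. The mixture parameters (weights, means, standard deviations) below are hypothetical stand-ins for whatever the EM fit to the 100 observations actually produced; only the sampling-then-quantile logic is the point.

```python
import numpy as np

# Hypothetical mixture parameters, standing in for those estimated
# from the 100 in-control observations (e.g. by EM).
weights = np.array([0.6, 0.4])   # mixing proportions
means   = np.array([0.0, 5.0])   # component means
sds     = np.array([1.0, 1.5])   # component standard deviations

rng = np.random.default_rng(42)
n = 10_000

# Draw a component label for each Monte Carlo sample, then draw
# from that component's Gaussian.
labels = rng.choice(len(weights), size=n, p=weights)
samples = rng.normal(means[labels], sds[labels])

# Monte Carlo estimate of the 95% quantile of the fitted mixture.
upper_bound = np.quantile(samples, 0.95)
print(upper_bound)
```

Note that since the CDF of a Gaussian mixture is just the weighted sum of normal CDFs, the same quantile could also be obtained exactly by root-finding on the mixture CDF, without any Monte Carlo error.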
Now I have been challenged by a colleague, who argues that "this will
increase the nominal sample size and thus unduly overestimate the
precision of your estimate" and asks for statistical justification.
Can anyone share their thoughts on this, or direct me to references
discussing this issue?
Thank you for your attention.
Sincerely,
Tao Chen