Dear all (please reply to sender, [log in to unmask]):
Here is a question about a simulation study that I am not sure how to handle.
In order to evaluate the power of "regression diagnostics" methods
aimed at overcoming the "masking effect" when multiple outliers
occur, 40 "good" points, for example, are generated from a linear
regression model, and then 20 contaminated points from another
distribution are added. To measure the efficiency of a diagnostic
method, with the nominal level of the detection test set at 0.05, I
would use the proportion of simulation runs (say, 500 or 1000
repetitions) in which at least 95% of the outliers (that is, 19 points
in the situation above) are correctly identified. Is this procedure
appropriate? If it is, then at the same level, with 20 good points and
5 bad ones (a lightly contaminated sample), how many points would have
to be correctly captured to achieve the same efficiency?
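
To make the setup concrete, here is a minimal sketch in Python of the
procedure described above. The contamination scheme (a vertical shift
in y) and the diagnostic step (a naive standardized-residual cutoff,
exactly the kind of rule that masking defeats) are placeholders of my
own choosing; the actual method under study would replace that step.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def one_run(n_good=40, n_bad=20, alpha=0.05, shift=10.0):
    """One replicate: generate data, flag outliers, count true detections."""
    # "Good" points from a simple linear model y = 1 + 2x + e, e ~ N(0, 1).
    x_good = rng.uniform(0, 10, n_good)
    y_good = 1 + 2 * x_good + rng.normal(0, 1, n_good)
    # Contaminated points: same x range, shifted upward in y
    # (an assumed contamination scheme, purely for illustration).
    x_bad = rng.uniform(0, 10, n_bad)
    y_bad = 1 + 2 * x_bad + shift + rng.normal(0, 1, n_bad)
    x = np.concatenate([x_good, x_bad])
    y = np.concatenate([y_good, y_bad])
    is_outlier = np.concatenate([np.zeros(n_good, bool), np.ones(n_bad, bool)])
    # Placeholder diagnostic: fit OLS, then flag points whose absolute
    # standardized residual exceeds the normal (1 - alpha/2) quantile.
    # A masking-resistant diagnostic would replace this step.
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    flagged = np.abs(resid / resid.std(ddof=2)) > norm.ppf(1 - alpha / 2)
    return int((flagged & is_outlier).sum())  # true outliers detected

n_rep = 1000
hits = np.array([one_run() for _ in range(n_rep)])
# Proportion of replicates in which at least 19 of the 20 outliers
# (i.e. 95%) are correctly identified -- the proposed power figure.
print("success rate:", np.mean(hits >= 19))
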
Hints, references on the problem above, or advice on how to analyse
simulation results are all welcome.
I'll post a summary of replies.
Thanks in advance for your time and consideration.
Best regards,
WangTong mailto:[log in to unmask]
*********************************************
Wang Tong
Department of Health Statistics
The School of Public Health
ShanXi Medical University
South Xinjian Road 86#
Taiyuan City, ShanXi Province, P.R. China
Tel: 086-0351-4135049
Fax: 086-0351-2027943
E-mail: [log in to unmask]
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%