Dear All,
I am working on a relatively large data set (N = 380) and am using the probit
estimator, as the response variable is binary. Once the coefficients
have been estimated, this exercise will use the probit model for
prediction.
In order to check the predictive validity of the estimated model,
I intend to apply a double cross-validation technique. This simply
means that the sample is split into two parts, S1 and S2, and each
of these sub-samples is employed as a validation sample for an
estimation performed on the other.
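In case a concrete sketch helps, the split-and-swap procedure could be coded as below. Everything here is illustrative: the simulated data, the single regressor, and the Fisher-scoring probit fit are stand-ins for your actual data and model, not a claim about them.

```python
import math
import random

def norm_pdf(z):
    # standard normal density
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def norm_cdf(z):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def fit_probit(xs, ys, iters=25):
    """Fit P(y=1) = Phi(b0 + b1*x) by Fisher scoring (one regressor for brevity)."""
    b0 = b1 = 0.0
    for _ in range(iters):
        g0 = g1 = 0.0            # score vector
        h00 = h01 = h11 = 0.0    # expected information matrix
        for x, y in zip(xs, ys):
            eta = b0 + b1 * x
            p = min(max(norm_cdf(eta), 1e-10), 1.0 - 1e-10)
            d = norm_pdf(eta)
            r = d * (y - p) / (p * (1.0 - p))   # score contribution
            w = d * d / (p * (1.0 - p))         # information weight
            g0 += r
            g1 += r * x
            h00 += w
            h01 += w * x
            h11 += w * x * x
        det = h00 * h11 - h01 * h01             # 2x2 inversion by hand
        b0 += (h11 * g0 - h01 * g1) / det       # Newton/scoring step
        b1 += (h00 * g1 - h01 * g0) / det
    return (b0, b1)

def error_rate(b, xs_v, ys_v, cutoff=0.5):
    """Out-of-sample misclassification rate at the given probability cutoff."""
    wrong = sum(int((norm_cdf(b[0] + b[1] * x) > cutoff) != (y == 1))
                for x, y in zip(xs_v, ys_v))
    return wrong / len(ys_v)

# --- simulated stand-in for the real data (N = 380, one regressor) ---
random.seed(1)
n = 380
xs = [random.gauss(0.0, 1.0) for _ in range(n)]
ys = [int(0.5 + x + random.gauss(0.0, 1.0) > 0.0) for x in xs]  # true b = (0.5, 1)

# --- random split into S1 and S2; estimate on one half, validate on the other ---
idx = list(range(n))
random.shuffle(idx)
s1, s2 = idx[:n // 2], idx[n // 2:]
x1, y1 = [xs[i] for i in s1], [ys[i] for i in s1]
x2, y2 = [xs[i] for i in s2], [ys[i] for i in s2]

b_1 = fit_probit(x1, y1)   # estimated on S1, validated on S2
b_2 = fit_probit(x2, y2)   # estimated on S2, validated on S1
print("S1 fit:", b_1, "-> error on S2:", error_rate(b_1, x2, y2))
print("S2 fit:", b_2, "-> error on S1:", error_rate(b_2, x1, y1))
```

Averaging the two out-of-sample error rates then gives a single double-cross-validation estimate of predictive performance.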
Whilst this may sound straightforward, I have some difficulty with how
I would carry out two tasks within this method.
1. How would I assess the stability of the coefficients between the two
   sub-samples? That is, what sort of parametric (or even
   non-parametric) statistical test do I need to carry out? Does the
   check involve the LR test, the score test, or something else?
2. How would I examine the (mis)classification error rate in the
   probit model, where the regressand is an indicator? In other
   words, what kind of checking procedures should I apply to
   investigate the predictive validity of the model?
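On point 2, the usual procedure is to classify each validation observation as 1 when the fitted probability Phi(x'b) exceeds a cutoff (0.5 is the conventional choice), tabulate the resulting confusion matrix against the observed outcomes, and report the misclassification rate. A small sketch, with entirely made-up fitted probabilities:

```python
# Hypothetical fitted probabilities from the probit estimated on one
# sub-sample, evaluated on the other (placeholder numbers):
p_hat = [0.81, 0.34, 0.62, 0.12, 0.55, 0.48, 0.91, 0.27]
y_obs = [1,    0,    1,    0,    0,    1,    1,    0]

cutoff = 0.5                                   # predict 1 when Phi(x'b) > cutoff
y_pred = [1 if p > cutoff else 0 for p in p_hat]

tp = sum(1 for p, y in zip(y_pred, y_obs) if p == 1 and y == 1)  # true positives
tn = sum(1 for p, y in zip(y_pred, y_obs) if p == 0 and y == 0)  # true negatives
fp = sum(1 for p, y in zip(y_pred, y_obs) if p == 1 and y == 0)  # false positives
fn = sum(1 for p, y in zip(y_pred, y_obs) if p == 0 and y == 1)  # false negatives

error_rate = (fp + fn) / len(y_obs)
print("confusion matrix: TP", tp, "TN", tn, "FP", fp, "FN", fn)
print("misclassification rate:", error_rate)
```

Comparing this out-of-sample rate with the in-sample rate, and with the naive rule that always predicts the more frequent outcome, gives a rough check on predictive validity; varying the cutoff shows the trade-off between the two error types.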
I would be grateful if you would offer me your advice on what to do
regarding the above points, as well as suggest any references,
empirical or theoretical.
Thank you very much indeed for taking the time to consider my
request. I look forward to hearing from you.
Yours sincerely
Panos Papanikolaou
UWCM
Cardiff
Great Britain