Hello all,
I am interested in the following. I have a simple linear regression
problem, Y = a + bX + e, where e is an error term. I know that the
true slope lies between 0 and 1, but the regression can give me a
slope estimate that is negative or greater than 1. So I truncate the
estimate: if I get a negative value for the slope, I use 0, and if I
get a value greater than 1, I use 1.
My question is this: are there any papers that give a proof for this
type of truncation of the slope? I am looking for a proof that the
truncated slope estimator I am using actually converges to the true slope.
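For concreteness, here is a minimal Python sketch of the estimator I mean (the intercept, true slope, and sample size are made up purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data; a true slope of 0.4 is assumed for illustration only.
n = 1000
x = rng.normal(size=n)
y = 2.0 + 0.4 * x + rng.normal(size=n)

# Ordinary least-squares slope estimate.
b_hat = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)

# Truncate (clip) the estimate to the known range [0, 1].
b_trunc = min(max(b_hat, 0.0), 1.0)
print(b_trunc)
```

My intuition is that, because clipping to [0, 1] is a continuous map and the true slope lies in that interval, consistency of the OLS slope should carry over to the clipped version, but I would like a paper I can cite.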
--
Thanks,
Jim.