On 4 July 2011 12:23, Dan <[log in to unmask]> wrote:
> Thanks Jeremy, I thought it might be something to do with that, though I wasn't aware that was the name for it. I find it worrying that SPSS does not tell you this and instead just throws out highly significant, large coefficients. (Isn't that what tests of significance are supposed to be about in the first place - that we can't be sure because our sample size is too small?)
>
Hi Dan
I don't know of any software that gives you a warning. You asked SPSS
to calculate the estimates and p-value, and it did. Are you also
looking at standard errors? (I forget if they're an option or a
default in SPSS.) Ridiculous standard errors are often the first
sign of separation, and yours are likely to be ridiculous.
It's not that SPSS isn't sure - SPSS is very sure. And it's telling
you the answer.
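(For anyone following along: here is a minimal sketch of why separation blows up the estimates. The data and the plain gradient-ascent fit are mine, not anything SPSS does internally - just a toy illustration. With a perfectly separated predictor, the log-likelihood keeps increasing as the coefficient heads off to infinity, so the "estimate" simply grows the longer you let the fit run, and the corresponding standard errors are correspondingly enormous.)

```python
import math

# Toy perfectly separated data: y = 1 exactly when x > 0.
xs = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
ys = [0, 0, 0, 1, 1, 1]

def fit(steps, lr=0.5):
    """Gradient ascent on the logistic log-likelihood (no intercept)."""
    b = 0.0
    for _ in range(steps):
        # Gradient: sum of (y_i - p_i) * x_i, where p_i = 1/(1+exp(-b*x_i))
        grad = sum((y - 1 / (1 + math.exp(-b * x))) * x
                   for x, y in zip(xs, ys))
        b += lr * grad
    return b

# Under separation the likelihood increases without bound as b -> infinity,
# so the estimate never converges - it just keeps growing with iterations.
print(fit(100), fit(10000))
```

Real software stops after a fixed number of iterations and reports whatever huge coefficient it has reached at that point, along with a huge standard error - which is exactly the symptom to look for in the SPSS output.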
> The thing that I find difficult is deciding an arbitrary cut-off point. I have large numbers of models in the chapter I'm currently working on, some with 60, 70, 80, 100 odd in the smallest binary category (same predictors for all). They mostly confirm what I've predicted, so it seems a shame to throw them out. Yet at the same time, an examiner is probably going to flag up this quasi-complete separation problem. I figure the only sensible solution is to pick some kind of semi-arbitrary size and bite the bullet.
>
My advice (which might be rubbish) would be not to choose an arbitrary
point, but to present them all and say that there are interpretation
difficulties because of separation issues. If you don't show them, it
looks suspicious - like you've something to hide.
And an aside. Students _always_ worry that examiners are going to
pick them up on their statistics. Most examiners are not super
confident with their statistics, and don't find it interesting. They
are much more likely to raise other issues, in which they are both
confident and interested. (Note: that's not going to be true if
I'm your examiner.)
Jeremy