But you noticed, so in a sense it was flagged.  :)  In the same way, it
doesn't flag significant results - it expects you to interpret them.

Methods that automatically flagged things would probably irritate me -
because I want to decide whether I think it's a problem, I don't want
the computer to decide.

No one usually complains about their Wald statistics being too large.
What were the 95% CIs of your exp(B)s?  If they are silly, that's
problematic - and the same goes for the exp(B)s themselves.
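
The CI SPSS prints for Exp(B) is just the Wald interval exponentiated,
exp(B +/- 1.96*SE), so a silly interval traces straight back to an
inflated standard error.  A quick Python sketch with made-up numbers
(not anyone's actual output):

    import math

    # Illustrative values only - a coefficient with the kind of
    # inflated standard error that separation produces.
    B, SE = 6.9, 18.2

    lower = math.exp(B - 1.96 * SE)
    upper = math.exp(B + 1.96 * SE)
    print(f"exp(B) = {math.exp(B):.0f}, 95% CI {lower:.2g} to {upper:.2g}")
    # exp(B) is about 1000, with a CI running from essentially zero to
    # an astronomically large number - the kind of interval that gives
    # the game away.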

If your exp(B) (or odds ratio) is 1000, and your sample size is 200,
then you might write in your report "Men were 1000 times more likely to
do X than women".  I'd ask how that could be, if your sample size was
that small.
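
On the warning you wish SPSS would give (below): one crude self-check
is to cross-tabulate each categorical predictor against the outcome and
look for empty cells, since zero counts are exactly what produce
separation.  A minimal Python/pandas sketch, with made-up data standing
in for your file:

    import pandas as pd

    # Hypothetical data frame standing in for the real file; 'outcome'
    # is the binary DV and 'sex' one of the categorical IVs.
    df = pd.DataFrame({
        "sex":     ["m"] * 20 + ["f"] * 100,
        "outcome": [1] * 20 + [1] * 40 + [0] * 60,
    })

    for iv in ["sex"]:                      # loop over categorical IVs
        tab = pd.crosstab(df[iv], df["outcome"])
        print(tab, "\n")
        if (tab == 0).any().any():
            print(f"Warning: '{iv}' has an empty cell - expect "
                  f"separation problems with this predictor.\n")

Here every man in the made-up sample did X, so the men/no-X cell is
empty - quasi-complete separation on sex, and the source of a silly
exp(B).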



On 4 July 2011 13:15, Dan <[log in to unmask]> wrote:
> On 4 July 2011 12:23, Dan <[log in to unmask]> wrote:
>> Thanks Jeremy, I thought it might be something to do with that, though I wasn't aware that was the name for it.  I find it worrying that SPSS does not tell you this and instead just throws out highly significant, large coefficients (isn't that what tests of significance are supposed to be about in the first place, that we can't be sure because our sample size is too small?).
>>
>
>
>
> Hi Dan
>
> I don't know of any software that gives you a warning.  You asked SPSS
> to calculate the estimates and p-value, and it did.  Are you also
> looking at standard errors?  (I forget if they're an option or a
> default in SPSS).  This is often the first sign of it.  Your standard
> errors are likely to be ridiculous.
>
> It's not that SPSS isn't sure - SPSS is very sure.  And it's telling
> you the answer.
>
>
>> The thing I find difficult is deciding an arbitrary cut-off point.  I have large numbers of models in the chapter I'm currently working on, some with 60, 70, 80, 100-odd in the smallest binary category (same predictors for all).  They mostly confirm what I've predicted, so it seems a shame to throw them out.  Yet at the same time, an examiner is probably going to flag up this quasi-complete separation problem.  I figure the only sensible solution is to pick some kind of semi-arbitrary size and bite the bullet.
>>
>
>
> My advice (which might be rubbish) would be not to choose an arbitrary
> point, but to present them all and say that there are interpretation
> difficulties because of separation issues.  If you don't show them, it
> looks suspicious - like you've something to hide.
>
> And an aside.  Students _always_ worry that examiners are going to
> pick them up on their statistics.  Most examiners are not super
> confident with their statistics, and don't find it interesting.  They
> are much more likely to raise other issues, which they are both
> confident and interested in.  (Note: that's not going to be true if
> I'm your examiner.)
>
> Jeremy
>
> haha, all psych-postgrads readers take note.
>
> What I mean to say is, shouldn't SPSS say something to the effect of 'some of the subgroups according to the combination of IVs in your model are very small or non-existent, therefore the large and significant coefficients I'm giving you are not to be trusted'?  Or, I guess more simply: 'You have quasi-complete separation issues.'
>
> I have read that the standard errors will be very large with this issue, but that doesn't seem to apply in my case (yes, they are given by default).  It seems to be the Wald statistic that is ridiculously big for some coefficients, as big as 30-odd, which is way off the other values.  So I'm not sure what's going on here.
>
> I think the idea of presenting them all and flagging interpretation issues is a good one.  Something I'm much more comfortable with than an arbitrary point.
>
> Many thanks
> Dan
>
>
>
>
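
Re the standard errors point above: a minimal sketch (plain numpy,
made-up data, not a re-creation of SPSS's routine) of why separation
makes the printed B, SE and exp(B) untrustworthy - the maximum-
likelihood fit never settles, so the longer the algorithm iterates, the
bigger the numbers get:

    import numpy as np

    # Hypothetical sample: sex (0 = women, 1 = men) vs a binary outcome.
    # Every man did X, so sex quasi-completely separates the outcome.
    x = np.array([0] * 100 + [1] * 20, dtype=float)
    y = np.array([0] * 60 + [1] * 40 + [1] * 20, dtype=float)

    X = np.column_stack([np.ones_like(x), x])   # intercept + sex
    beta = np.zeros(2)

    for it in range(1, 26):                     # Newton-Raphson steps
        p = 1.0 / (1.0 + np.exp(-X @ beta))     # fitted probabilities
        W = p * (1.0 - p)                       # weights
        info = X.T @ (W[:, None] * X)           # information matrix
        beta = beta + np.linalg.solve(info, X.T @ (y - p))
        se = np.sqrt(np.diag(np.linalg.inv(info)))
        if it % 5 == 0:
            print(f"iter {it:2d}: B = {beta[1]:6.2f}, "
                  f"SE = {se[1]:12.2f}, exp(B) = {np.exp(beta[1]):.3g}")

    # B and SE for 'sex' keep climbing with every iteration: the
    # best-fitting slope is infinite, so whatever gets printed is just
    # wherever the algorithm happened to stop.

(And since the Wald statistic for a single coefficient is just
(B/SE)^2, an exploding SE tends to drag the Wald down rather than up -
so a big-but-not-absurd Wald with a sane-looking SE may not be a
separation symptom at all.)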