Hi all!
Just to clarify: a number of you commented that my notation was not clear. I
can confirm that I am talking about hierarchical models, and should probably
have written either
Model 1: (A,B,C)
Model 2: (AB,C)
or
Model 1: A+B+C
Model 2: A+B+AB+C
Sorry about that.
Also, chi-square-based p-values are not the only measure I'm using: I'm also
looking at total absolute error, and that too gives me tables where M1 comes
out better than M2.
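
In case it helps to make the comparison concrete, here is a minimal sketch of
what I am computing (Python with numpy and scipy; the 2x2x2 table is made up
for illustration, not my actual data), using the closed-form fitted counts for
the two models:

import numpy as np
from scipy.stats import chi2

# Hypothetical 2x2x2 table of observed counts (illustration only)
obs = np.array([[[25., 30.], [22., 28.]],
                [[18., 20.], [35., 32.]]])
n = obs.sum()

# One-way and two-way margins
mA  = obs.sum(axis=(1, 2))   # n_i++
mB  = obs.sum(axis=(0, 2))   # n_+j+
mC  = obs.sum(axis=(0, 1))   # n_++k
mAB = obs.sum(axis=2)        # n_ij+

# Closed-form MLE fitted counts for the two hierarchical models
fit1 = np.einsum('i,j,k->ijk', mA, mB, mC) / n**2   # Model 1: (A,B,C)
fit2 = np.einsum('ij,k->ijk', mAB, mC) / n          # Model 2: (AB,C)

def fit_stats(o, e, df):
    g2  = 2 * np.sum(o * np.log(o / e))   # likelihood-ratio G^2
    x2  = np.sum((o - e) ** 2 / e)        # Pearson X^2
    tae = np.abs(o - e).sum()             # total absolute error
    return g2, x2, chi2.sf(g2, df), tae

I, J, K = obs.shape
df1 = I*J*K - I - J - K + 2        # df for (A,B,C)
df2 = (I*J - 1) * (K - 1)          # df for (AB,C)

print("Model 1 (A,B,C):", fit_stats(obs, fit1, df1))
print("Model 2 (AB,C): ", fit_stats(obs, fit2, df2))

(Each model's p-value is evaluated against its own degrees of freedom, df1 and
df2 above.)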
Now, the differences are actually quite small, so I'm inclined to think this
might be down to rounding error. The tables have thousands of cells, and in
the tables in question AB seems to be close to independence anyway, so it is
plausible that it's only the fourth or fifth decimal place that is causing
this.
So really, if someone could just say "No, a higher model will always - by
definition - be at least as good as the nested lower one; there must be
something wrong with your data!", I'll shut up.
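
(For reference, the identity I have in mind when I say "at least as good" is
the usual nested-model deviance decomposition, assuming I have the standard
result right:

    G^2(M1) - G^2(M2) = G^2(M1 | M2) >= 0,  on df(M1) - df(M2) degrees of freedom,

i.e. the G^2 of the bigger model, M2, can never exceed the G^2 of the smaller
model M1 nested within it.)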
Sorry for filling up your inboxes, and thanks to everyone who replied for your
kind words and references!
Maja.
On 8 December 2010 17:19, maja zaloznik <[log in to unmask]> wrote:
> Hi!
>
> I would really appreciate some pointers on this:
>
> a log-linear model that produces better fit with fewer terms. E.g. in a 3D
> scenario I have
> Model 1: A+B+C
> Model 2: AB+C
>
> And in some cases I find that Model 1 has the better fit. (Just to be clear, by
> better fit I mean it has a higher p-value, not a lower chi-square value - I
> do realise the chi-square values are not directly comparable.)
>
> Somehow I just always assumed that a "higher" model would always improve
> the fit - the fact that this is the case in all the examples I can find in
> my books may have had something to do with it...
>
> Any pointers to literature that touches upon such issues would be extremely
> welcome.
>
> Thanks!
>
> Maja.