Hi all,

I'm going to try and explain my confusion as simply as possible.

Overall, I have a 3x3x4 mixed design in which the four-level factor is between-subjects. I'm going to refer to the factors as X, Y, and Z, with their levels being X1, X2, X3, Y1, Y2, etc. The between-subjects factor Z is in fact four different experiments with exactly the same within-subjects factors, so they are very comparable.

One experiment (i.e., one level of the Z factor) at a time, I carried out four separate 3x3 within-subjects ANOVAs and found significant main effects of both X and Y. Each main effect was followed up with one-way ANOVAs, each of which was in turn followed by Bonferroni-corrected pairwise post-hoc comparisons. For the Y factor, the comparisons revealed that the difference between Y2 and Y3 was not significant, but the rest were. I wrote this up and it was fine.

However, I then came to do a 3x3x4 mixed ANOVA (X by Y by Z, where Z1 is the level I carried out the 3x3 analysis on earlier). Among other follow-ups, I examined an interaction found with this design using a 1x3 (one by Y) within-subjects ANOVA for Z1 (and will do the same for Z2, Z3, and Z4) to check where the differences in the interaction lie.

This is where I'm confused: shouldn't the 1x3 follow-up reveal the same significance levels as the Bonferroni pairwise comparisons? They were both carried out for the same reason, i.e., to determine where the differences lie for a main effect of Y within Z1. But I'm not getting the same p-values. When I carry out the 1x3 ANOVA, it tells me there is a significant difference among all the levels of the Y factor, whereas the Bonferroni comparisons told me that only two of the differences were significant.
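(To make the question concrete, here is a minimal sketch with synthetic data, not my real data, showing that an omnibus test and Bonferroni-corrected pairwise tests answer different questions and so need not agree. The scipy calls are my assumption of a reasonable way to illustrate this; `f_oneway` is a between-subjects test and is used here only to show the omnibus-vs-pairwise distinction, not as a true repeated-measures ANOVA.)

```python
# Omnibus vs. Bonferroni pairwise comparisons on three synthetic
# within-subjects conditions Y1, Y2, Y3 (20 hypothetical subjects).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 20
subject = rng.normal(0.0, 1.0, n)          # shared subject effect
y1 = subject + rng.normal(0.0, 1.0, n)     # condition Y1
y2 = subject + rng.normal(0.6, 1.0, n)     # condition Y2
y3 = subject + rng.normal(0.8, 1.0, n)     # condition Y3

# Omnibus question: "is there ANY difference among Y1, Y2, Y3?"
F, p_omnibus = stats.f_oneway(y1, y2, y3)
print(f"omnibus: F = {F:.2f}, p = {p_omnibus:.4f}")

# Pairwise question: "WHICH pairs differ?" -- paired t-tests with a
# Bonferroni correction for the 3 comparisons. The corrected p-values
# are inflated relative to the omnibus p, so a pair (e.g. Y2 vs Y3)
# can fail the corrected test even when the omnibus test is significant.
pairs = [(y1, y2, "Y1 vs Y2"), (y1, y3, "Y1 vs Y3"), (y2, y3, "Y2 vs Y3")]
for a, b, label in pairs:
    t, p = stats.ttest_rel(a, b)
    p_bonf = min(p * len(pairs), 1.0)
    print(f"{label}: uncorrected p = {p:.4f}, Bonferroni p = {p_bonf:.4f}")
```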

Should I skip reporting the 1x3 ANOVA and refer readers to the pairwise comparisons done earlier, or should I report the 1x3 ANOVA anyway? And how do I then deal with the discrepancy?

Any help would be much appreciated.

Kind Regards

Jeunese