Dear Allstat members,
I have a quick query I hope you may be able to help me with: when
running a one-way ANOVA in SPSS, a number of post hoc tests can be
selected. If I run my analysis and choose Bonferroni and
Tukey from the list, the p-values obtained for each post hoc comparison
are very similar between the two different tests. The thing is, they are
each also very similar (i.e. within 0.001) to the results I get if I run an
independent t-test for the same comparisons. As Julie Pallant's
excellent SPSS Survival Manual states, 'post hoc tests are designed to
help protect against the likelihood of a type 1 error, however this
approach is stricter, making it more difficult to obtain statistically
significant differences'. This is what one would naturally expect, given
the nature of post hoc tests, but it doesn't seem evident in my results.
There are three levels to the factor in my analysis, so there are three
tests of each type (Bonferroni/Tukey) being performed - is SPSS making
any correction for these multiple comparisons when it produces the
p-values for its post hoc tests? If so, why are the results so similar
to uncorrected t-tests? Why are the p-values not, for example, 3 times
higher (as you might expect if the Bonferroni correction for multiple
comparisons were being applied), or am I mixing up my Bonferronis here?
It looks to me as though having obtained the results of my post hoc
Bonferroni tests, I'm going to have to Bonferroni-correct them myself,
which doesn't seem logical, unless the defences against Type 1 errors
employed by these tests are much more subtle than I was expecting.
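To make concrete what I mean by '3 times higher', here is a minimal
sketch of the naive Bonferroni adjustment as I understand it - the
p-values below are made up purely for illustration and are not from my
data:

```python
# Naive Bonferroni adjustment: multiply each raw pairwise p-value by
# the number of comparisons, capping the result at 1. With three
# factor levels there are three pairwise comparisons.
raw_p = [0.012, 0.048, 0.30]  # hypothetical raw t-test p-values
k = len(raw_p)                # k = 3 comparisons
adj_p = [min(p * k, 1.0) for p in raw_p]
print(adj_p)  # each adjusted value is (up to the cap) 3 times the raw one
```

This is the adjustment I expected to see reflected in the SPSS post hoc
output, which is why the near-identical p-values surprised me.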
Thanks in advance for your help,
Liz Hensor
Dr Elizabeth M A Hensor PhD
Data Analyst
Academic Unit of Musculoskeletal and Rehabilitation Medicine
36 Clarendon Road
Leeds
West Yorkshire
LS2 9NZ
Tel: +44 (0) 113 3434944
Fax: +44 (0) 113 2430366
[log in to unmask]