Dear Allstatters
Many thanks to those of you who responded to my query regarding the
validity of performing post hoc tests following a non-significant ANOVA
F test. There was not a lot of consensus but I think most people came
down in favour of pairwise comparisons being justified provided there
were a priori comparisons of interest (these would then need correction
for multiple tests). Personally I had assumed that a non-significant
ANOVA meant that testing should stop there, but found it difficult to
back up this view in light of contrary published precedent, and a
nagging worry that potentially important differences between two groups
(out of many) could be masked by very small differences between the rest
in ANOVA. My confusion here largely stemmed from wondering why one would
perform ANOVA at all if one were intending to perform the pairwise
comparisons of interest irrespective of the ANOVA result, but it has
been pointed out to me that ANOVA can identify trends across groups and
it is possible to achieve a significant result with ANOVA without
finding any highly significant pairwise comparisons. If you intend only
to make certain comparisons of interest, the decision to perform an
ANOVA before making pairwise comparisons, rather than simply performing
Bonferroni-corrected t-tests, for example, therefore depends on the
nature of the research question at hand, and on whether you would
reasonably expect there to be a trend across your groups of interest (or
would be interested in exploring one).
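As an aside, for anyone wanting to see the Bonferroni correction
mentioned above in practice, here is a minimal pure-Python sketch (the
p-values are invented purely for illustration):

```python
# Bonferroni adjustment: multiply each raw p-value by the number of
# tests m and cap at 1, then compare the adjusted values against alpha.
def bonferroni_adjust(p_values):
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]

raw_p = [0.012, 0.049, 0.20, 0.80]   # hypothetical pairwise p-values
adjusted = bonferroni_adjust(raw_p)
print(adjusted)                       # [0.048, 0.196, 0.8, 1.0]
significant = [p < 0.05 for p in adjusted]
print(significant)                    # [True, False, False, False]
```

Adjusting the p-values (rather than the alpha level) makes the results
directly comparable to the usual 0.05 threshold.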
The literature towards which I was guided included:
- J. C. Hsu, Multiple Comparisons, Chapman & Hall (1996). ISBN 0 412
98281 1, p. 178.
- Work on multiple range tests by J. N. Perry and G. Freeman.
- Statistics for the Social Sciences, by William L. Hays, published by
Holt, Rinehart and Winston.
- Using Multivariate Statistics, by Barbara G. Tabachnick and Linda S.
Fidell.
- Statistical Methods in Medical Research, by P. Armitage and G. Berry,
published by Blackwell.
- Bland JM, Altman DG. Multiple significance tests: the Bonferroni
method. Br Med J 1995; 310: 170.
- Perneger TV. What's wrong with Bonferroni adjustments. Br Med J 1998;
316: 1236-8.
- Newson R and the ALSPAC Study Team. Multiple-test procedures and smile
plots. The Stata Journal 2003; 3(2): 100-132.
- Altman's Practical Statistics for Medical Research.
Armed with this information I will discuss the matter further with my
colleagues and consider carefully whether ANOVA is the correct technique
for answering the precise questions they are asking of their data.
Thanks again to all who replied - I include their full responses below.
Regards
Liz Hensor
Dr Elizabeth M A Hensor PhD BSc (Hons)
Data Analyst
Academic Unit of Musculoskeletal and Rehabilitation Medicine
36 Clarendon Road
Leeds
West Yorkshire
LS2 9NZ
Tel: +44 (0) 113 3434944
Fax: +44 (0) 113 2430366
[log in to unmask]
........................................................................
See J C Hsu's book 'Multiple Comparisons', Chapman & Hall (1996). ISBN
0 412 98281 1.
Page 178: 'In short, to consider multiple comparisons as to be performed
only if the F-test for homogeneity rejects is a mistake.' For post hoc
comparisons, he suggests Scheffe's method (providing simultaneous
confidence intervals).
John Shade
........................................................................
I cannot give you any direct references to papers on multiple range
tests, but I suggest you do a literature search covering the
agricultural and statistical journals for the authors J. N. Perry and
G. Freeman, who I believe have papers on the subject. The key words
'experimentwise' and 'comparisonwise' would also be useful, and I have a
feeling that there is a paper with Goldilocks and the Three Bears in the
title which deals with multiple range tests.
I used to have a whole list of papers on the subject, which got lost in
my recent moves, but I would suggest going back to the experimenters and
asking them why they included the treatments. I nearly always found that
the researchers had underlying questions at the back of their minds
which influenced the treatments selected, e.g. two different families of
drugs to be compared, with different levels of active ingredients, and I
was able to devise a series of orthogonal polynomials to test their
ideas. I would not be surprised if this were the same in your case.
Ken Ryder
........................................................................
Liz
There is a brief discussion of the issue in
Statistics for the Social Sciences, by William L. Hays, published by
Holt, Rinehart and Winston
and also in
Using multivariate statistics, by Barbara G Tabachnick and Linda S
Fidell
The only one I could find in my collection that mentions the specific
case where the overall F test is not significant is
Statistical Methods in Medical Research, by P. Armitage and G. Berry,
published by Blackwell.
It says
If there are no contrasts between groups which have an a priori claim on
our attention, further scrutiny of the differences between means could
be made to depend largely on the F test in the analysis of variance. If
the variance ratio is not significant, or even suggestively large, there
will be little point in examining differences between pairs of means. If
F is significant there is reasonable evidence that real differences
exist and are large enough to reveal themselves above the random
variation.
Personally, I would draw your attention to two things:
a) The logic of the process. What meaning could or should be attached to
a significant subgroup result, given that the overall F test is
non-significant? I would argue that one could not attach much importance
to it; I would assume it to be a fluke (the result of data dredging)
unless further evidence is produced.
b) Calculation of confidence intervals is likely to shed more light than
the use of significance tests.
best regards
Blaise F Egan
........................................................................
Liz
It sounds like Dave Saville's approach (I think he wrote something about
it in Technometrics in the 80s or 90s). My gut feeling is against it,
though there is a bit of support for it in Mead's 'Design of
Experiments' <drat, can't find it straight off, but the implication I
drew from it was that if you had a number of treatments which were all
very similar and one which stood out, the mean square for treatments
would be low - since it's an average of many small deviations and one
large one - and so the F test might indicate no significant difference
when there was something of interest there. I guess the thing is to
look at the pattern of the means and decide whether you've got a Normal
spread, and any t-test would just be cherry-picking, or whether a few
stand out from the herd and might be worth commenting on>
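The masking effect described above can be checked numerically. Below is
a minimal pure-Python sketch (the data are invented for illustration):
one group stands out from several identical ones, and adding more of the
near-identical groups dilutes the F statistic.

```python
# One-way ANOVA F statistic from scratch: between-groups mean square
# over within-groups mean square.
def f_statistic(groups):
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total observations
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

typical = [9.0, 10.0, 11.0]    # a "typical" group, mean 10
odd = [12.0, 13.0, 14.0]       # the group that stands out, mean 13

print(f_statistic([typical, typical, odd]))   # 9.0
print(f_statistic([typical] * 6 + [odd]))     # ~3.86: odd group diluted
```

With two typical groups plus the odd one, F = 9; with six typical groups
the same odd group gives F of roughly 3.9, so the evidence against
homogeneity looks weaker even though nothing about the odd group has
changed.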
Duncan Hedderley
........................................................................
Elizabeth,
A quick google search on "lancet subgroup statistics dangers" led me to
http://www.som.soton.ac.uk/staff/tnb/publications/The Presentation of
Statistics.pdf. And that paper cites these:
1. Bland JM, Altman DG. Multiple significance tests: the Bonferroni
method. Br Med J 1995; 310: 170.
2. Perneger TV. What's wrong with Bonferroni adjustments. Br Med J 1998;
316: 1236-8.
I haven't read them but from their titles, I expect they might provide
you the ammunition you're looking for!
Dominic Muston.
........................................................................
Hi Elizabeth,
Even when the overall ANOVA does not detect a significant difference,
clients will often insist on multiple comparisons. If the results are
used in-house to provide clues on the direction of future research, I
prefer to use the least significant difference (t-test). If results are
going external or are needed for an important decision, you need to use
a multiple comparison test that protects the overall error rate. The
most common, and probably most powerful, multiple comparison test is
Tukey's test. The most conservative is the Bonferroni test.
The attached zip file (if you are not too afraid to open a document from
an unknown source) contains information lifted from the SAS online
documentation.
Regards
Dave.
........................................................................
Hello Elizabeth
A possible place to start looking into the issues of multiple testing in
general might be my Stata Journal paper on the subject (Newson, 2003),
which points the way to a lot of other references. A pre-publication
draft
is downloadable from my website (see my signature), where you can also
download my presentation on the subject at the 2003 UK Stata Users'
Meeting. Multiple-test procedures form a fast-changing area of
statistics,
as some seminal papers have been published this millennium (from 2001
onwards).
I hope this helps.
Best wishes
Roger
........................................................................
Hi Liz,
I'd always been taught that there was no point in analysing pairwise
comparisons of groups if the overall analysis was not significant.
Altman's Practical Statistics for Medical Research says: 'Note that you
should only investigate differences between individual groups when the
overall comparison of groups in the analysis of variance is significant
unless certain comparisons were intended in advance of the analysis'.
So, unless you were always interested in comparing groups 2 and 4, say,
above all the other group comparisons, I wouldn't bother with post-hoc
tests, since they will always be non-significant (I presume, although
I've never checked that) and they will only serve to inflate your Type I
error rate.
Remember also, that just because an analysis method has been published,
that doesn't make it right.
Hope this helps
Kathleen Baster