JiscMail - Email discussion lists for the UK Education and Research communities

ALLSTAT Archives

allstat@JISCMAIL.AC.UK

Subject: SUMMARY: references on post hoc tests
From: Elizabeth Hensor <[log in to unmask]>
Reply-To: Elizabeth Hensor <[log in to unmask]>
Date: Fri, 27 Feb 2004 13:51:55 -0000
Content-Type: text/plain (419 lines)

Dear Allstatters

 

Many thanks to those of you who responded to my query regarding the
validity of performing post hoc tests following a non-significant ANOVA
F test. There was not a lot of consensus, but I think most people came
down in favour of pairwise comparisons being justified provided there
were a priori comparisons of interest (these would then need correction
for multiple tests). Personally I had assumed that a non-significant
ANOVA meant that testing should stop there, but I found it difficult to
back up this view in light of contrary published precedent, and a
nagging worry that potentially important differences between two groups
(out of many) could be masked by very small differences between the rest.
My confusion largely stemmed from wondering why one would perform ANOVA
at all if one intended to make the pairwise comparisons of interest
irrespective of the ANOVA result. It has since been pointed out to me
that ANOVA can identify trends across groups, and that it is possible to
achieve a significant ANOVA result without finding any highly
significant pairwise comparison. The decision to perform an ANOVA before
making pairwise comparisons, rather than simply performing
Bonferroni-corrected t-tests, for example, therefore depends on the
nature of the research question at hand, and on whether you would
reasonably expect a trend across your groups of interest (or would be
interested in exploring one).
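For concreteness, the two routes described above (an overall ANOVA F test versus going straight to Bonferroni-corrected pairwise comparisons) can be sketched with made-up data; the groups and values below are purely illustrative:

```python
from itertools import combinations

# Illustrative (made-up) data: three treatment groups of equal size.
groups = {
    "A": [10, 12, 11, 13, 14],
    "B": [11, 13, 12, 14, 15],
    "C": [14, 16, 15, 17, 18],
}

def one_way_anova_f(groups):
    """Return the one-way ANOVA F statistic, computed from first principles."""
    all_vals = [x for vals in groups.values() for x in vals]
    grand_mean = sum(all_vals) / len(all_vals)
    means = {g: sum(v) / len(v) for g, v in groups.items()}
    # Between-groups and within-groups sums of squares.
    ss_between = sum(len(v) * (means[g] - grand_mean) ** 2
                     for g, v in groups.items())
    ss_within = sum((x - means[g]) ** 2
                    for g, v in groups.items() for x in v)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Route 1: the overall F test.
f_stat = one_way_anova_f(groups)

# Route 2: skip the ANOVA and make only the planned pairwise comparisons,
# Bonferroni-correcting the significance level for the number of pairs.
pairs = list(combinations(groups, 2))
alpha_adjusted = 0.05 / len(pairs)

print(f"F = {f_stat:.3f}")                             # 8.667 for this data
print(f"per-comparison alpha = {alpha_adjusted:.4f}")  # 0.05 / 3
```

With three groups there are three pairwise comparisons, so each t-test would be judged at 0.05/3 rather than 0.05.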

 

The literature towards which I was guided included:

 

- Hsu, J. C. Multiple Comparisons. Chapman & Hall (1996). ISBN
0-412-98281-1. p. 178.

- Work on multiple range tests by J. N. Perry and G. Freeman.

- Hays, W. L. Statistics for the Social Sciences. Holt, Rinehart and
Winston.

- Tabachnick, B. G. and Fidell, L. S. Using Multivariate Statistics.

- Armitage, P. and Berry, G. Statistical Methods in Medical Research.
Blackwell.

- Bland, J. M. and Altman, D. G. Multiple significance tests: the
Bonferroni method. Br Med J 1995; 310: 170.

- Perneger, T. V. What's wrong with Bonferroni adjustments. Br Med J
1998; 316: 1236-8.

- Newson, R. and the ALSPAC Study Team. Multiple-test procedures and
smile plots. The Stata Journal 2003; 3(2): 100-132.

- Altman, D. G. Practical Statistics for Medical Research.

 

Armed with this information I will discuss the matter further with my
colleagues and consider carefully whether ANOVA is the correct technique
for answering the precise questions they are asking of their data.
Thanks again to all who replied - I include their full responses below.

 

Regards

 

Liz Hensor

 

 

Dr Elizabeth M A Hensor PhD BSc (Hons)

Data Analyst

Academic Unit of Musculoskeletal and Rehabilitation Medicine

36 Clarendon Road

Leeds 

West Yorkshire

LS2 9NZ

Tel: +44 (0) 113 3434944

Fax: +44 (0) 113 2430366

[log in to unmask]

 

........................................................................

 

See J C Hsu's book 'Multiple Comparisons', Chapman & Hall (1996).  ISBN
0 412 98281 1.

 

Page 178: 'In short, to consider multiple comparisons as to be performed
only if the F-test for homogeneity rejects is a mistake.'  For post hoc
comparisons, he suggests Scheffe's method (providing simultaneous
confidence intervals).

 

John Shade
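Hsu's suggested post hoc approach, Scheffé's simultaneous confidence intervals, can be sketched for a single pairwise contrast. All the numbers below are illustrative assumptions: five groups of five observations, a within-groups mean square of 2.5 from the ANOVA table, an observed difference of 2.0 between two group means, and a tabled F critical value of roughly 2.87 for (4, 20) degrees of freedom.

```python
import math

# Hypothetical ANOVA summary quantities (assumptions, not real data).
k, n, N = 5, 5, 25      # groups, per-group size, total observations
msw = 2.5               # within-groups mean square
diff = 2.0              # observed difference between two group means

# Tabled critical value F_{k-1, N-k; 0.05} = F_{4, 20; 0.05} (about 2.87;
# in practice compute it with a statistics library rather than a table).
f_crit = 2.87

# Scheffe half-width for the pairwise contrast: the multiplier
# sqrt((k-1) * F) replaces the usual t critical value, which is what buys
# simultaneous coverage over *all* contrasts at once.
se = math.sqrt(msw * (1 / n + 1 / n))
margin = math.sqrt((k - 1) * f_crit) * se
lo, hi = diff - margin, diff + margin

print(f"Scheffe 95% simultaneous CI: ({lo:.2f}, {hi:.2f})")
```

Here the interval includes zero. A convenient property of Scheffé's method is its duality with the F test: some contrast interval excludes zero exactly when the overall F test rejects, so the intervals remain interpretable whether or not the F test was significant, which fits Hsu's point.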

 

........................................................................

I cannot give you any direct references to papers on multiple range
tests, but I suggest you do a literature search covering the
agricultural and statistical journals for the authors J. N. Perry and
G. Freeman, who I believe have papers on the subject. The key words
'experimentwise' and 'comparisonwise' would also be useful, and I have a
feeling that there is a paper with Goldilocks and the three bears in the
title which deals with multiple range tests.

I used to have a whole list of papers on the subject, which has got lost
in my recent moves, but I would suggest going back to the experimenters
and asking them why they included the treatments. I nearly always found
that the researchers had underlying questions at the back of their minds
which influenced the treatments selected - for example, two different
families of drugs to be compared, with different levels of active
ingredients - and I was able to devise a series of orthogonal
polynomials to test their ideas. I would not be surprised if this were
the same in your case.

 

Ken Ryder

 

........................................................................

Liz

 

There is a brief discussion of the issue in

 

Statistics for the Social Sciences, by William L. Hays, published by
Holt, Rinehart and Winston

 

and also in

 

Using multivariate statistics, by Barbara G Tabachnick and Linda S
Fidell

 

The only one I could find in my collection that mentions the specific
case where the overall F test is not significant is

 

Statistical Methods in Medical Research, by P. Armitage and G. Berry,
published by Blackwell.

 

It says

 

If there are no contrasts between groups which have an a priori claim on
our attention, further scrutiny of the differences between means could
be made to depend largely on the F test in the analysis of variance. If
the variance ratio is not significant, or even suggestively large, there
will be little point in examining differences between pairs of means. If
F is significant there is reasonable evidence that real differences
exist and are large enough to reveal themselves above the random
variation.

 

Personally, I would draw your attention to two things:

 

a) The logic of the process. What meaning could or should be attached to
a significant subgroup result, given that the overall F test is
non-significant? I would argue that one could not attach much importance
to it; I would assume it to be a fluke (the result of data dredging)
unless further evidence is produced.

 

 

b) Calculation of confidence intervals is likely to shed more light than
the use of significance tests.

 

best regards

 

Blaise F Egan
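Egan's point (b) can be illustrated with a minimal sketch of a confidence interval for the difference of two group means. The data are hypothetical, and 1.96 is the large-sample normal critical value; with samples this small a t critical value (about 2.31 on 8 degrees of freedom) would strictly be more appropriate.

```python
import math

# Hypothetical data for two of the groups under comparison.
a = [5, 7, 6, 8, 9]
b = [9, 11, 10, 12, 13]

mean_a = sum(a) / len(a)
mean_b = sum(b) / len(b)
diff = mean_b - mean_a

# Pooled variance and the standard error of the difference in means.
ss_a = sum((x - mean_a) ** 2 for x in a)
ss_b = sum((x - mean_b) ** 2 for x in b)
pooled_var = (ss_a + ss_b) / (len(a) + len(b) - 2)
se = math.sqrt(pooled_var * (1 / len(a) + 1 / len(b)))

# Large-sample normal critical value (an approximation here).
z = 1.96
lo, hi = diff - z * se, diff + z * se

print(f"difference = {diff:.2f}, approx 95% CI ({lo:.2f}, {hi:.2f})")
```

The interval conveys both the size of the difference and the precision with which it is estimated, which is exactly the extra light a bare p-value does not shed.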

 

........................................................................

Liz

 

It sounds like Dave Saville's approach (I think he wrote something about
it in Technometrics in the 80s or 90s). My gut feeling is against it,
though there is a bit of support for it in Mead's 'Design of
Experiments' (drat, can't find it straight off, but the implication I
drew from it was that if you had a number of treatments which were all
very similar and one which stood out, the mean square for treatments
would be low - since it's an average of many small deviations and one
large one - and so the F test might indicate no significant difference
when there was something of interest there). I guess the thing is to
look at the pattern of the means and decide whether you've got a Normal
spread, in which case any t-test would just be cherry-picking, or
whether a few stand out from the herd and might be worth commenting on.

 

Duncan Hedderley
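The masking effect Hedderley attributes to Mead can be checked numerically. In this made-up example, four groups share a mean while a fifth stands apart; the overall F stays below the tabled 5% critical value (about 2.87 for 4 and 20 degrees of freedom) even though the extreme pairwise comparison is sizeable.

```python
import math

# Hypothetical data: groups A-D share a mean of 12; group E sits at 14.
base = [10, 11, 12, 13, 14]             # mean 12
groups = {g: base[:] for g in "ABCD"}
groups["E"] = [x + 2 for x in base]     # mean 14

k = len(groups)
n = len(base)
N = k * n
means = {g: sum(v) / n for g, v in groups.items()}
grand = sum(sum(v) for v in groups.values()) / N

# Treatment mean square averages four tiny deviations and one large one.
ms_between = sum(n * (m - grand) ** 2 for m in means.values()) / (k - 1)
ms_within = sum((x - means[g]) ** 2
                for g, v in groups.items() for x in v) / (N - k)
f_stat = ms_between / ms_within

# Pairwise t for the extreme comparison (A vs E), using the pooled MSW.
t_extreme = (means["E"] - means["A"]) / math.sqrt(ms_within * (2 / n))

print(f"F = {f_stat:.2f}")             # 1.60, under the tabled 2.87
print(f"t(A vs E) = {t_extreme:.2f}")  # 2.00, the comparison that stands out
```

So the F test declines to reject while the one contrast of interest is diluted across four null comparisons, which is precisely the worry raised in the summary above.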

 

........................................................................

Elizabeth,

 

A quick Google search on "lancet subgroup statistics dangers" led me to
http://www.som.soton.ac.uk/staff/tnb/publications/The Presentation of
Statistics.pdf. That paper cites these:

 

1. Bland, J. M. and Altman, D. G. Multiple significance tests: the
Bonferroni method. Br Med J 1995; 310: 170.

 

2. Perneger, T. V. What's wrong with Bonferroni adjustments. Br Med J
1998; 316: 1236-8.

 

I haven't read them but from their titles, I expect they might provide
you the ammunition you're looking for!

 

Dominic Muston.

 

........................................................................

Hi Elizabeth,

 

Even when the overall ANOVA does not detect a significant difference,
clients will often insist on multiple comparisons. If the results are
used in-house to provide clues on the direction of future research, I
prefer to use the least significant difference (t-test). If results are
going external, or are needed for an important decision, you need to use
a multiple comparison test that protects the overall error rate. The
most common, and probably most powerful, multiple comparison test is
Tukey's test. The most conservative test is the Bonferroni test.

 

The attached zip file (if you are not too afraid to open a document from
an unknown source) contains the information lifted from the SAS online
documents

 

Regards

 

Dave.
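A minimal sketch of Tukey's test, assuming a recent SciPy release (which provides scipy.stats.tukey_hsd); the three samples are invented for illustration:

```python
# Requires SciPy; tukey_hsd appears in recent SciPy releases.
from scipy.stats import tukey_hsd

# Made-up samples for three treatment groups.
a = [24.5, 23.5, 26.4, 27.1, 29.9]
b = [28.4, 34.2, 29.5, 32.2, 30.1]
c = [26.1, 28.3, 24.3, 26.2, 27.8]

res = tukey_hsd(a, b, c)

# res.pvalue is a k-by-k matrix of familywise-adjusted p-values; entry
# (i, j) tests the difference between the means of samples i and j.
for i in range(3):
    for j in range(i + 1, 3):
        print(f"groups {i} vs {j}: adjusted p = {res.pvalue[i, j]:.4f}")
```

Because Tukey's procedure controls the familywise error rate across all pairwise comparisons, each adjusted p-value can be read against the nominal 0.05 without any further correction.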

 

........................................................................

Hello Elizabeth

 

A possible place to start looking into the issues of multiple testing in
general might be my Stata Journal paper on the subject (Newson, 2003),
which points the way to a lot of other references. A pre-publication
draft is downloadable from my website (see my signature), where you can
also download my presentation on the subject at the 2003 UK Stata Users'
Meeting. Multiple-test procedures form a fast-changing area of
statistics, as some seminal papers have been published this millennium
(from 2001 onwards).

 

I hope this helps.

 

Best wishes

 

Roger

 

........................................................................

Hi Liz,

 

I'd always been taught that there was no point in analysing pairwise
comparisons of groups if the overall analysis was not significant.

 

Altman's Practical Statistics for Medical Research says: 'Note that you
should only investigate differences between individual groups when the
overall comparison of groups in the analysis of variance is significant
unless certain comparisons were intended in advance of the analysis'.

 

So, unless you were always interested in comparing groups 2 and 4, say,
above all the other group comparisons, I wouldn't bother with post-hoc
tests, since they will always be non-significant (I presume, although
I've never checked that) and they will only serve to up your Type I
error rate.

 

Remember also, that just because an analysis method has been published,
that doesn't make it right.

 

Hope this helps

 

Kathleen Baster
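The inflation of the Type I error rate that Baster mentions is easy to quantify under the simplifying assumption that the m pairwise tests are independent (tests sharing the same data are correlated, so this is only an illustrative upper bound):

```python
# With m independent tests each run at alpha = 0.05, the chance of at
# least one false positive grows as 1 - (1 - alpha)^m.
alpha = 0.05
for m in (1, 3, 6, 10):   # e.g. 6 pairs from 4 groups, 10 from 5 groups
    fwer = 1 - (1 - alpha) ** m
    print(f"m = {m:2d} comparisons -> familywise error rate = {fwer:.3f}")
```

With just six uncorrected comparisons the familywise error rate is already above 0.26, which is the arithmetic behind both the Bonferroni correction and Baster's warning.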

 

 
