I’ve had over 20 e-mails on this subject - many of which contained results
of chi-square and Fisher’s exact tests. Thank you to all who replied.
I’ve split my summary into three sections:
1. Discussion about chi-square contingency test
2. Discussion about Fisher’s exact test
3. Discussion about which of the two should be used
So here goes - and please note this is a filtered version of the
information which I received (containing a certain amount of lifted text)
- and NOT my opinion, so don't shoot the messenger.
1. Discussion about chi-square contingency test
As for the rule that no more than 20% of expected values should be less
than five: as I said above, it is unfortunate that in the example I posted
the lowest expected value was as high as 4.96, and the opinion here was
that this might as well be 5, so what's the worry! All others who
expressed an opinion about the rule said that it is only a rule of thumb,
and can be ignored.
Two suggested references were:
Lewis, T., Saunders, I. and Westcott, M. (1984) 'The moments of the
Pearson chi-squared statistic and the minimum expected value in two-way
tables', Biometrika, 71(3), pp. 515-522.
Fienberg, S.E. (1977/1980) The Analysis of Cross-Classified Categorical
Data. MIT Press, Cambridge.
Several people mentioned the possibility of applying Yates's correction,
"….which simply makes the chi-square test more conservative, but probably
does not answer the key issue. It reduces your 2*2 chi-square statistic
from 6.46 to 5.24, but does not alter the fact that the driving force
behind that chi-square value is your observed value of 10, which
contributes 5.12 of the 6.46" Derek Pike
Others have suggested that Yates's correction is of no use, given that N
is large.
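The figures quoted above can be reproduced in a few lines. This is a sketch assuming the 2x2 table behind the thread is [[10, 482], [15, 1974]] (i.e. 10/492 versus 15/1989, as given later in this summary); `scipy.stats.chi2_contingency` does the rest, with `correction=` toggling Yates's continuity correction:

```python
# Sketch: Pearson chi-square with and without Yates's correction,
# assuming the 2x2 table is [[10, 482], [15, 1974]].
from scipy.stats import chi2_contingency

table = [[10, 482], [15, 1974]]

# Without continuity correction
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(round(chi2, 2))        # 6.46; smallest expected count is about 4.96

# With Yates's continuity correction (more conservative)
chi2_yates, p_yates, _, _ = chi2_contingency(table, correction=True)
print(round(chi2_yates, 2))  # 5.24
```

The drop from 6.46 to 5.24 matches the figures Derek Pike quotes, and the minimum expected count of 4.96 is the value the 20% rule-of-thumb discussion turns on.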
2. Discussion about Fisher’s exact test
This technique was generally flagged as appropriate.
"Be careful with FET - need a two-tailed test (though there are two
different ways to do this), it's almost always fallacious to quote a
one-tailed p-value." Robert Newcombe
And regarding CIs rather than just P-values (something not mentioned by all):
"BUT ... hypothesis testing isn't everything, it's almost always more
sensible to summarise a 2 by 2 table by some sort of effect size measure,
then put a confidence interval (95% by default) on it.
Assuming that you're comparing two proportions, 10/492 and 15/1989, the
obvious possibilities are the rate ratio or relative risk,
(10/492)/(15/1989); the odds ratio, a variant of this that still works in
retrospective and cross-sectional studies, here (10/482)/(15/1974); and
the difference of proportions, 10/492 - 15/1989" Robert Newcombe.
"The CIs for the odds ratio are more exact than those for the relative
risk, as the conditional distribution of a 2x2 table, given the marginal
totals, is defined by the odds ratio. CIs are more informative than
P-values, because P-values (even exact P-values) measure the compatibility
of the data with the hypothesis that the odds ratio is one, whereas the 95%
CI gives a range of odds ratios with which the data *are* compatible.
EpiInfo can be downloaded free from
http://www.cdc.gov/epo/epi/epiinfo.htm." Roger Newson
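The three effect-size summaries Newcombe lists can be sketched as below, together with a simple log-scale (Woolf) 95% CI for the odds ratio. Note this is just the textbook normal approximation, not one of the better interval methods Newcombe's macros implement:

```python
# Sketch: effect sizes for the table 10/492 vs 15/1989, with a
# Woolf (log-scale normal approximation) 95% CI for the odds ratio.
import math

a, b = 10, 482    # group 1: 10 events out of 492
c, d = 15, 1974   # group 2: 15 events out of 1989

rate_ratio = (a / (a + b)) / (c / (c + d))   # relative risk, ~2.70
odds_ratio = (a / b) / (c / d)               # odds ratio, ~2.73
risk_diff  = a / (a + b) - c / (c + d)       # difference, ~0.013

# Woolf 95% CI for the odds ratio, computed on the log scale
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
# The interval (roughly 1.2 to 6.1) excludes 1, in line with
# Newson's point: it shows the range of odds ratios with which
# the data are compatible, not just a yes/no verdict.
```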
As for means of calculating FET (other than by hand), there was quite a
bit of anger/comment about the inadequate and lazily written calculators
on the web, which can't cope with large factorials. But there are tools
which can:
Stata seems to be a popular way of calculating FET
StatXact3 was also popular ("…with even larger numbers StatXact would
still do this using Monte Carlo methods rather than exact ones." Ross
Corkrey). I don't know if this is the case for the other packages.
SAS is another possibility - Armin Schueler
If you use the free statistical programming language R with the add-on
package ctest, it has no problems performing Fisher's exact test -
Simon Bond
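Alongside the packages listed above, SciPy is another option that copes with large counts without overflowing factorials (it works on the log scale internally). A sketch for the same 10/492 vs 15/1989 table, two-sided as Newcombe recommends:

```python
# Sketch: Fisher's exact test via SciPy for the table
# [[10, 482], [15, 1974]], two-sided.
from scipy.stats import fisher_exact

table = [[10, 482], [15, 1974]]

odds_ratio, p_two_sided = fisher_exact(table, alternative='two-sided')
# odds_ratio here is the sample odds ratio (10*1974)/(482*15) ~ 2.73;
# the two-sided P-value is the one to report, per Newcombe's warning
# about one-tailed P-values.
```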
CIs for RR and OR are available widely, e.g. in SPSS from CROSSTABS. Macros
for good methods for confidence intervals for proportions and their
differences available at http://www.uwcm.ac.uk/uwcm/ms/Robert.html
Robert Newcombe.
3. Discussion about which of the two (or others) should be used:
I think the general opinion was that if your data present no problems for
chi-square then use that, but otherwise go for FET.
"In general, however, if the software is available I think you should
always use Fisher's exact test." Richard Fisher
"I would only consider using Fisher's exact test if both the row and column
totals are fixed by design -- that is to say, you know before you collect
any data what they will be. (This is not common; for example, if you were
doing a survey and you wanted to sample 500 city people and 200 rural
people, with the possible answers being "yes" and "no", you'd have a 2x2
table with city/rural as one category and yes/no as the other. You know
that the city and rural totals will be 500 and 200, but you don't know
what the yes and no totals will be ahead of time.)….. the exact calculation
of Fisher's exact test involves a lot of computation when the frequencies
are "large" (then again, 10 is not "large", exactly). One can use the
chi-squared test as an approximation to Fisher's exact test (which is
presumably what the suggestion is), in that with "large" frequencies you'll
get a similar P-value. But the "large" thing is just plain confusing here.
So I would say, unless you have a situation in which Fisher's exact test
applies, go ahead and do the chi-squared test as usual." Kenneth Butler
"Another possibility would be the page that calculates the z-ratio for the
significance of the difference between two independent proportions.
http://faculty.vassar.edu/~lowry/binompro.html" Richard Lowry
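That z-ratio is easy to compute by hand, and for a 2x2 table its square reproduces the uncorrected chi-square statistic quoted earlier (6.46), which is why the two approaches agree. A sketch for the same data:

```python
# Sketch: z-test for two independent proportions, 10/492 vs 15/1989.
# z**2 equals the uncorrected 2x2 chi-square statistic (6.46).
import math

x1, n1 = 10, 492
x2, n2 = 15, 1989

p1, p2 = x1 / n1, x2 / n2
p_pooled = (x1 + x2) / (n1 + n2)   # pooled proportion under H0

se = math.sqrt(p_pooled * (1 - p_pooled) * (1/n1 + 1/n2))
z = (p1 - p2) / se                 # ~2.54, so z**2 ~ 6.46
```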