ALLSTAT Archives

allstat@JISCMAIL.AC.UK



Subject: KAPPA
From: Thierry Gorlia <[log in to unmask]>
Reply-To: Thierry Gorlia <[log in to unmask]>
Date: Tue, 7 Nov 2000 11:17:11 +0100
Content-Type: multipart/mixed
Parts/Attachments: text/plain (51 lines), kappa.txt (374 lines)

Dear allstaters,

Please find herewith a text file with the answers I received to my KAPPA
question. Thank you to all contributors.

Thierry
 <<kappa.txt>> 


> ----------------------------------------------------
> Thierry Gorlia             Email : [log in to unmask]
> Statistician
> Health Economic Unit
> EORTC Brain Group
> European Organization for Research and Treatment of Cancer
> Data Centre - Avenue Mounier 83, 1200 Brussels, Belgium
> Phone : +32 2 774 16 52
> Fax 	: +32 2 772 67 01
> 
> ----------------------------------------------------
> 
> -----Original Message-----
> From:	Thierry Gorlia [SMTP:[log in to unmask]]
> Sent:	Wednesday, 27 September, 2000 11:07 AM
> To:	[log in to unmask]
> Subject:	KAPPA !
> 
> Dear allstaters,
> 
> I am looking for a good reference discussing the sample size/power
> calculation for testing the kappa in the case of a two by two agreement
> table.
> 
> Thanks in advance
> 
> Thierry 
> 
> > ----------------------------------------------------
> > Thierry Gorlia             Email : [log in to unmask]
> > Statistician
> > Health Economic Unit
> > EORTC Breast Group
> > European Organization for Research and Treatment of Cancer
> > Data Centre - Avenue Mounier 83, 1200 Brussels, Belgium
> > Phone : +32 2 774 16 52
> > Fax 	: +32 2 772 67 01
> > 
> > ----------------------------------------------------
> > 



Many thanks to all those who responded to my query below....

> When you are making a comparison between two different methods used to
> measure the same thing, the aim is to assess their agreement with one
> another. For example, you might want to compare measurements made by a
> current piece of equipment with measurements made by a new piece of
> equipment (but the true measurement is not known). Using simple
> correlation to look at the relationship is not the right thing to do
> since, amongst other reasons, you would expect there to be quite a high
> degree of correlation between two methods which were, after all,
> designed to measure the same thing!
>
> In the past I have always used plots of mean value against the
> difference (sometimes known as Bland and Altman plots in certain
> circles!) as described in Bland & Altman's paper in the Lancet (1986),
> which also includes an excellent explanation of why correlation is not
> a suitable method to assess agreement with! However, I have recently
> come across something called the coefficient of concordance (in the
> book 'Biostatistical Analysis' by Zar) and wondered if anyone has any
> opinions on it, experience of using or comparing these methods, or
> knows of any other methods used to assess agreement of this type that
> they would like to share with me! There doesn't seem to be a great
> deal of readily accessible information around on this subject.

Lots of different methods were suggested, along with some interesting
opinions. In addition to Lin's coefficient of concordance and the limits
of agreement method by Bland and Altman, the other methods suggested
were:

- Kappa statistic (although this is for categorical data, not continuous)
- Multitrait-multimethod model (MTMM)
- Gauge R&R analysis
- Data envelopment analysis
- Passing-Bablok regression

References suggested were:

- Ludbrook (1997). Measurement in medicine. 24(2):193-203.
- Mandel K, Stiehler RD. Sensitivity - a criterion for the comparison of
  methods of test. J Res Natl Bur Stand 1954;53(3):155-159.
- Tan CY, Iglewicz B. Measurement-methods comparisons and linear
  statistical relationship. Technometrics 1999;41(3):192-201.
- Bartko (1994). Measures of agreement: a single procedure. Statistics
  in Medicine.
- Lin (1989). A concordance correlation coefficient to evaluate
  reproducibility. Biometrics 1989;45:255-268.
- Martin RF. General Deming regression for estimating systematic bias
  and its confidence interval in method comparison studies. Clinical
  Chemistry 2000;46(1):100-104.
- Bland JM, Altman DG. Measuring agreement in method comparison studies.
  Statistical Methods in Medical Research 1999;8:135-160.
- Morton AP, Dobson AJ. Assessing agreement. Medical Journal of
  Australia 1989;150:384-387.
- Wheeler DJ, Lyday RW. Evaluating the Measurement Process. SPC Press
  Inc.
- Passing H, Bablok W. A new biometrical procedure for testing the
  equality of measurements from two different analytical methods. J Clin
  Chem Clin Biochem 1983;21:709-720.
- Passing H, Bablok W. Comparison of several regression procedures for
  method comparison studies and determination of sample sizes. J Clin
  Chem Clin Biochem 1984;22:431-445.
- Dhanoa MS et al. Use of mean square prediction error analysis and
  reproducibility measures to study near infrared calibration equation
  performance. Journal of Near Infrared Spectroscopy 7:133-143.

The most notable comment made was probably that by Doug Altman... that
Lin's coefficient of concordance is a measure of *relative* agreement,
whereas the limits of agreement method proposed by Bland and Altman
assesses *absolute* agreement. If anyone is interested in the replies in
more detail please contact me (not the list!!!) and I will be happy to
forward them on in an appropriate file.

Thanks again to all those who replied,

JOY ([log in to unmask])
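As a concrete illustration of the two methods Joy contrasts, here is a
short Python sketch computing Bland and Altman's 95% limits of agreement
and Lin's concordance correlation coefficient. The data are simulated
purely for illustration; none of the numbers come from the studies cited.

import numpy as np

rng = np.random.default_rng(0)
old = rng.normal(100, 15, size=50)      # current instrument (simulated)
new = old + rng.normal(2, 5, size=50)   # new instrument, slight bias

# Bland & Altman: plot the mean of each pair against the difference and
# report mean difference +/- 1.96 SD as the 95% limits of agreement.
diff = new - old
bias = diff.mean()
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)

# Lin's concordance correlation coefficient: Pearson correlation
# penalised for departure of the best-fit line from the line of
# equality through the origin.
mx, my = old.mean(), new.mean()
sxy = ((old - mx) * (new - my)).mean()
ccc = 2 * sxy / (old.var() + new.var() + (mx - my) ** 2)

print(f"bias = {bias:.2f}; 95% limits of agreement = "
      f"({loa[0]:.2f}, {loa[1]:.2f}); Lin's CCC = {ccc:.3f}")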
Should Maxwell's test of marginal homogeneity and his generalisation of
the McNemar test be presented with kappa statistics in general use?
(Ref. Maxwell AE. Comparing the classification of subjects by two
independent judges. British Journal of Psychiatry 1970;116:651-5.)

Consider the following data from Doug Altman's excellent book (Altman
DG. Practical Statistics for Medical Research. Chapman and Hall, 1991):

                                 RAST
                 negative  weak  moderate  high  very high
MAST  negative       86      3      14       0       2
      weak           26      0      10       4       0
      moderate       20      2      22       4       1
      high           11      1      37      16      14
      very high       3      0      15      24      48

Possible co-presentation of kappa and Maxwell:

General agreement over all categories (2 raters):

Unweighted kappa
  Observed agreement = 47.38%
  Expected agreement = 22.78%
  Kappa = 0.318628 (se = 0.026776)
  95% confidence interval = 0.266147 to 0.371109
  z (for k = 0) = 11.899574
  Two-sided P < 0.0001; one-sided P < 0.0001

Weighted kappa (weighting method is 1 - Abs(i-j)/(k - 1))
  Observed agreement = 80.51%
  Expected agreement = 55.81%
  Kappa = 0.558953 (se = 0.038019)
  95% confidence interval = 0.484438 to 0.633469
  z (for kw = 0) = 14.701958
  Two-sided P < 0.0001; one-sided P < 0.0001

Disagreement over any category and asymmetry of disagreement (2 raters):

  Marginal homogeneity (Maxwell): chi-square = 73.013451, df = 4,
  P < 0.0001
  Symmetry (generalised McNemar): chi-square = 79.076091, df = 10,
  P < 0.0001

Any comments?

Iain Buchan
Cambridge University Medical Informatics Unit
[log in to unmask]
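All of the figures Iain quotes can be reproduced directly from the
table. A minimal sketch in Python (assuming numpy; an illustration of
the standard formulae, not the software Iain used):

import numpy as np

# MAST (rows) x RAST (columns) counts from Altman (1991).
n = np.array([[86,  3, 14,  0,  2],
              [26,  0, 10,  4,  0],
              [20,  2, 22,  4,  1],
              [11,  1, 37, 16, 14],
              [ 3,  0, 15, 24, 48]], dtype=float)
N = n.sum()
k = n.shape[0]
p = n / N
r, c = p.sum(axis=1), p.sum(axis=0)           # marginal proportions

# Unweighted kappa: (observed - expected agreement) / (1 - expected).
po, pe = np.trace(p), (r * c).sum()
kappa = (po - pe) / (1 - pe)                  # 0.318628

# Null-hypothesis SE (Fleiss, Cohen & Everitt's formula), which gives
# the quoted se = 0.026776 and z = kappa / se0 = 11.90.
se0 = np.sqrt((pe + pe**2 - (r * c * (r + c)).sum()) / ((1 - pe)**2 * N))

# Weighted kappa with linear weights w_ij = 1 - |i - j| / (k - 1).
i, j = np.indices((k, k))
w = 1 - abs(i - j) / (k - 1)
pow_, pew = (w * p).sum(), (w * np.outer(r, c)).sum()
kappa_w = (pow_ - pew) / (1 - pew)            # 0.558953

# Maxwell's (Stuart-Maxwell) test of marginal homogeneity, df = k - 1:
# chi2 = d' S^-1 d on the marginal differences for k - 1 categories.
d = (n.sum(axis=1) - n.sum(axis=0))[:-1]      # drop one category
S = -(n + n.T)
np.fill_diagonal(S, n.sum(axis=1) + n.sum(axis=0) - 2 * np.diag(n))
chi2_mh = d @ np.linalg.solve(S[:-1, :-1], d) # 73.013451

# Generalised McNemar (Bowker) test of symmetry, df = k(k-1)/2.
iu = np.triu_indices(k, 1)
num, den = (n[iu] - n.T[iu]) ** 2, n[iu] + n.T[iu]
chi2_sym = (num[den > 0] / den[den > 0]).sum()  # 79.076091

print(f"kappa = {kappa:.6f}, se0 = {se0:.6f}, z = {kappa / se0:.2f}")
print(f"weighted kappa = {kappa_w:.6f}")
print(f"Maxwell chi2 = {chi2_mh:.6f} (df = {k - 1})")
print(f"Bowker chi2 = {chi2_sym:.6f} (df = {k * (k - 1) // 2})")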
You could try:

Walter SD, Eliasziw M, Donner A. Sample size and optimal designs for
reliability studies. Statistics in Medicine 1998;17:101-10.

It is quite easy to turn the formula in the paper into an Excel
spreadsheet.

Steff

> Dear allstaters,
>
> I am looking for a good reference discussing the sample size/power
> calculation for testing the kappa in the case of a two by two
> agreement table.
>
> Thanks in advance
>
> Thierry

---------------------------------------------------
Dr. Stephanie C. Lewis
Medical Statistician
Bramwell Dott Building
Department of Clinical Neurosciences
Western General Hospital
Crewe Road
EDINBURGH EH4 2XU, UK
Tel: +44 (0) 131 537 2932
Fax: +44 (0) 131 332 5150
Email: [log in to unmask]

I'm sorry that I'm unable to help you with this, but I would be very
interested in a summary of the responses that you receive. Thanks in
advance.

Louise Hiller

> -----Original Message-----
> From: Thierry Gorlia [mailto:[log in to unmask]]
> Sent: 27 September 2000 10:07
> To: [log in to unmask]
> Subject: KAPPA !
>
> Dear allstaters,
>
> I am looking for a good reference discussing the sample size/power
> calculation for testing the kappa in the case of a two by two
> agreement table.
>
> Thanks in advance
>
> Thierry

Hi there Thierry,

A good reference is 'An Introduction to Categorical Data Analysis' by
Alan Agresti. Published by Wiley, 1996, page 246. (Easy to follow.)

Regards, Judi.

At 11:07 27/09/2000 +0200, you wrote:
> Dear allstaters,
>
> I am looking for a good reference discussing the sample size/power
> calculation for testing the kappa in the case of a two by two
> agreement table.
>
> Thanks in advance
>
> Thierry

Hi! I haven't dealt with kappa since my MSc dissertation, but I seem to
remember that these were very useful. If they don't answer your
question, they may have useful references.

Brennan P, Silman A. Statistical methods for assessing observer
variability in clinical measures. British Medical Journal
1992;304:1491-1494.

Altman D. Practical Statistics for Medical Research.

I hope this helps - Miguel

On 27 Sep 2000, at 11:07, Thierry Gorlia wrote:
> I am looking for a good reference discussing the sample size/power
> calculation for testing the kappa in the case of a two by two
> agreement table.
>
> Thanks in advance

Dear Thierry,

It would rarely be sensible to test kappa, as there is seldom any
serious doubt about whether it is greater than zero; it is more
important to be able to estimate it and get a confidence interval for
it. For a 2 by 2 table, there is an excellent method by Donner (1992)
which can be programmed in closed form. This really (Newcombe 1996)
uses the symmetrised version of kappa, which was originally formulated
by Scott (1955) and known as pi - though it's always referred to as
kappa - and is arguably (Zwick 1988) much better anyway than the Cohen
(1960) unsymmetrised kappa.

References:

Cohen J. A coefficient of agreement for nominal scales. Educational and
Psychological Measurement 1960;20:37-46.
Donner A, Eliasziw M. A goodness-of-fit approach to inference
procedures for the kappa statistic: confidence interval construction,
significance testing and sample size estimation. Statistics in Medicine
1992;11:1511-1519.
Newcombe RG. The relationship between chi-square statistics from
matched and unmatched analyses. Journal of Clinical Epidemiology
1996;49:1325.
Scott WA. Reliability of content analysis: the case of nominal scale
coding. Public Opinion Quarterly 1955;19:321-325.
Zwick R. Another look at interrater agreement. Psychological Bulletin
1988;103:374-378.

Hope this helps.

Robert Newcombe.
..........................................
Robert G. Newcombe, PhD, CStat, Hon MFPHM
Senior Lecturer in Medical Statistics
University of Wales College of Medicine
Heath Park, Cardiff CF14 4XN, UK
Phone 029 2074 2329 or 2311
Fax 029 2074 3664
Email [log in to unmask]
Macros for good methods for confidence intervals for proportions and
their differences available at http://www.uwcm.ac.uk/uwcm/ms/Robert.html
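To make the kappa/pi distinction Robert raises concrete, here is a
small Python sketch contrasting the two chance corrections: Cohen's
kappa computes chance agreement from each rater's own marginals, while
Scott's pi uses the pooled (averaged) marginals. The 2x2 counts are
invented, chosen only so that the two raters' marginals differ.

import numpy as np

n = np.array([[40.0, 10.0],
              [ 5.0, 45.0]])            # rows = rater A, cols = rater B
N = n.sum()
po = np.trace(n) / N                    # observed agreement
r, c = n.sum(axis=1) / N, n.sum(axis=0) / N

pe_kappa = (r * c).sum()                # chance agreement, per-rater marginals
pe_pi = (((r + c) / 2) ** 2).sum()      # chance agreement, pooled marginals

kappa = (po - pe_kappa) / (1 - pe_kappa)
pi = (po - pe_pi) / (1 - pe_pi)
print(f"kappa = {kappa:.4f}, pi = {pi:.4f}")  # pi <= kappa here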
Fleiss JL (1981) Statistical Methods for Rates and Proportions has a
formula for the standard error.

----------
From: Thierry Gorlia <[log in to unmask]>
To: [log in to unmask]
Subject: KAPPA !
Date: Wednesday, September 27, 2000 11:07 AM

Dear allstaters,

I am looking for a good reference discussing the sample size/power
calculation for testing the kappa in the case of a two by two agreement
table.

Thanks in advance

Thierry

Hi,

I found these two papers quite useful:

1. For the two-rater kappa: Flack VF, Afifi AA, Lachenbruch PA. Sample
size determinations for the two rater kappa statistic. Psychometrika,
Vol. 53, No. 3, 321-325.
2. For many raters: Fleiss JL. Measuring nominal scale agreement among
many raters. Psychological Bulletin 1971;76:378-382.

Hope this helps,

Arier
*****************************
Arier Lee
Biostatistician
Clinical Trials Research Unit
University of Auckland
New Zealand
*****************************

Dear Thierry,

The NQUERY Advisor 2.0 software I have calculates sample size for the
kappa coefficient (as well as for a lot of other tests and CIs). In the
manual there is a reference to:

Kraemer HC (1989). 2x2 kappa coefficients: measures of agreement or
association. Biometrics 45:269-287.

I have not actually read this paper.

Hope it helps

Roberto
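The replies above point to several proper sample-size methods (Flack et
al.; Donner & Eliasziw 1992; Walter et al. 1998; NQUERY). For rough
orientation only, here is a back-of-envelope Python sketch based on a
two-sided z test of kappa = 0, using the Fleiss-Cohen-Everitt null
variance of kappa. It crudely assumes the same variance holds under the
alternative; it is not the method of any of the papers cited, and the
prevalences and kappa value in the example are invented.

from math import ceil
from statistics import NormalDist

def n_for_kappa(kappa1, p1, p2, alpha=0.05, power=0.80):
    """Crude sample size to detect kappa = kappa1 against kappa = 0
    in a 2x2 table, where p1 and p2 are the assumed proportions of
    'positive' ratings for the two raters."""
    q1, q2 = 1 - p1, 1 - p2
    pe = p1 * p2 + q1 * q2                       # chance agreement
    # Large-sample null variance of kappa-hat, times N (the same
    # formula that gives the se/z quoted in Iain's example above):
    v0 = (pe + pe**2 - (p1 * p2 * (p1 + p2)
                        + q1 * q2 * (q1 + q2))) / (1 - pe)**2
    za = NormalDist().inv_cdf(1 - alpha / 2)
    zb = NormalDist().inv_cdf(power)
    return ceil((za + zb)**2 * v0 / kappa1**2)

# e.g. detect kappa = 0.4 with raters rating 30% and 40% positive:
print(n_for_kappa(0.4, 0.3, 0.4))    # about 47 subjects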



