Trisha Greenhalgh writes:
>I've been trying to collect some up-to-date data on clinical
>disagreement using Cohen's kappa scores. In Sackett et al.'s big
>book, there are some examples that are beginning to look rather
>dated. A Medline search got me several thousand articles with
>'kappa' as a textword but my attempts to refine the search have not
>struck gold so far.
It's not clear exactly what you're looking for. If you want a good explanation
of what kappa is and how to compute it, there are several good textbooks. One
of my favorites is:
Norman, Geoffrey R. and Streiner, David L. (1994). Biostatistics: The Bare
Essentials. St. Louis, MO: Mosby-Year Book, Inc. (ISBN 1-55664-369-1)
I don't have the book in front of me so I can't give page numbers.
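In a nutshell, kappa is observed agreement corrected for the agreement you
would expect by chance alone: kappa = (p_o - p_e) / (1 - p_e), where p_o is
the observed proportion of agreement and p_e the chance-expected proportion.
Here's a minimal sketch of the calculation in Python (the counts are made up
for illustration, not taken from any study):

  # Cohen's kappa from a k x k agreement table (rows = rater 1, cols = rater 2)
  import numpy as np

  def cohens_kappa(table):
      table = np.asarray(table, dtype=float)
      n = table.sum()
      p_o = np.trace(table) / n  # observed agreement (diagonal cells)
      p_e = (table.sum(axis=1) * table.sum(axis=0)).sum() / n**2  # chance agreement
      return (p_o - p_e) / (1 - p_e)

  # Hypothetical 2x2 table of yes/no ratings by two raters
  print(cohens_kappa([[40, 10],
                      [ 5, 45]]))  # prints 0.70

Note how high raw agreement (85% here) can shrink considerably once chance
agreement is removed.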
If you want good published research examples of the use of kappa, then you
have to live with the fact that there are thousands of them out there.
Perhaps you could refine the search by limiting it to a specific medical
discipline, such as radiography. Here's a good recent example:
ARTICLE TITLE: Teleradiology for rural hospitals: analysis of a field
study.
ARTICLE SOURCE: J Telemed Telecare (England), 1995, 1(4) p202-8
AUTHOR(S): Franken EA Jr; Berbaum KS; Smith WL; Chang PJ; Owen DA; Bergus
GR
AUTHOR'S ADDRESS: Department of Radiology, University of Iowa College of
Medicine, Iowa City, USA.
MAJOR SUBJECT HEADING(S): Hospitals, Rural [statistics & numerical data];
Teleradiology [statistics & numerical data]
MINOR SUBJECT HEADING(S): Case-Control Studies; Iowa; Prospective Studies;
Radiography [methods] [statistics & numerical data] [standards]; Sensitivity
and Specificity; Teleradiology [methods] [standards]
INDEXING CHECK TAG(S): Comparative Study; Human
PUBLICATION TYPE: JOURNAL ARTICLE
ABSTRACT: We compared the accuracy of a low-cost teleradiology system with
plain film at a small rural hospital. The comparison was a case-control,
paired-comparison study. In total, 377 consecutive cases were read
prospectively by teleradiology and later by independent interpretation of
the plain films. 'Truth' was determined in discrepant cases by further
investigation of available records and images. Sensitivity and specificity
were determined for each modality, and agreement was assessed using the kappa
statistic.
There was 90% agreement between teleradiology and plain film, with no
significant differences. Sensitivities (0.88, 0.89) and specificities (0.98,
0.98) of the two methods were almost identical. McNemar's test indicated no
significant differences in the accuracy of the two modalities. We conclude
that inexpensive teleradiology for small rural hospitals is equivalent to
plain film for radiologists' interpretation.
MEDLINE INDEXING DATE: 199803
ISSN: 1357-633X
LANGUAGE: English
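Incidentally, McNemar's test, which this study used to compare the accuracy of
the two modalities, looks only at the discordant pairs: cases where one
modality was correct and the other was not. A quick sketch (the discordant
counts below are hypothetical, not the study's data):

  # McNemar's chi-square with continuity correction for paired accuracy data
  from scipy.stats import chi2

  def mcnemar(b, c):
      # b = cases correct by modality 1 only, c = correct by modality 2 only
      stat = (abs(b - c) - 1) ** 2 / (b + c)
      return stat, chi2.sf(stat, df=1)  # p-value from chi-square with 1 df

  stat, p = mcnemar(b=12, c=15)  # hypothetical discordant counts
  print(stat, p)  # a large p-value means no detectable accuracy difference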
Here's a cute article dealing with malpractice. That should be a good
attention-getter.
ARTICLE TITLE: Variation in expert opinion in medical malpractice review
[published erratum appears in Anesthesiology 1997 Mar; 86(3):754]
ARTICLE SOURCE: Anesthesiology (United States), Nov 1996, 85(5) p1049-54
AUTHOR(S): Posner KL; Caplan RA; Cheney FW
AUTHOR'S ADDRESS: Department of Anesthesiology, University of Washington
School of Medicine, Seattle 98195-6540, USA.
MAJOR SUBJECT HEADING(S): Anesthesiology [standards]; Expert Testimony
[standards]; Malpractice
MINOR SUBJECT HEADING(S): Insurance Claim Review; Research Design;
Statistics
PUBLICATION TYPE: JOURNAL ARTICLE
ABSTRACT: BACKGROUND: Expert opinion in medical malpractice is a form of
implicit assessment, based on unstated individual opinion. This contrasts
with explicit assessment processes, which are characterized by criteria
specified and stated before the assessment. Although sources of bias that
might hinder the objectivity of expert witnesses have been identified, the
effect of the implicit nature of expert review has not been firmly
established. METHODS: Pairs of anesthesiologist-reviewers independently
assessed the appropriateness of care in anesthesia malpractice claims. With
potential sources of bias eliminated or held constant, the level of
agreement was measured. RESULTS: Thirty anesthesiologists reviewed 103
claims. Reviewers agreed on 62% of claims and disagreed on 38%. They agreed
that care was appropriate in 27% and less than appropriate in 32%.
Chance-corrected levels of agreement were in the poor-to-good range (kappa =
0.37; 95% CI = 0.23 to 0.51). CONCLUSIONS: Divergent opinion stemming from
the implicit nature of expert review may be common among objective medical
experts reviewing malpractice claims.
MEDLINE INDEXING DATE: 199702
ISSN: 0003-3022
LANGUAGE: English
UNIQUE NLM IDENTIFIER: 97074395
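If you want to reproduce a confidence interval like the one above, the usual
large-sample approach is kappa plus or minus 1.96 standard errors, with a
simple textbook approximation for the standard error. A sketch (whether the
authors used exactly this formula is my assumption, and the chance agreement
p_e below is a guessed value, since the abstract doesn't report it):

  # Approximate 95% CI for kappa using the common large-sample standard error
  import math

  def kappa_ci(p_o, p_e, n, z=1.96):
      kappa = (p_o - p_e) / (1 - p_e)
      se = math.sqrt(p_o * (1 - p_o) / (n * (1 - p_e) ** 2))  # approximation
      return kappa, kappa - z * se, kappa + z * se

  # Observed agreement 0.62 on n = 103 claims; p_e = 0.40 is assumed
  print(kappa_ci(p_o=0.62, p_e=0.40, n=103))  # roughly (0.37, 0.21, 0.52)

With those inputs you land close to, though not exactly on, the reported
interval.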
I'm sure you could find a lot more. Good luck!
Steve Simon, Standard Disclaimer.
STATS - Steve's Attempt to Teach Statistics: http://www.cmh.edu/stats