Thank you again to those who responded to a colleague's kappa
questions. I've restated them below and placed the responses beneath.
Brett Larive
PART I
I have not dealt with kappa for a while, so refresh my memory. We have
2 by 2 agreement data:
YY 4217
YN 54
NY 20
NN 1
So the percentage of same response is high (4218/4292 ~ 98.3%), but
kappa is lousy (0.0194). This is presumably because only one response
was No/No (NN), which is rare compared to the Yes/No responses.
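For reference, a minimal Python sketch reproducing both figures from
the four counts above (standard Cohen's kappa; nothing beyond the
counts is assumed):

    # Reproduce percent agreement and Cohen's kappa from the 2x2 counts.
    yy, yn, ny, nn = 4217, 54, 20, 1
    n = yy + yn + ny + nn                   # 4292 paired responses
    p_obs = (yy + nn) / n                   # observed agreement ~ 0.983
    # Chance agreement from each question's marginal yes/no rates.
    p_yes1, p_yes2 = (yy + yn) / n, (yy + ny) / n
    p_exp = p_yes1 * p_yes2 + (1 - p_yes1) * (1 - p_yes2)
    kappa = (p_obs - p_exp) / (1 - p_exp)   # ~ 0.0194, as quoted above
    print(f"agreement = {p_obs:.3f}, kappa = {kappa:.4f}")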
So my feeling is to report the percentage of same response and the
kappa, and just describe why the kappa is so low.
Any other comments or insights into this wonderful 2 by 2 table??
PART II
I guess the reason my kappa question seemed odd is the nature of what
we're using kappa for, which is probably questionable. We have an
on-line patient registry with several thousand patients. Now that the
data have accumulated, the investigators (who at first thought no one
would participate) now want to assess the "quality" and "validity" of
the data. We have been scratching our heads about how to deal with
it, and one way was assessing the internal consistency of the data.
The 2 by 2 table I posed before was basically the "Did the patient
have the disease" and "Did the patient get treated for the disease"
questions. So getting a lot of Yes/Yes was good, even though the
kappa was poor. We have other combinations of questions where we
would expect a better distribution of Yes/No results.
Do any of you have other thoughts about how to deal with this question
of validity of an existing database?
THANKS AGAIN!!!
From: "Slattery" <[log in to unmask]>
To: "Brett Larive" <[log in to unmask]>
Subject: RE: QUERY: Kappa questions
Date: Sun, 15 Aug 2004 08:03:41 +0100
My preference would be to forget about Kappa, which rarely if ever
provides any information you can act on, and go back to check on
some or all of the very small number of 'no' answers. Were they
correctly coded? Was there a good reason for them?
Regards
Jim Slattery
Date: Mon, 16 Aug 2004 09:27:06 +0100
From: "Robert Newcombe" <[log in to unmask]>
To: <[log in to unmask]>
Subject: Re: QUERY: Kappa questions
Dear Brett,
I think the bottom line here is that there are numerous things that
can be calculated from a 2 by 2 table, and kappa isn't the most
relevant one for the context you describe. Kappa is for agreement
between what purport to be two measures of the same thing. The
context you describe is quite different. Better to report the
proportions wrongly treated and missed, either as proportions of
the total 4292 or as proportions of relevant row or column totals.
Then get confidence intervals using the first block of the Excel
spreadsheet available at my website:
http://www.cardiff.ac.uk/medicine/epidemiology_statistics/research/statistics/proportions.htm
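If a quick check without the spreadsheet is wanted, here is a Python
sketch of the Wilson score interval for a single proportion (a
standard method in line with the recommendation above; this sketch is
illustrative, not taken from the spreadsheet itself):

    from math import sqrt

    def wilson_ci(r, n, z=1.96):
        """95% Wilson score interval for a binomial proportion r/n."""
        p = r / n
        centre = (p + z * z / (2 * n)) / (1 + z * z / n)
        half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / (1 + z * z / n)
        return centre - half, centre + half

    # Proportions missed (disease, untreated) and wrongly treated
    # (no disease, treated), out of all 4292 records.
    for label, r in [("missed", 54), ("wrongly treated", 20)]:
        lo, hi = wilson_ci(r, 4292)
        print(f"{label}: {r}/4292 = {r / 4292:.4f}, 95% CI ({lo:.4f}, {hi:.4f})")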
My first instinct was slightly different - before reading Part II,
my reaction was that it might be more relevant to look at marginal
heterogeneity, i.e. (54-20)/4292 or really (55-21)/4292 - a CI for
this can be got from section 3 of my spreadsheet. But, on reading
Part II, I realised that this is just saying whether the disease
is being under- or over-treated overall, which isn't entirely
relevant. We would get the same if we had had 4217 NN and 1 YY,
which would have been a totally different situation. So it's
better to follow what I said in the first paragraph above.
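For completeness, a quick sketch of that marginal difference with a
rough large-sample (Wald) interval; section 3 of the spreadsheet
implements a better method, so treat this only as an approximate check:

    from math import sqrt

    # Marginal difference (55-21)/4292 = (54-20)/4292, i.e. (yn-ny)/n,
    # with the standard large-sample variance for paired proportions.
    yn, ny, n = 54, 20, 4292
    d = (yn - ny) / n
    se = sqrt(yn + ny - (yn - ny) ** 2 / n) / n
    print(f"difference = {d:.4f}, 95% CI ({d - 1.96 * se:.4f}, {d + 1.96 * se:.4f})")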
Hope this helps.
Robert G. Newcombe PhD CStat FFPH
Reader in Medical Statistics
University of Wales College of Medicine
Heath Park
Cardiff CF14 4XN
Phone 029 2074 2329
Fax 029 2074 2898
http://www.uwcm.ac.uk/study/medicine/epidemiology_statistics/research/statistics/newcombe.htm
Date: Mon, 16 Aug 2004 13:25:58 +0100
From: "Bland, M." <[log in to unmask]>
To: Brett Larive <[log in to unmask]>
Subject: Re: QUERY: Kappa questions
1. Your relationship is not significant:
           |          col
       row |         1          2 |     Total
-----------+----------------------+----------
         1 |     4,217         54 |     4,271
         2 |        20          1 |        21
-----------+----------------------+----------
     Total |     4,237         55 |     4,292
Fisher's exact = 0.238
1-sided Fisher's exact = 0.238
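The same figures can be checked in Python with scipy (assuming it is
available; this is a sketch, not the original Stata run):

    from scipy.stats import fisher_exact

    # Fisher's exact test on the same 2x2 table; the one-sided value
    # corresponds here to alternative="greater" (odds ratio > 1).
    table = [[4217, 54], [20, 1]]
    _, p_two = fisher_exact(table, alternative="two-sided")
    _, p_one = fisher_exact(table, alternative="greater")
    print(f"Fisher's exact: two-sided p = {p_two:.3f}, one-sided p = {p_one:.3f}")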
Kappa is almost always low when the prevalence is low. See the
attached figure, which shows what happens when we have the same
probability of being correct for all subjects. As the proportion
of "yes" assessments moves away from 0.5, the expected kappa gets
smaller, plummeting after 0.1 or 0.9. So the combination of the
two means a very low kappa is guaranteed.
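A small sketch of the behaviour the figure shows, assuming two raters
who are each correct with the same fixed probability, independently
given the true status (q = 0.95 below is an arbitrary illustrative
value):

    # Expected kappa as the true prevalence of "yes" varies, for two
    # independent raters each correct with probability q.
    def expected_kappa(prev, q=0.95):
        p_yy = prev * q**2 + (1 - prev) * (1 - q)**2   # both say yes
        p_yn = q * (1 - q)                             # disagree (symmetric)
        p_nn = prev * (1 - q)**2 + (1 - prev) * q**2   # both say no
        p_obs = p_yy + p_nn                            # expected raw agreement
        p_yes = p_yy + p_yn                            # each rater's marginal yes rate
        p_exp = p_yes**2 + (1 - p_yes)**2              # chance agreement
        return (p_obs - p_exp) / (1 - p_exp)

    for prev in (0.5, 0.3, 0.1, 0.05, 0.01):
        print(f"prevalence {prev:.2f}: expected kappa = {expected_kappa(prev):.3f}")

The raw agreement stays at 0.905 throughout, while the expected kappa
falls from about 0.81 at prevalence 0.5 to about 0.14 at 0.01.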
2. It is not really a question of agreement, is it? These are not
the same question, so kappa seems inappropriate.
The problem is that although diagnosis and treatment are independent
within the database, they would not be in a random sample of the
general population. So it all depends on how people got into the
database in the first place.
martin
Date: Mon, 16 Aug 2004 08:10:01 -0500
From: "Zoann Nugent" <[log in to unmask]>
To: <[log in to unmask]>
Subject: Re: QUERY: Kappa questions
No - not a kappa-style problem.
Report the % error in both independently. I am assuming the NN group
is the size you would expect, based on the error frequency of both
questions. You can also report that - which may be an interesting
error.
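One possible reading of this check, sketched in Python; the
interpretation (the expected N/N count if the two questions' N rates
were independent) is my assumption, not spelled out above:

    # Expected number of N/N records under independence of the two
    # questions' N rates, versus the observed count.
    yy, yn, ny, nn = 4217, 54, 20, 1
    n = yy + yn + ny + nn
    q1_n_rate = (ny + nn) / n                  # N rate on "had the disease"
    q2_n_rate = (yn + nn) / n                  # N rate on "was treated"
    expected_nn = n * q1_n_rate * q2_n_rate    # ~ 0.27 under independence
    print(f"expected N/N count: {expected_nn:.2f} (observed {nn})")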
From: Robert Grant <[log in to unmask]>
To: "'Brett Larive'" <[log in to unmask]>
Subject: RE: QUERY: Kappa questions
Date: Wed, 18 Aug 2004 11:59:16 +0100
Dear Brett
The kappa test is very sensitive to prevalence, which is the problem
here. There are some supposed solutions to this, but I don't have
experience of them. An internet search on kappa and prevalence will
reveal some of the literature. Chi-squared and Fisher's exact tests
are also invalid because of the one very small cell value. Given the
details in Part 2 of the question, I would be inclined not to test
validity in this way. The database can have patients with disease X
who are not treated for X and still be valid (for example, if they
are terminally ill or have refused treatment). I would suggest just
presenting the percentages in each category and then giving as much
supporting clinical information as possible to back up the reasoning
that went into the decisions to treat or not to treat.
Good luck!
Rob Grant
From: "Susan Mallett" <[log in to unmask]>
To: [log in to unmask]
Subject: Re: QUERY: Kappa questions
Date: Wed, 18 Aug 2004 15:59:45 +0100 (GMT/BST)
Dear Brett,
Have you thought about using a diagnostic test analysis, e.g.
sensitivity and specificity, for comparing two tests?
Your gold standard is your disease positive.
People treated is maybe a surrogate for the diagnostic test you are
testing against the gold standard (does that seem appropriate in your
case?).
This would put your sensitivity as high, 98.7%, as most of the
disease positives are treated, but specificity as low, 4.7%, as most
of your disease negatives are also treated.
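A minimal sketch reproducing these figures from the table, following
the mapping above (gold standard = had the disease, test = was
treated):

    # Sensitivity/specificity reading of the 2x2 table.
    yy, yn, ny, nn = 4217, 54, 20, 1
    sensitivity = yy / (yy + yn)   # 4217/4271 ~ 0.987: treated among disease positives
    specificity = nn / (ny + nn)   # 1/21 ~ 0.048: untreated among disease negatives
    print(f"sensitivity = {sensitivity:.3f}, specificity = {specificity:.3f}")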
I'd be interested in being copied in on what other people think.
Best wishes,
Sue
Date: Thu, 19 Aug 2004 09:49:07 +0200
From: Werner Vach <[log in to unmask]>
To: Brett Larive <[log in to unmask]>
Subject: Re: QUERY: Kappa questions
Dear Brett,
I think this is a very good example of why kappa is more useful than
the agreement rate. Of course, if Y is nearly the standard response,
then it is not difficult to agree on Y. What is difficult is to agree
on N, and here there seems to be almost no agreement: of the 75 cases
where at least one response was N, there was only one case of
agreement. And this is perfectly shown by a very low kappa...
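One way to quantify this is the proportion of specific negative
agreement (a standard companion index; the framing is mine, the
counts are from the table):

    # Among the cases involving at least one N, how often did the two
    # questions agree on N?
    yy, yn, ny, nn = 4217, 54, 20, 1
    cases_with_n = yn + ny + nn              # 75 cases with at least one N
    p_neg = 2 * nn / (2 * nn + yn + ny)      # 2/76 ~ 0.026
    print(f"{cases_with_n} cases with an N; specific negative agreement = {p_neg:.3f}")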
Best
Werner