Dear all,

Even in my limited experience I have seen Kappa used in several
different ways when measuring inter-observer variation in the
context of radiology.  I would be extremely grateful if somebody
could clarify the following few points for me (a minimal sketch
of the calculations I have in mind is included after the
questions for reference):

(a) Usually, Kappa is used to measure reliability in the
interpretation of films by two radiologists, i.e. independent
individuals within the same profession.  Would it be possible
to use Kappa to measure reliability between two independent
individuals from different professional groups (e.g.
radiographer vs radiologist)?

(b) Kappa is used to measure reliability between two individuals
who each report on the same batch of films.  Would it be possible
to use Kappa to measure consistency between two independent groups
of people from different professions who share the burden of
reporting on all the films?

(c) Finally, if an observer's performance is compared with a "gold"
standard (e.g. a consensus report), then this is a measure of
accuracy, and sensitivity (Sn) and specificity (Sp) are calculated,
rather than a measure of reliability.  Is it possible to use Kappa
to measure reliability between the gold standard and an independent
observer (i.e. consistent accuracy), or is this not possible because
you are changing your assumption about whether the gold standard
provides the "true" diagnosis?
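
For concreteness, here is a minimal sketch (in Python) of the two
calculations I am contrasting above: the standard two-rater Kappa,
and Sn/Sp against a gold standard.  The ten film ratings and the
"normal"/"abnormal" labels are purely hypothetical:

from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(ratings_a)
    # Observed agreement: proportion of films the two raters label identically.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement: from each rater's marginal category frequencies.
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n)
              for c in set(freq_a) | set(freq_b))
    return (p_o - p_e) / (1 - p_e)

def sn_sp(ratings, gold, positive):
    """Sensitivity and specificity of one observer against a gold standard."""
    tp = sum(r == positive and g == positive for r, g in zip(ratings, gold))
    fn = sum(r != positive and g == positive for r, g in zip(ratings, gold))
    tn = sum(r != positive and g != positive for r, g in zip(ratings, gold))
    fp = sum(r == positive and g != positive for r, g in zip(ratings, gold))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical "normal"/"abnormal" reports on ten films.
rater1 = ["normal", "abnormal", "normal", "normal", "abnormal",
          "normal", "abnormal", "normal", "normal", "normal"]
rater2 = ["normal", "abnormal", "abnormal", "normal", "abnormal",
          "normal", "normal", "normal", "normal", "abnormal"]
print("Kappa:", round(cohens_kappa(rater1, rater2), 3))    # 0.286
# Treating rater2's reports as the gold standard, for illustration only:
sn, sp = sn_sp(rater1, rater2, positive="abnormal")
print("Sn:", round(sn, 2), "Sp:", round(sp, 2))            # 0.5 0.83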
 
Best regards, 
Stephen Brealey 
Department of Health Sciences
University of York

