----- Original Message -----
From: Emily Knight <[log in to unmask]>
To: <[log in to unmask]>
Sent: Thursday, September 21, 2000 3:09 PM
Subject: Inter-rater reliability
> Could anyone tell me about statistics which can be used to evaluate
> inter-rater reliability?
> Health workers were asked to use a new tool to rate three different case
> studies. What we want to know is whether within groups (nurses, dieticians
> etc) the ratings were similar. I have heard of Cohen's Kappa but I believe
> this is only for two raters, and we have 2-10 in each group.
> Any suggestions appreciated.
> Thanks, Emily
Hello Emily
On my presentations web page (linked from the homepage address given
below), under the heading
Effectiveness Set #3,
you can download the document rater.pdf (or rater.zip).
It outlines the major concepts, formulae, and logic for some common rater
reliability studies, and includes worked examples using STATISTICA and
SPSS.
It might be of help.
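As an aside on the "more than two raters" point: one common generalization of Cohen's kappa to multiple raters is Fleiss' kappa. The messages above do not name a specific statistic, so treat the sketch below as one illustrative option (with made-up rating data), not the method covered in rater.pdf. Note also that Fleiss' kappa assumes the same number of raters per subject; with 2-10 raters per group, something like Krippendorff's alpha or an intraclass correlation may be more appropriate.

```python
# Minimal sketch of Fleiss' kappa for multiple raters (pure Python).
# Assumes every subject (case study) is rated by the same number of raters.

def fleiss_kappa(ratings):
    """ratings: one row per subject; each row holds the count of raters
    who assigned that subject to each category (every row sums to n)."""
    N = len(ratings)        # number of subjects
    k = len(ratings[0])     # number of categories
    n = sum(ratings[0])     # raters per subject (assumed constant)

    # Observed agreement per subject, averaged over subjects
    P_bar = sum(
        (sum(c * c for c in row) - n) / (n * (n - 1)) for row in ratings
    ) / N

    # Chance agreement from the marginal category proportions
    p = [sum(row[j] for row in ratings) / (N * n) for j in range(k)]
    P_e = sum(pj * pj for pj in p)

    return (P_bar - P_e) / (1 - P_e)

# Hypothetical example: 3 case studies, 5 raters, 3 rating categories
data = [
    [5, 0, 0],   # all 5 raters chose category 1
    [1, 4, 0],
    [0, 2, 3],
]
print(round(fleiss_kappa(data), 3))   # -> 0.479
```

A kappa of 1.0 indicates perfect agreement, 0 indicates agreement no better than chance; conventional interpretive bands (e.g. Landis and Koch) vary by field.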
Regards .. Paul
_____________________________________________________________________
Paul Barrett Direct Tel: (44)-1555-841343
email: [log in to unmask] Hospital Tel: (44)-1555-840293
CS2000: [log in to unmask] Fax: (44)-1555-840024
http://www.liv.ac.uk/~pbarrett/paulhome.htm
Chief Scientist, The State Hospital, Carstairs, Scotland, ML11 8RP, UK
Senior Research Fellow, Clinical Psychology, Liverpool University, UK
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%