Dear all,
I have been using an intra-class correlation coefficient to analyse my data, which are on an ordinal scale from 1 to 7. The analysis uses a two-way mixed-effects model measuring overall absolute agreement. I would like to complement the results to date with further results on the level of agreement for each category individually (under the assumption that there are two raters). As I understand from my reading, there are a number of definitions of kappa statistics that allow chance-corrected inter-rater agreement to be assessed for grade A only, say. However, the related calculations appear to assume that there are only two categories (in the above example: 'grade A' or 'other grade'). Collapsing everything else into 'other grade' removes the capacity to assess the extent to which the examiners disagree on the ordinal scale when one examiner assigns grade A but the other does not. I wonder, therefore, if anyone is aware of alternative chance-corrected approaches to assessing agreement between two raters for a single category whereby, whenever the raters disagree, the extent of disagreement is taken into account.
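To make the difficulty concrete, here is a minimal sketch in Python (not part of my analysis; the rating vectors are hypothetical and grade 1 plays the role of 'grade A') contrasting the collapsed two-category kappa for one grade with a linearly weighted kappa over the full scale. The former ignores how far apart the raters are when they disagree; the latter respects the ordinal distance but is an overall statistic rather than a per-category one.

import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical ratings from two examiners on a 1-7 ordinal scale.
rater1 = np.array([1, 1, 2, 3, 3, 4, 5, 6, 7, 2, 1, 4])
rater2 = np.array([1, 2, 2, 3, 4, 4, 5, 7, 7, 3, 5, 4])

# Category-specific kappa for grade 1 only: collapse to 'grade 1' vs 'other'.
# This is the two-category approach described above; a 1-vs-2 disagreement
# counts exactly the same as a 1-vs-7 disagreement.
kappa_grade1 = cohen_kappa_score(rater1 == 1, rater2 == 1)

# Linearly weighted kappa over the full scale: disagreements are penalised
# in proportion to their distance on the ordinal scale, but the result is
# a single overall statistic, not one per category.
kappa_weighted = cohen_kappa_score(rater1, rater2, weights="linear")

print(f"Unweighted kappa, grade 1 vs other: {kappa_grade1:.3f}")
print(f"Linearly weighted kappa, full scale: {kappa_weighted:.3f}")

What I am after is something that combines the two: a chance-corrected, per-category measure that still weights disagreements by their ordinal distance.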
I look forward to being educated!
Best wishes
Margaret