Hi List-Members,
This query falls somewhere between qualitative and quantitative issues.
I hope it's appropriate for this list; if not, I'd appreciate it if you
could direct me to the appropriate forum.
My data are categorical ratings from two raters, who each rated 56
respondents on 18 items. Respondents had generated perceived
outcomes of violent or non-violent behaviors in intimate
relationships (e.g., power, fear, resolution, conflict avoidance).
The number of outcomes per respondent varies, although few
respondents generated more than 5. These outcomes were then
rated by the two raters using 40 outcome categories.
1. I thought of doing a generalizability analysis to assess the
reliability of the raters and the respective influence of item,
rater, and respondent. I am familiar with the Li & Lautenschlager
(1997) article on applying generalizability theory to categorical
data, but I need a more practical description of how to do this. Are
you aware of any publication on this issue?
2. As I understand it, computing the various "error" components will
involve different matrices, such as rater by item or item by category.
Is it possible to generate such matrices within NUD*IST4?
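Independent of whatever NUD*IST4 can export, the kind of matrix I mean could be built from the raw codings in a few lines. This is only a sketch with made-up data and hypothetical respondent/item/category labels, cross-tabulating rater 1's category against rater 2's over the units both raters coded:

```python
from collections import Counter

# Hypothetical codings: (respondent, item) -> category assigned by that rater.
# Real data would come from the NUD*IST4 coding export.
rater1 = {("r01", "item1"): "power", ("r01", "item2"): "fear",
          ("r02", "item1"): "power"}
rater2 = {("r01", "item1"): "power", ("r01", "item2"): "resolution",
          ("r02", "item1"): "fear"}

def agreement_matrix(codes_a, codes_b, categories):
    """Cross-tabulate rater A's categories (rows) against rater B's
    (columns) over all units that both raters coded."""
    counts = Counter((codes_a[u], codes_b[u])
                     for u in codes_a.keys() & codes_b.keys())
    return [[counts[(ca, cb)] for cb in categories] for ca in categories]

cats = ["power", "fear", "resolution"]
m = agreement_matrix(rater1, rater2, cats)
# m[0][0] counts units both raters coded "power", and so on.
```

The same function would give rater-by-item or item-by-category tables by changing what is used as the row and column keys.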
3. So far I have not found a source that describes how the interrater
agreement index kappa is calculated when each respondent is
categorized on a variable number of outcomes (I have looked
primarily at Psychological Bulletin). Are you aware of any
publication on this issue?
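For concreteness, the standard two-rater computation I am starting from is the textbook Cohen's kappa on a fixed agreement matrix; the open question above is how to adapt it when the number of outcomes per respondent varies. A sketch of the standard case:

```python
def cohen_kappa(matrix):
    """Cohen's kappa from a square agreement matrix
    (rows: rater 1's categories, columns: rater 2's).
    Standard fixed-design formula only."""
    n = sum(sum(row) for row in matrix)          # total units coded
    observed = sum(matrix[i][i] for i in range(len(matrix))) / n
    row_tot = [sum(row) for row in matrix]       # rater 1 marginals
    col_tot = [sum(col) for col in zip(*matrix)] # rater 2 marginals
    expected = sum(r * c for r, c in zip(row_tot, col_tot)) / n ** 2
    return (observed - expected) / (1 - expected)

# Toy 2x2 example: 35 of 50 units agreed, chance agreement 0.5.
print(cohen_kappa([[20, 5], [10, 15]]))  # -> 0.4
```

What this does not settle is the unit of analysis when respondents contribute different numbers of outcomes, which is exactly what I am asking about.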
Please reply to [log in to unmask]. Thank you!
Renate
Renate Klein, Ph.D.
University of Maine, [log in to unmask]