Hi all,
I could use a suggestion on which test to use to establish inter-rater reliability/agreement in the following scenario:
We have been taking measures of behaviour from video observations. There are 12 types of behaviour overall, so a participant can fall into any number of categories any number of times. Because participants won't perform all behaviours equally, the design is also considered free-marginal.
The problem is that the rows won't add up to the number of raters, so the standard formulas won't work. Is there a way around this?
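One workaround I've been sketching (just an illustration, not a settled answer): recode each of the 12 behaviours as its own binary present/absent judgement per rater per video segment. Each row of the resulting count table then sums to the number of raters by construction, and Randolph's free-marginal multirater kappa can be computed per behaviour. All numbers below are made up for illustration.

```python
def free_marginal_kappa(counts, n_raters):
    """Randolph's free-marginal multirater kappa.

    counts: list of rows, one per video segment; each row holds the
    number of raters assigning each category (here 2: present/absent),
    so every row sums to n_raters by construction.
    """
    k = len(counts[0])   # number of categories
    n = n_raters
    N = len(counts)
    # Observed agreement, Fleiss-style, averaged over segments.
    po = sum((sum(c * c for c in row) - n) / (n * (n - 1))
             for row in counts) / N
    pe = 1.0 / k         # free-marginal chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical data: 3 raters, 5 segments, one behaviour,
# rows are [raters who saw it, raters who didn't].
counts = [[3, 0], [2, 1], [3, 0], [0, 3], [1, 2]]
kappa = free_marginal_kappa(counts, 3)  # roughly 0.467
```

Repeating this per behaviour gives 12 kappas, which could then be reported individually or averaged. Whether that's statistically defensible for this design is exactly what I'd like advice on.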