Hi Aisling,
Sorry to disappoint you, but it's (probably) the same.
You could just consider the correlation between them as well.
The ICC is the same as coefficient alpha (or can be; I always get my
S&F ICCs mixed up). However, remember that the longer the test, the
higher the reliability, and what you have is effectively a two-item
test, so it is likely to have lower reliability.
You might look at the Spearman-Brown prediction formula, which will
tell you more about this. The Wikipedia entry isn't bad.
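
In case it helps: the formula says that a test k times as long as one
with reliability r has predicted reliability k*r / (1 + (k - 1)*r).
Here's a quick Python sketch, just for illustration (the function name
and the .5 are made up, not your data):

    def spearman_brown(r, k):
        """Predicted reliability of a test k times as long as one
        with reliability r."""
        return k * r / (1 + (k - 1) * r)

    # With a single-item reliability of .5, the predicted two-item
    # reliability is 2 * .5 / (1 + .5) = .67; to reach .8 you would
    # need a test four times as long (4 * .5 / 2.5 = .8).
    print(spearman_brown(0.5, 2))  # 0.666...
    print(spearman_brown(0.5, 4))  # 0.8

So a modest figure from two raters isn't necessarily alarming.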
Jeremy
2008/12/5 Aisling O'Donnell <[log in to unmask]>:
> Hi all,
> I recently ran a study where participants had to make paper aeroplanes (don't
> ask!), and myself and another rater have both coded these for quality on a 5-
> point scale. I worked out the inter-rater reliability using intra-class
> correlations in SPSS (Case 3 according to Shrout & Fleiss). I had no problem
> doing the analysis but I can't seem to find anywhere a paper that clearly (i.e.,
> without jargon!) explains what level of agreement between raters is
> acceptable. I read that for other reliability statistics, such as Kappa, it should
> be around .7, and I *would* assume it is the same for ICC except I have a
> reason not to want to believe this... basically my intra-class correlation is
> lower than this and I want to know if it is acceptable!
>
> Does anyone have any idea about this?
> Thanks
> Aisling
>
--
Jeremy Miles
Learning statistics blog: www.jeremymiles.co.uk/learningstats
Psychology Research Methods Wiki: www.researchmethodsinpsychology.com