Hi Anna
There are a lot of different kinds of ICC - can you give a bit more
explanation of what you did? (e.g. Cronbach's alpha is one kind, but
another is based on sums of squares in an ANOVA.) Could you also just
calculate correlations and/or look at scatterplots?
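To illustrate what I mean, here's a minimal NumPy sketch (using made-up ratings, not your data - the item count and rater count are just assumptions for the example) showing how Cronbach's alpha and an ANOVA-based single-rater ICC(3,1) are computed from the same ratings matrix and can give different numbers:

```python
# Minimal sketch (hypothetical data, NOT Anna's actual analysis):
# contrast Cronbach's alpha with an ANOVA-based ICC(3,1).
import numpy as np

rng = np.random.default_rng(0)
# hypothetical data: 33 items rated by 5 participants on a 0-10 scale
items, raters = 33, 5
true = rng.uniform(0, 10, size=(items, 1))
ratings = np.clip(true + rng.normal(0, 1.5, size=(items, raters)), 0, 10)

n, k = ratings.shape
grand = ratings.mean()
row_means = ratings.mean(axis=1, keepdims=True)   # per-item means
col_means = ratings.mean(axis=0, keepdims=True)   # per-rater means

# Two-way ANOVA mean squares (items as rows, raters as columns)
ms_rows = k * ((row_means - grand) ** 2).sum() / (n - 1)
ss_err = ((ratings - row_means - col_means + grand) ** 2).sum()
ms_err = ss_err / ((n - 1) * (k - 1))

# ICC(3,1): consistency of a single rater's scores, raters treated as fixed
icc31 = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Cronbach's alpha: reliability of the k-rater average (equals ICC(3,k))
alpha = (k / (k - 1)) * (1 - ratings.var(axis=0, ddof=1).sum()
                         / ratings.sum(axis=1).var(ddof=1))
print(f"ICC(3,1) = {icc31:.3f}, Cronbach's alpha = {alpha:.3f}")
```

Note the average-rater alpha will generally come out higher than the single-rater ICC(3,1), which is one reason it matters which one was reported.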
Jeremy
On 30 November 2012 02:26, Anna Chisholm
<[log in to unmask]> wrote:
> Hi all,
>
> I have used one-sample t-tests to compare, against a value of zero, participants' confidence ratings (out of 10) about their allocation of 33 different items to at least one of 8 different categories (i.e. how confident are you that your chosen allocation is correct?). This is in line with discriminant content validity methods to validate an educational booklet.
>
> I found significant differences for 7 of the 8 categories - i.e. participants mapped items onto the 'correct'/corresponding categories as I had hoped (means were around 4 or 5 out of 10).
>
> However, I have also conducted intraclass correlations to investigate agreement between participants for each category. For all categories with significant t-test results, the corresponding correlations were positive and high, showing good agreement - apart from one, which was negative, and I don't know how to interpret this. The t-test result for this category was significant, but the correlation measuring agreement between participants was -0.8. Does this mean participants' scores were significantly different from zero but in the opposite direction (i.e. participants' confidence scores were substantially less than zero), and that agreement on this was high? Or does it mean there was large disagreement between coders, but that scores were still significantly higher than zero? If it helps, the mean for this category was positive (3.55).
>
> Any help with this would be greatly appreciated.
>
> Anna