Thank you!
On Sun, 19 Feb 2006 22:03:40 +1100
Lyn Richards <[log in to unmask]> wrote:
> Melissa, if you still plan to enter your data, it will probably be into
> NVivo 7, since the upgrade comes out at the end of this month.
> NVivo 7 has a coder comparison report that gives full details.
> Researchers may use it for comparing documents to see how different
> their content is - or for the purposes you're asking about.
>
> To conduct a reliability check, you import two copies of the same
> document to your project. Two researchers can independently code these.
> The report will list all the coding of either document, and the amount
> of the document coded there. (So if you and I coded identical documents,
> with, of course, different names, we can now compare our coding.) The
> report will fully list this detail for each node. So you can use this
> report as a basis for discussion of which nodes you're using but I'm
> not, or how differently we are coding. (The same process can be used to
> check reliability of one researcher's coding over time. Import another
> copy of a document you coded early in the project, and code the copy.
> Now proceed to the coder reliability report and its interpretation as
> above.)
>
> Coder reliability checks are increasingly required of qualitative
> research, and the other list messages have offered great articles on the
> subject. My added advice is - think it out qualitatively! We don't
> expect (or usually seek) identical coding in qualitative research though
> we do need to know when we are coding very differently from our
> colleagues. I've tackled some of what I see as the methodological issues
> in Handling Qualitative Data, Chapters 5 and 10.
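[Editor's note: the thread doesn't prescribe any particular agreement statistic, but for readers who want to quantify the coder comparison NVivo reports, percent agreement and Cohen's kappa are common choices for two coders. This is a minimal illustrative sketch, not part of NVivo or the Carey et al. procedure; the code labels and data are made up.]

```python
# Hypothetical sketch: two coders each assign one code per interview
# segment; we compute raw agreement and chance-corrected agreement.
from collections import Counter

def percent_agreement(coder_a, coder_b):
    """Fraction of segments the two coders labelled identically."""
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: (p_o - p_e) / (1 - p_e), where p_o is observed
    agreement and p_e is the agreement expected by chance from each
    coder's marginal code frequencies."""
    n = len(coder_a)
    p_o = percent_agreement(coder_a, coder_b)
    freq_a = Counter(coder_a)
    freq_b = Counter(coder_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Made-up example: six segments, three codes.
a = ["stigma", "access", "stigma", "cost", "access", "stigma"]
b = ["stigma", "access", "cost",   "cost", "access", "access"]
print(round(percent_agreement(a, b), 3))  # 0.667
print(round(cohens_kappa(a, b), 3))       # 0.52
```

As Lyn notes, these numbers are a starting point for discussion, not a pass/fail score: a low kappa on a node is a prompt to talk about how the two coders understand that code.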
>
> Hope this helps,
> cheers,
> Lyn
>
> Lyn Richards,
> Founder and Director, QSR International.
> (Email)[log in to unmask]
> (Ph) +61 (03) 9840-1100. (Fax) +61 (03) 9840-1500
> (Snail) Second floor, 651 Doncaster Rd.,
> Doncaster, Vic 3108, Australia.
>
> -----Original Message-----
> From: qual-software [mailto:[log in to unmask]] On Behalf Of
> Stratford, Dale
> Sent: Saturday, 18 February 2006 4:23 AM
> To: [log in to unmask]
> Subject: Re: inter-rater reliability
>
> Melissa-
> Check Carey J et al. 1996. Intercoder agreement in analysis of responses
> to open-ended interview questions: Examples from Tuberculosis research.
> Cultural Anthropology Methods Journal (now Field Methods) 8(3):1-5.
>
> Dale
>
> -----Original Message-----
> From: qual-software [mailto:[log in to unmask]] On Behalf Of
> Melissa Kim Levy
> Sent: Friday, February 17, 2006 11:53 AM
> To: [log in to unmask]
> Subject: inter-rater reliability
>
> Hello -
> Two other researchers and I are attempting to establish inter-rater
> reliability. We are coding interview data with a myriad of codes we've
> identified. We plan to enter our data into NVivo. Does anyone recommend
> any particular procedures to establish inter-rater reliability?
> Thank you,
> Melissa