Hi Chris and later contributors to this beaut discussion: thank you! The
issues of cross-coder "reliability" are badly under-debated. Here's my
late-night tuppence-worth.
Yes you can do it in NVivo, in any of several ways, Chris. My own technique
has been to code duplicate copies of a document, and compare patterns of
coding by the two coders (or by one across time) either visually, using
coding stripes, or more finely using the search tool to show differences.
(Some researchers have done more sophisticated comparisons using command
files in N5 - Sylvain Bourdon from Montreal gave a paper on this at the last
Univ of London conference on QSR software.) As Susanne says, most programs
can generate some sort of quantitative output (if you're in NVivo, use the
ability to profile documents by any set of nodes).
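The comparison Lyn describes - two coders over duplicate copies of a document, with quantitative output - can also be done outside the package. Here's a minimal sketch, assuming (hypothetically - this is not NVivo's own export format) that each coder's use of one category has been exported as a list of 0/1 flags, one per text unit, computing percent agreement and Cohen's kappa:

```python
# Minimal sketch: inter-coder agreement on a single coding category.
# Assumes each coder's coding has been exported as a list of 0/1 flags,
# one per text unit (paragraph, line, etc.); the export format here is
# hypothetical, not taken from any particular package.

def percent_agreement(a, b):
    """Share of units the two coders coded identically."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Chance-corrected agreement (Cohen's kappa) for binary codes."""
    n = len(a)
    p_obs = percent_agreement(a, b)
    # Expected agreement if the coders applied the category
    # independently at their observed rates.
    pa, pb = sum(a) / n, sum(b) / n
    p_exp = pa * pb + (1 - pa) * (1 - pb)
    return (p_obs - p_exp) / (1 - p_exp)

coder1 = [1, 1, 0, 0, 1, 0, 1, 0]  # illustrative data only
coder2 = [1, 0, 0, 0, 1, 0, 1, 1]
print(percent_agreement(coder1, coder2))  # 0.75
print(cohens_kappa(coder1, coder2))       # 0.5
```

Kappa discounts the agreement two coders would reach by chance, which is why it is usually preferred to raw percent agreement when categories are applied to most or few units.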
But yes, the critical questions are what are you comparing and what does the
comparison mean? Usually for qualitative researchers it's the generation of
coding categories and their recognition as recurrent in subsequent data that
matters. I very much agree with Kath and others that if this becomes a
search for the "right" answer, we've lost the plot. The fact that two coders
totally agree will probably mean either that the coding is very descriptive
and basically pretty boring, or they are both coming at the data from
exactly the same perspective. If we "prove" the latter, what have we proven?
Perhaps that our team is dangerously homogeneous? So, other things being
equal I'd expect that my coding would differ from a colleague's - indeed I'd
regard it as a warning that we were becoming pedestrian in our analysis if
there were no differences! (There was a great debate on QSR Forum about this
a while back.)
On the other hand, my guess is that reliability measures can be really
important in three situations: when coding is descriptive and categories are
fixed a priori; when we need monitoring as a way of steadying teams; and when
we must log developments in longitudinal research. In these situations, we
seriously need to know whether two researchers "see" the same categories in
the same data. Descriptive coding can take either of two shapes - storing
information (e.g. demographics) or application of a priori categories, as in
Chris's study ("I am in fact using it as part of a quantitative analysis of
behaviour and cognitive coding with a set of codes already developed to
examine questionnaire design.") In either of these cases, the codes aren't
going to move, so we must monitor how the researchers are using them (just
as we would in a survey - though that's another story :-))
Different is the situation in which the codes can and must move (the more
paradigmatic qualitative situation) - when we must monitor, if there are
several researchers involved, or time passing, how the codes are changed,
and whether coding done in January can legitimately be equated to coding
done now.
In both cases the software can tell you where you differ - I'm using this
category and you're not. And the methodologically proper response is surely
not to declare an invalidity or cower at unreliability but to *talk*! Which
will build all those other desirables Rachel mentions - credibility,
transferability, dependability and confirmability. Why are you not using it?
Do we now differ about what it means? Is this interesting, a source of
theorizing energy? Used that way, coder comparisons are yet another source
of comparative friction, and qualitative research is always profoundly
assisted by comparison, whatever your method.
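The "I'm using this category and you're not" comparison Lyn mentions is, at bottom, a set difference. A toy sketch (the document and category names are purely illustrative) of generating that talking-points list per document:

```python
# Toy sketch of the "I'm using this category and you're not" comparison.
# Each coder's work is represented as a mapping from document name to the
# set of category (node) names applied to it; the data is illustrative.

def coding_differences(mine, yours):
    """For each document, which categories did only one of us apply?"""
    diffs = {}
    for doc in sorted(set(mine) | set(yours)):
        only_mine = mine.get(doc, set()) - yours.get(doc, set())
        only_yours = yours.get(doc, set()) - mine.get(doc, set())
        if only_mine or only_yours:
            diffs[doc] = {"only_mine": only_mine, "only_yours": only_yours}
    return diffs

coder_a = {"interview1": {"trust", "autonomy"}, "interview2": {"trust"}}
coder_b = {"interview1": {"trust"}, "interview2": {"trust", "control"}}
for doc, d in coding_differences(coder_a, coder_b).items():
    print(doc, d)
```

The output is exactly the agenda for the *talk* Lyn recommends: each entry is a category one coder saw and the other didn't, i.e. a candidate source of theorizing energy rather than a verdict of unreliability.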
cheers
Lyn
Lyn Richards,
Director, Research Services, QSR.
(email) [log in to unmask]
(Ph) +61 3 9459 1699 (Fax) +61 3 9459 0435
(snail) Box 171, La Trobe University PO, Vic 3083, Australia.
http://www.qsrinternational.com
-----Original Message-----
From: Hopkin, Rachel2 [mailto:[log in to unmask]]
Sent: Friday, 25 May 2001 10:48 PM
To: [log in to unmask]
Subject: Re: Coding in Nvivo
Dear all,
this is all very interesting, I am intrigued by the use of
the term 'reliability'. This term, which is often ascribed to
the positivist paradigm, has turned out to be completely
irrelevant to my qualitative research. I have moved
away from the ideas of generalisability, validity,
reliability and rigor, which tend to be more apt for
quantitative work, and adopted approaches more favorable
to my ontological standpoint. Lincoln and Guba (1985)
(Naturalistic Inquiry) suggest that the ideas of
credibility, transferability, dependability and
confirmability are more appropriate for research which
claims to recognize the relativist perspective
of individuals' experiences. Admitting your own preferences,
bias, knowledge and experience can in fact prove most
useful when analyzing data without getting you all tied up
on the question: can I reliably say that this is what the
respondent wants to tell me? Don't forget that the
'expert' you can return to with your codes can also be the
respondent, for who else can 'expertly' say "yes that is
the essence of what I was trying to get across".
Just thought I would add my two pennies' worth, hope I can
be of help!
Rachel
PhD student,
University of the West Of England.
On Fri, 25 May 2001 12:40:33 +0100 "Christine J. McPherson"
<[log in to unmask]> wrote:
> >Date: Fri, 25 May 2001 12:29:15 +0100
> >From: "Christine J. McPherson" <[log in to unmask]>
> >Subject: Re: Coding in Nvivo
> >To: [log in to unmask]
> >
> >Dear Kath, Karen and Kevin,
> >
> >Your views are very interesting and I am in agreement with this approach
> >assuming that I am using Nvivo for some type of qualitative thematic
> >analysis (which I have in the past). I am in fact using it as part of a
> >quantitative analysis of behaviour and cognitive coding with a set of codes
> >already developed to examine questionnaire design.
> >Having stated this, does anyone know if this is possible in any of the
> >qualitative software packages, if it is not possible in Nvivo?
> >
> >Thank you
> >Chris
> >
> >At 07:02 AM 5/25/01 EDT, you wrote:
> >>
> >>Actually this is a VERY interesting question. I agree with the previous
> >>sentiment about reliability being something we all need to aim for. What
> >>should be acknowledged however is that (as Kath Checkland noted) there is
> >>never going to be any 'right' answer, and subsequently never any entirely
> >>valid solution ... at least not from parallel coding. As far as I'm
> >>concerned, this even includes having a supervisor and acknowledged expert
> >>parallel coding some sections. The problem one faces there is: are you
> >>getting drawn into someone else's worldviews and perspectives? Admittedly
> >>you may be working from a very similar epistemology, but they will never be
> >>precisely the same ... and even similar or identical epistemologies do offer
> >>a degree of flexibility dependent on your background, etc. Perhaps the only
> >>real validity that can be achieved here is internal, i.e. from that of the
> >>individual coder; after all, that's half the point of qualitative research.
> >>It's not necessarily expected that any two people will get the same codes,
> >>as you are never operating from the same worldview. What is expected,
> >>however, is that if YOU did it again under the same circumstances, you
> >>would make the same coding. My thoughts would be to be very conscious in
> >>documenting what you're doing and why you are making the decisions you are
> >>at any given point in time. Keep a running journal of your coding thoughts.
> >>
> >>Having said that though, it's always interesting getting other perspectives.
> >>I'm just not entirely convinced it makes it more reliable, certainly not in a
> >>quantitative sense where you would need hundreds of perspectives. Perhaps it
> >>does something for the validity though? That's another question.
> >>
> >>Ok, that's my 2 cents worth. Hope you found it interesting.
> >>Kevin
> >>
> >>
> >>----------------------------------------------------------------------------
> >>Please reply to [log in to unmask]
> >>
> >>Kevin Franklin
> >>Corporate Citizenship Unit
> >>Centre for Creativity Strategy and Change
> >>Warwick Business School,
> >>University of Warwick
> >>Coventry, CV4 7AL
> >>UK
> >>http://users.wbs.warwick.ac.uk/ccu/
> >>Tel: 024 765 73130
> >>Home: 01225 318772
> >>Mobile: 07949 014593
> >>
> >
----------------------------------------
Hopkin, Rachel2
Email: [log in to unmask]
"University of the West of England"