Anne,
Well done! I agree with almost everything you said. Actually, I cannot
find anything that I would quibble with. In fact, most of what I have to
say simply restates some of your observations, hopefully without
distorting them too much. In the interest of keeping the discussion on a
higher level, I am going to avoid mentioning any particular QDA software
program.
I would also like to add some references for "pre-QDA software"
analysis, and for "non-QDA software" analysis, which may or may not be
familiar to the "post-QDA software generation".
First, I always recommend Agar's article. May I humbly suggest another
article in Fielding, N. and Lee, R., eds. (1991, 2/e 1993) 'Using computers
in qualitative research', Sage. The article is "Method and Madness in
the Application of Computer Technology to Qualitative Data Analysis". In
that article, for which I can be held accountable, I suggested three
potential problem areas in the expanding use of QDA software.
1. An infatuation with the volume of data one can deal with, leading to a
sacrifice of resolution for scope.
The distinction here is between a broad but shallow analysis and a narrow
but deep one. I was also concerned with the ways in which the
quantitative/distributional approach to research impinges on the
practice of qualitative depth analysis.
2. A reification of the relationship between the researcher and the data.
The concern here is that just because something can be coded and counted
does not mean that it is meaningful. The meaning or significance of something
that you notice in your data is not necessarily transparent. But once you
code it, it seems to take on a life of its own. The fact that it occurred
in a specific context, that its identification is an artifact of a specific
and peculiar relationship between the data analyst and the data, and that
the data itself is an artificial reconstruction and transformation of the
phenomenon itself, somehow gets lost. NOTE: This is not exclusively a QDA
software phenomenon. Further, I don't think that this problem is amenable
to a software-based solution.
3. Distancing the researcher from the data.
Analysis does not stop with coding or noticing interesting things in the
data. Rather, marking up the data record materially changes the
record. This can lead to the mistake of
"reading-coding-analysing" instead of an iterative process of
"reading-coding-rereading-coding/recoding-rereading-and so on". The analyst
must always return to the data record rather than move away from
it. NOTE: One can argue that coding is part of the larger analytic
process, but as Agar rightly points out, it is a mistake to reduce
analysis to coding.
In regard to pre-QDA software mechanical techniques, technology, and
analysis, there are a number of references and descriptions of practices in
the literature. (NOTE: The following is only a sampling of what is
available concerning data analysis in the pre-QDA software period.)
For example, see Jack Douglas's (1976, Investigative Social Research, Sage)
ambivalence toward the impact of the technology of tape recorders on data
collection (page 36).
The appendix in Jacqueline Wiseman's Stations of the Lost (1979, U of
Chicago Press) addresses, among other things, the issues of moving between
the parts and the whole of the data record.
Lofland and Lofland's classic Analyzing Social Settings (1971, Wadsworth,
Chapter 6; 1984 2/e, Wadsworth, Chapter 9) addresses the mechanics of
managing and organizing data in the pre-software world.
The appendix in Freidson's "Doctoring Together" (1975, U of Chicago Press)
is an explicit description of processing data by members of the pre-QDA
software generation.
A very fine example of analyzing and interpreting data without the
"benefit" of QDA software is Susan Chase's Ambiguous Empowerment (1995, U
of Massachusetts Press). While Chase does not reveal mechanical specifics,
she presents a fine example of presenting, analyzing, and interpreting
data without software.
Emerson et al., Writing Ethnographic Fieldnotes (1995, U of Chicago Press),
do a fine job of discussing the recording, coding, and analysis of
qualitative data without software. See Chapters 1, 2, and especially 6.
Renata Tesch's encyclopedic historical overview and typology of various
forms of pre-software in the first half of Qualitative Research: Analysis
Types and Software Tools (1990, Falmer Press) is well worth a read.
And the mid-20th-century debates on the pros and cons of interviewing vs.
participant observation are worth reviewing. Unfortunately,
I don't have references handy.
Finally one of my favorite articles is Jackson's discussion of the pre-QDA
software use of "head notes" vs "field notes" (1990, "I am a
Fieldnote: Fieldnotes as a Symbol of Professional Identity" in Sanjek,
ed. Fieldnotes: The Making of Anthropology, Cornell University Press).
I've made my own efforts to synthesize some of these materials and issues
in an unpublished essay on my website:
www.qualisresearch.com/QDA.htm. While the essay is built around a
particular software program in which I acknowledge having a vested
interest, I think that the arguments and issues are general and applicable
to any QDA software, as well as to the synthesis of QDA mechanics both
pre- and post-software.
John
At 04:20 PM 7/7/00 +0100, you wrote:
>I want to stop another endless run of individual requests for the same
>information - ...so please no more requests - but if you have a contribution
>to make- or references to post - then we will ALL be interested;
>..do follow up Duncan's suggestion to dig around in the CAQDAS bibliography
>at
>www.soc.surrey.ac.uk/caqdas/biblio.htm
>
>or follow up Sarah's very good suggestion to dig around in the archives -
>www.mailbase.ac.uk/qual-software
>
>Personally I'd like to point out a much quoted essay by Michael Agar, 'The
>Right Brain Strikes Back' in
>Fielding, N. and Lee R, eds., (1991, 2/e 1993) 'Using computers in
>qualitative research' , Sage (see bibliography at above)
>
>......over-simplifying just some of his points from memory, he suggested
>that the software available was not (at the time) suited to *tracking and
>identifying small processes and narratives - or, for instance, helping to
>discover what drove the narrative to emerge in the way it did. One could
>only discover these things by reading and re-reading, or at a different
>level of thinking, brainstorming around a room whose work surfaces,
>perpendicular and horizontal, were covered with bits of information.
>(Comment: *I still don't think s/w does much for you in this respect, at
>least not for small amounts of data which require this type of 'immersion'
>analysis - though one or two softwares make a better attempt at it.)
>
>From previous discussions, the defence of software often revolves around the
>'tools' argument - and the nature of data itself -
>technology tools already change data when it is reproduced in paper
>form - it is not the same as it was when we experienced a situation or
>discussion - and didn't technology anyway influence the way respondents
>produced the information?
>So where does the use of tools stop? It should stop if the tool itself is
>the problem. Maybe the tool is just plain redundant if what we need is
>complete immersion in a small dataset. Maybe we don't have a realistic
>timespan allocated to learn it - maybe having to learn a software package
>is a distraction from more important stuff, like producing the dissertation
>in 3 weeks and thinking about the data rather than the software! The
>software might become a tool like a pencil held the wrong way up - or worse,
>a calculator producing totals which bear no relation to the calculation we
>are attempting, because we don't know which buttons to press. Key issues
>here are familiarity with the tools - and adequate time resources to enable
>that.
>
>The wrong thinking arrives, for me, with the popular assumption by nameless
>bodies who 'fund' - or who talk vaguely about validity and science - that
>software should be used because it will somehow improve the scientific
>value of the work. Or because some helpful person said vaguely, 'oh, it's
>easier with a software package - why don't you use ......?'. It may be all
>those things, in some circumstances. Not in all.
>
>We have some cultural influences which make us use the computer to do things
>which we managed very nicely, thank you very much, before.
>But even typing got easier with the computer, so there are indeed some other
>things which are made easier with the computer - so we should take a
>balanced and individual view based on our individual needs.
>The usefulness of software, or the 'which software?' question, depends
>entirely on a number of factors: the size and type of your dataset, and your
>analytic approach - descriptive and interpretive, discourse analysis,
>content analysis - or a mixture of all of those.
>For the more descriptive, interpretive approaches, I'd say - and the
>developers of the software would say too - that MANAGEMENT of your data is
>going to be a key factor WHETHER YOU USE A SOFTWARE PACKAGE OR NOT. BUT
>whether you use coloured markers or the walls of your dining room or a
>piece of software - it is YOU that does the thinking.
>For large datasets of 40 or more (sometimes 200!) interviews - and we see
>them increasingly in e.g. the health field - I consider that there is a
>need for the added value of software's data 'management' tools. These are
>best supplied by software packages specifically developed for qualitative
>data analysis - for a number of different analytic approaches.
>
>Software helps to give you quick ACCESS to the data and later to those parts
>of the data that seemed interesting or valuable earlier in the exploration
>process. The software will allow you more flexibility to change your mind,
>more room for manoeuvre. It might make you more prepared to go right back to
>the beginning and work from a different angle of approach. The s/w may
>allow you to ask questions to test relationships between things and themes.
>But these questions come from you - not the software. You have to have the
>ideas before you can play with them.
>
>There is plenty of room too for more discussion about the relationship of
>the researcher to the data inside a software package, and how we move
>around it: do we flit around data like a bee gathering pollen - or are we
>encouraged to be as fully immersed in the small processes and interactions
>as we need to be? Does the data actually demand full immersion? Some data
>and some projects don't. Does the software help that process - or is it
>actually a barrier? Do we have better contact with the data a page at a
>time in paper form than we do on screen? Are we so busy playing with s/w
>toys that we lose the data? Do we have too much data even using a s/w
>package?
>So, back to things like dataset size: the aims and objectives of projects
>vary. We should not make generalities or have them forced upon us; we
>should assess our own needs according to the way we like to work, the size
>of the dataset - and what we need to pull out of the data.
>
>I'd say, though, that it is too easy to remain negative about software just
>because of unfamiliarity with it. Resistance to the use of a qual-software
>package can be justified and sustained by many quotes and writings - but a
>balanced view needs to be taken - and that's easy to say!
>Oh, and while you are storing up the negatives - there are some purely
>physical issues too... what about RSI?
>......what about our eyesight? If you don't absolutely have to be sitting
>in front of a computer, don't.
>
>OK, some very mixed messages there - do argue!
>
>regards
>
>Ann Lewins
>CAQDAS Networking Project
>(and list-owner qual-software)
>
>also at [log in to unmask]
>http://www.soc.surrey.ac.uk
[log in to unmask]
P.O. Box 3356
Salt Lake City, UT 84110
Ph: 801-532-3090
Fax:
http://www.QualisResearch.com
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%