Certainly it appears that quantitative software packages have their
biases; for example, differences in how they handle missing data or
(horrors!) outliers!
Date sent: Wed, 22 Sep 1999 10:29:20 -0400
Subject: Re: qual software discussion
From: "Dan Haug" <[log in to unmask]>
To: <[log in to unmask]>
Send reply to: [log in to unmask]
There is something that concerns me about the discussion of name recognition
in software, and how acceptable (or unacceptable) reported research may be
based on what software was used to analyze it. It seems to me that while
the software used certainly will affect the analysis process, the
*methodology* used is much more salient to the question of whether or not
conclusions drawn can be related in a meaningful way to the data. As the
use of a type of software becomes ubiquitous, it slowly fades from view. As
Jens noted, no one asks you which statistical analysis package you used, or
which word processor. This should be relevant only when it affects the
methodology.
This, of course, raises the question of what methodological biases different
software packages have and, if those biases exist, whether they have a major
impact on the research conducted using that package. I know that Coffey et
al. (see link below) talked about this in relation to biases toward grounded
theory in CAQDAS in general, but I'm wondering if anyone has any thoughts
about the biases of specific packages?
http://www.soc.surrey.ac.uk/socresonline/1/1/4.html
Dan
--------------------------------------------
Daniel B. Haug
GeoVISTA Center
Department of Geography
Penn State University
Phone: (814)865-3472
Email: [log in to unmask]
URL: http://www.geovista.psu.edu/
--------------------------------------------
-----Original Message-----
From: [log in to unmask]
[mailto:[log in to unmask]]On Behalf Of Linda Gilbert
Sent: Wednesday, September 22, 1999 9:52 AM
To: [log in to unmask]
Subject: Re: Digest of qual-software - volume 1 #266
Tim,
I agree that in "the real world" all research is subject to compromises
and trade-offs, often driven by external circumstances (such as
committees!). Certainly cost, availability, and acceptability all factor
into a decision to use a particular software program.
Nonetheless, I would be concerned about anyone who just assumed that they
should use a particular program and never examined those assumptions --
much as I would be concerned if they assumed they should use
a particular data-collection method without evaluating its strengths and
weaknesses.
I also believe that even if trade-offs have to be made, they should be made
consciously. When someone uses a program from necessity (for example, if
the project director mandates its use), the user should at least monitor
how they use the software. (It will do X. Do I *want* to do X? I want to
do Y. How can I make it do Y?) I think it helps to be aware that there is a
range of programs and that they have different strengths and weaknesses.
So, though I certainly don't expect someone to learn every program on the
market before making a choice, I do expect them to think about that
decision a little. And more and more information about programs is
becoming available, over and above what the developers offer -- demos at
conferences, websites, publications. At the very least, talking to a
couple of users of different programs would be a good idea.
As far as "editors" and "unknown programs"... First, I wouldn't consider
any of the programs we've been discussing -- NVivo, Nud*Ist, Atlas-ti, The
Ethnograph, WinMax, HyperResearch (just to name a few) as "unknown
programs" at this point. That's not to say that there are not committee
members who haven't heard of them, of course! But there's plenty to choose
among within the "mainstream" group.
And the thought that any reputable editor would base their acceptance or
rejection of an article on the name-recognition of the software used (as
opposed to a description of *how* it was used and what the researcher did)
is truly scary. You may be all too right on that, but I sincerely hope
not.
So, I acknowledge your concerns about real-world constraints, but still
think that new users should consider their choices. :-)
Cheers!
Linda Gilbert
University of Georgia
[log in to unmask]
> ------------------------------
>
> Date: Tue, 21 Sep 1999 12:07:28 EDT
> From: [log in to unmask]
> To: [log in to unmask]
> Subject: Re: qual software discussion
> Message-Id: <[log in to unmask]>
>
>
> In a message dated 9/21/99 6:48:16 AM, [log in to unmask] writes:
>
> << My previous post was in response to inquiries from people just getting
> into qualitative software, and the big point I was trying to make is that
> there are a lot of very good programs, and people should take time to look
> around and consider how the software fits with the way they work and
> think and use computers already. >>
>
> On the other side of the coin in the real world... I would point out that
> one issue in doing research is getting it accepted by a thesis committee or
> a peer journal. Yes, there are lots of qualitative research programs out
> there, but for the novice researcher "market share" and disciplinary
> acceptance are also big considerations. In that light, only the top two or
> three programs have been heard of by most professors and editors.
>
> Time is another consideration. Who actually has the time, knowledge, or
> money to review these programs to see if they "fit" with our research
> needs? In many areas qualitative research itself pushes the boundaries of
> acceptable research; adding an unknown research program is not in the best
> interest of most of us untenured folks.
>
> tim lavalli
>
>
> ------------------------------
240 McCool Hall, PO Box 5288
Mail Stop 9586, Mississippi State, MS 39762
Phone: 601-325-8122  Fax: 601-325-8686