Tim,
I agree that in "the real world" all research is subject to compromises
and trade-offs, often driven by external circumstances (such as
committees!). Certainly cost, availability, and acceptability all factor
into a decision to use a particular software program.
Nonetheless, I would be concerned about anyone who just assumed that they
should use a particular program and never examined those assumptions --
much as I would be concerned if they assumed they should use
a particular data-collection method without evaluating its strengths and
weaknesses.
I also believe that even if trade-offs have to be made, they should be made
consciously. When someone uses a program from necessity (for example, if
the project director mandates its use), the user should at least monitor
how they use the software. (It will do X. Do I *want* to do X? I want to
do Y. How can I make it do Y?) I think it helps to be aware that there is
a range of programs with different strengths and weaknesses.
So, though I certainly don't expect someone to learn every program on the
market before making a choice, I do expect them to think about that
decision a little. And more and more information about programs is
becoming available, over and above what the developers offer -- demos at
conferences, websites, publications. At the very least, talking to a
couple of users of different programs would be a good idea.
As far as "editors" and "unknown programs"... First, I wouldn't consider
any of the programs we've been discussing -- NVivo, Nud*Ist, Atlas-ti, The
Ethnograph, WinMax, HyperResearch (just to name a few) as "unknown
programs" at this point. That's not to say every committee member will
have heard of them, of course! But there's plenty to choose
among within the "mainstream" group.
And the thought that any reputable editor would base their acceptance or
rejection of an article on the name-recognition of the software used (as
opposed to a description of *how* it was used and what the researcher did)
is truly scary. You may be all too right on that, but I sincerely hope
not.
So, I acknowledge your concerns about real-world constraints, but still
think that new users should consider their choices. :-)
Cheers!
Linda Gilbert
University of Georgia
[log in to unmask]
> ------------------------------
>
> Date: Tue, 21 Sep 1999 12:07:28 EDT
> From: [log in to unmask]
> To: [log in to unmask]
> Subject: Re: qual software discussion
> Message-Id: <[log in to unmask]>
>
>
> In a message dated 9/21/99 6:48:16 AM, [log in to unmask] writes:
>
> << My previous post was in response to inquiries from people just getting
> into qualitative software, and the big point I was trying to make is that
> there are a lot of very good programs, and people should take time to look
> around and consider how the software fits with the way they work and
> think and use computers already. >>
>
> On the other side of the coin in the real world....I would point out that
> one issue in doing research is getting it accepted by a thesis committee or a
> peer journal. Yes, there are lots of qualitative research programs out
> there but for the novice researcher "market share" and disciplinary
> acceptance are also big considerations. In that light only the top two or
> three programs have been heard of by most professors and editors.
>
> Time is another consideration. Who actually has the time, knowledge, or
> money to review these programs to see if they "fit" with our research needs?
> In many areas qualitative research itself pushes the boundaries of acceptable
> research; adding an unknown research program is not in the best interest of
> most of us untenured folks.
>
> tim lavalli
>
>
> ------------------------------