Raymond Lee makes an interesting observation. If the list will indulge a
few questions...
>I do also have to say that I found some (though by no means all) of the
>qualitative studies we looked at were an embarrassment. Even allowing
>for the emergent character of fieldwork, it was clear that potential
>problems weren't always thought through. (This was often true of issues
>relating to data protection.) People often used blithe invocations of
>'grounded theory' as justifications for their methods when it was quite
>clear they knew very little of the literature or the debates around
>that tradition.
How does one invoke grounded theory to circumvent questions about data
protection? Obviously, a study informed by grounded theory will have some
open-ended research questions. You can't, for instance, prepare an
interview schedule or a coding scheme until you've been in the field and
started "theoretical sampling" (or, to translate, decided what's
important). But still, don't we all start with a general research
topic? It seems to me that one should be able to write a research plan
that describes the topic, the site, and the initial intent of the fieldwork;
then explains the importance of inductive coding and theoretical sampling
for the inquiry; and finally discusses data protection measures.
What kinds of issues were not thought through? Perhaps not thinking enough
about potentially embarrassing admissions from informants?
>Purposive sampling methods often weren't clearly
>understood. Bringing matters back to the purpose of the list,
>qualitative software was often mentioned in ways that suggested
>applicants didn't really have much idea of how to use it, or of the
>bottlenecks produced by transcription, etc, etc. Sometimes we could do
>better than we do.
It's curious that the software comes up at all in an IRB review setting (I
presume that our Institutional Review Boards are similar to your
LRECs). I don't, for instance, tell an IRB whether I'll be using SAS or STATA
to analyze survey data. In fact, I don't even tell the IRB whether I'll be
using logit, probit, or another model. They don't ask how responses will
be numerically coded, collapsed, or recoded. So in my field research,
should it matter whether I'm using N[#], NVIVO, ATLAS, HYPER-RESEARCH, or a
yellow highlighter on white paper? [Yes, I know the substantive issues are
different, but should the IRB be concerned with substantive analytic issues?]
Full circle, back to the topic of the list.
Should the software package or analysis methods matter to such review
boards? Are there things that IRBs squawk about [like software use] beyond
ensuring that data are protected appropriately? I'd be interested to hear more.
Curiously yours,
/Corey Colyer
ICPSR - University of Michigan.
*********************************************************
Corey J. Colyer
Research Associate
Substance Abuse and Mental Health Data Archive
ICPSR
University of Michigan
Toll Free Helpline: 1-888-741-7242
*********************************************************