These issues are a real problem. I remember only too well that on one of the
first projects I led, we were interviewing general practitioners about
handling people with drinking problems, and discovered that the medical
members of the team got different responses from the non-medical ones - often
being told that their respondents would not answer the question. We got round
the problem by using a variety of different sources of information and
respondents. My interest in content analysis stemmed from the fact that we
could get the most reliable information by analyzing the actual words
that people used, rather than our interpretations of them.
When we later published our qualitative studies (in which we developed an
intervention and successfully applied it), many of our recommendations were
implemented at a national level - but the key ones about implementation at
the local level were ignored because, as it was pointed out in one of the
journals, we were a uniquely qualified team and therefore anything we
reported about the intervention lacked external validity! I personally think
it was because managers did not like the cost implications for training.
My response was to design a questionnaire, based on the qualitative work,
which could be used for evaluations with larger numbers. That was some 25
years ago. The questionnaire is still in use - in many parts of the world -
and the sad thing is that it consistently shows that the issues we
wanted to change have remained much the same. That is not down to the
questionnaire, which is sensitive to change.
On the other hand, I found my own training unit being analyzed by a research
team who ignored our goals and imposed their own on the project. I felt
that was a great loss because the research team, rather than being there to
help us develop, seemed to see themselves as carrying a specific academic
banner which they had to impose on us no matter how irrelevant it was to our needs.
I left that area of work many years ago but have had similar
experiences in clinical situations elsewhere. Qualitative studies are
loved by clinicians, who can see some sense in them, but are hated by
managers; quantitative studies are loathed by clinicians, but managers
can use them even if they don't intend to implement anything. If change is
to occur, it is going to have to happen at both levels.
As for researchers in fields where the results might make a difference,
they have to try to act with integrity, try to stay away from the
politics, and believe that what they do might one day make a difference. It's
much easier to do that when one is young - which is why I retired early,
though past memories still leave me simmering.
Regards,
Alan Cartwright
At 11:24 26/05/2001 -0700, you wrote:
>Toby Lipman wrote:
> >
> > The problem I have (not really a problem, more a conundrum) is how to
> > interpret my own input and reactions. I share their culture, know many
> > of them personally, and am deeply involved (and known to be deeply
> > involved) in teaching and developing evidence-based practice. I think
> > this has many advantages, in that I can understand and empathise with
> > what they are saying, but the downside is that:
> >
> > 1) they may tell me what they think I want to hear in order to seek my
> > approval
> > 2) I may not challenge shared assumptions about the nature and value of
> > evidence (during analysis rather than during the interviews)
> > 3) my experience as a clinician may be too close to theirs to see issues
> > that a more detached researcher might detect more easily
> >
>
>When I was interviewing people for a dissertation in Comparative
>Religion, I was immersed in the same problem. I finally decided that it
>couldn't be solved: to stand outside of my own culture, I had to stand
>outside myself. It just couldn't be done. I had to leave the
>outside-critic position for those who could genuinely occupy it.
>
>What I could do was twofold. I could attempt to represent the
>experience of my respondents faithfully. That meant in part deliberately
>making an analysis that the informants would agree with. The other part
>was to be a thoughtful commentator, which might mean making comments the
>informants would not agree with.
>
>Steinar Kvale, in his *Interviews: An Introduction to Qualitative
>Research Interviewing*, suggested that one might return what one wrote to
>the informants for their comments. I did so, and included their further
>responses in each chapter. The dissertation became a conversation among
>the participants.
>
>This was a wonderful solution. I could attempt to understand what the
>informants said with the assurance that they might correct my
>misunderstandings. I could disagree with their beliefs openly, and they
>could respond.
>
>So may I second Rachel Hopkin's comment:
>
> > > ... Don't forget that the
> > >'expert' you can return to with your codes can also be the
> > >respondent, for who else can 'expertly' say "yes that is
> > >the essence of what I was trying to get across".
> >
>
>Toby Lipman continued:
>
> > So far I have found things that surprised me, such as that those GPs who
> > have the most sophisticated understanding of the evidence are also the
> > most relaxed about accepting decisions by high risk patients not to have
> > the treatment. This begins to reassure me a little, but I'd be
> > interested in others' views on studying one's own culture.
> >
>
>Learning something surprising is the best part of qualitative research,
>methinks.
>
>Birrell Walsh
>MicroTimes Magazine
>San Francisco
Alan Cartwright PhD
Developer Code-A-Text MultiMedia Products
Hon. Senior Lecturer Kent Institute of Medicine and Health Studies.
Email [log in to unmask]
CISAID: Powerful Multi-Media Software for Analysing Interviews and Dialogues.
CTANKS: Word processing, Recording, Transcription, Searching and Report
Generation in a single user friendly package.
Information at
Code-A-Text Web Page <http://www.code-a-text.co.uk>