In message <9921C1171939D3119D860090278AECA20206DC14@EXCHANGE>, Sarah
Delaney <[log in to unmask]> writes
>Try triangulation - use a number of different approaches - if patterns agree
>using a number of different methods you have a strong case?
>
Although I'm relatively new to qualitative research, I've been thinking
about issues such as validity and reliability for a long time. As I
become familiar with different research paradigms I am more and more
struck by how differently each paradigm sees reality.

In the case of the patients with atrial fibrillation, one way of
examining the decisions would be to accept a positivist view - "the
correct treatment for certain patients with atrial fibrillation is
warfarin" - in other words that the evidence for effectiveness is true
and should be applied. I could then compare real decisions with the
recommendations of a guideline and interview GPs where they conflict. I
could ask GPs to take a test or questionnaire to measure their knowledge
of the evidence and see whether their score correlates with their
proportion of "correct" decisions. I could see whether "wrong" decisions
correlated with any particular characteristics of patients such as age,
sex or co-morbidity. (This is not a joke - I know people who would think
this a perfectly sensible way of going about things). I could also
interview the GPs, do a straightforward thematic analysis, and then say
that I have used triangulation.

The problem is that I don't think this would improve understanding at
all, and would certainly change the "reality" that I ultimately
reported. By looking for ways to triangulate I would already have
changed the way I looked at the problem so that the question itself
changes from "how do GPs make decisions...." to "what factors influence
GPs' to make incorrect decisions..."

So, because each paradigm (and therefore method) necessarily views
reality in a different way, I am wary of triangulation. I think that the
strongest studies are those that stick to one paradigm, whatever that is
- but that one paradigm is never sufficient to explain "everything".

As a researcher I need to find the paradigm that is closest to the
"reality" that I want to investigate. If I want to find out the most
effective treatment for a disease then the strongest research design is
a randomised controlled trial - firmly positivist. But, as a user of
research in practice, if I want to see what is the most effective
treatment for an individual with a certain disease then my use of
"positivist" evidence from an RCT must be post-positivist, as one is
translating population effects to an individual and the effects for that
individual can only be expressed as probabilities. If I want to study
how the decision is made, then I must use a constructivist paradigm.

One of the major issues in "service delivery" research in the health
services is the place of different methods and paradigms. Developers of
guidelines use RCTs to produce recommendations for each particular
condition ("patients with angina below the age of 65 should be
prescribed...") and use RCTs to measure the effects of implementation
programmes, with outcome measures being "proportion of patients on drug
X in intervention group compared to control". This approach suits the
Department of Health very well as it produces a means of measuring
"performance indicators" and fits in with a rational technical approach
to quality control.

However, many of us feel that service delivery is complex, messy and
ambiguous, and that even the management of a particular disease, for
which there appears to be strong evidence for effective management from
RCTs, has many ramifications to do with individuals' culture, their
working environment, their personal circumstances and so on.

Since the social environment is so complex, is it not better to focus on
a particular viewpoint, explore it in depth and be explicit about its
narrow focus? My own preference is to look at a wider area (in this case
"use of research evidence in primary care practice") by asking questions
that are narrowly but clearly focussed, one at a time, and accepting that
while each question requires a clear theoretical stance, understanding
of the social environment as a whole requires many questions to be
studied, each in its appropriate paradigm.

It's a bit like putting a book of photographs together about "Newcastle
upon Tyne". One photographer might produce beautifully crafted
architectural studies, another spontaneous shots of the club scene,
another might spend a day at the races, and yet another might spend
weeks getting to know people in a deprived neighbourhood in order to
illustrate their lives. All would show the "reality" of Newcastle, but
all would be different and none would show the whole - nor (and this is
really my point) is it likely that any one photographer would be able to
show all these aspects of Newcastle equally well, particularly if they
tried to do it as a single project.

Toby
--
Toby Lipman
General practitioner, Newcastle upon Tyne
Northern and Yorkshire research training fellow

Tel 0191-2811060 (home), 0191-2437000 (surgery)

Northern and Yorkshire Evidence-Based Practice Workshops

http://www.eb-practice.fsnet.co.uk/