In message <[log in to unmask]>, alistair grant
<[log in to unmask]> writes
>
>This debate has prompted me to ask the list about a problem that I have
>often considered with the evidence based practice model. If Sackett et al's
>definition of evidence based medicine is acceptable ( essentially evidence
>based medicine involves the use of quality, up-to-date reliable research
>evidence INTEGRATED with clinical judgement and expertise)then there would
>appear to be a relative imbalance in our approach to using evidence based
>medicine. This is because although much debate occurs over the 'quality' of
>research evidence, little attention is given to the value of clinical
>judgement. Surely even the highest level of research evidence needs clinical
>judgement (of equal value?) to assist in the implementation of that evidence?
>Is this not also one of the multi-factorial reasons why research evidence is
>so difficult to get into clinical practice?
>
On the one hand we have evidence from analytic research (in which, by
definition, the greatest internal validity is achieved by randomised
controlled trials); on the other we have patients and clinicians whose
problems and interactions are complex and cannot be reduced to 'pure'
questions which can be unequivocally answered by 'pure' (high quality,
high in the hierarchy) evidence.

We therefore have a spectrum of implementation. At one end is high
quality evidence from RCTs which is being 'poorly implemented' (a
recurrent theme in the literature on implementation). At this end of
the spectrum, evidence-based practice is seen as being about the
implementation of research, and the hierarchy of evidence is therefore
important in selecting which research to implement.

At the other end are individual patients and clinicians. In certain
situations, the patient may have a well defined condition for which
high quality evidence exists about effective therapy. This is
particularly the case with cardiovascular disease - ischaemic heart
disease, atrial fibrillation and so on. So good is the evidence that
it is possible to draw up lists detailing ideal practice, such as the
National Service Frameworks in the UK. However, evidence about
implementation even of known effective therapies suggests that at the
individual clinician/patient level life gets very complicated. There
is a very nice study by Howitt and Armstrong (BMJ 1999;318:1324-7) in
which a group of very keen GPs tried to identify all their patients
with atrial fibrillation and warfarinise those who would benefit. They
put in a lot of effort, held long consultations in which they
explained the benefits and harms, but still found that:
"Out of 13 239 patients, 132 had a history of atrial fibrillation of
which 100 were at risk of thromboembolism. After the study, 52 patients
were taking warfarin. Of the remaining 48 patients (of whom 41 were
taking aspirin), eight were too ill to participate, 16 were unable to
consent, four refused the interview, and 20 declined warfarin. Patients
declining warfarin were inclined to seek a higher level of benefit than
those taking it, as measured by the minimal clinically important
difference. Qualitative data obtained during the interviews suggested
that patients' health beliefs were important factors in determining
their choice of treatment."
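
To put the implementation gap in figures: of the 100 at-risk patients,
only 52 (52%) ended up on warfarin despite all that effort. A minimal
sketch of the arithmetic, in Python, just to make the attrition
explicit (the labels are mine; the numbers are from the abstract):

    # Figures from Howitt and Armstrong (BMJ 1999;318:1324-7)
    at_risk = 100        # AF patients at risk of thromboembolism
    on_warfarin = 52     # taking warfarin after the study
    not_on_warfarin = {"too ill": 8, "unable to consent": 16,
                       "refused interview": 4, "declined warfarin": 20}
    # The four reasons for non-uptake account for all remaining patients
    assert on_warfarin + sum(not_on_warfarin.values()) == at_risk
    print(f"Uptake: {on_warfarin / at_risk:.0%}")  # -> Uptake: 52%
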
So patients' beliefs and values (and also probably the degree to which
clinicians are prepared to 'sell' a therapy) influence uptake as much
as the merits of the therapy itself or the clinician's knowledge. I
also suspect that the 'implementers' are rather too optimistic when it
comes to the realistic likelihood of applying research in practice. It
is also relevant, I think, that many interventions promoted by the
implementation lobby have been demonstrated to be effective in
populations but, by the nature of RCTs, cannot be demonstrated to be
effective for any particular individual. The individual clinician and
patient then have to grapple with the paradox of the ecological
fallacy.
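
As a worked illustration (with made-up numbers, not drawn from any
particular trial): suppose an RCT shows that warfarin cuts annual
stroke risk from 4% to 2%. The absolute risk reduction is 2%, so about
50 patients must be treated for a year to prevent one stroke - a real
benefit at the population level, yet nothing identifies which treated
patient it was. A short sketch:

    # Hypothetical trial figures, for illustration only
    risk_untreated = 0.04   # annual stroke risk without warfarin
    risk_treated = 0.02     # annual stroke risk with warfarin
    arr = risk_untreated - risk_treated   # absolute risk reduction
    nnt = 1 / arr                         # number needed to treat
    print(f"ARR = {arr:.0%}, NNT = {nnt:.0f}")  # -> ARR = 2%, NNT = 50
    # One stroke averted per 50 patient-years of treatment, but no way
    # to tell, for any individual who stays stroke-free, whether
    # warfarin made the difference.
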
It is even more difficult where the diagnosis is unclear, the symptoms
confusing, and the patient's needs complex, perhaps multiple, even
contradictory. Here, knowing which therapy has the highest quality
evidence supporting its use may be irrelevant, because to use such
evidence you need a clear objective - Scott Richardson's "what can we
hope to achieve by doing this?". This is where clinical skills combine
with human skills such as listening and empathising in order to
discover what question most needs to be answered before going further.
This has been studied by researchers such as Pendleton, Balint, Byrne,
Long and others, and there is an extensive general practice literature
about it - most of which is qualitative and ranks nowhere in the
hierarchy of evidence. However, if we want to understand how
clinicians come to identify the most useful questions and answer them,
we need to use this kind of work, and will get little help from
analytic studies of any kind.

I've found as a clinician that identifying the question(s) is the key
part of any consultation. This is not quite the same as formulating a
structured, answerable question (although that may follow). There are
several types of question: "what is going on here?", "what does this
patient want/need/expect?", "what could we achieve by...?". If they
were research questions, the first two would be answerable by
qualitative methodology, only the last by analytic studies. In primary
care, clarification of the first two may be all that is needed. In
many or most situations the clinician may know the answer to the last
without having to search for additional information - however, the
skill in evidence-based practice is to be alert to what we don't know,
to be vigilant not to kid ourselves that we know more than we really
do, and to ask "why do we always...?" and revisit practice traditions.

Thus clinicians need to understand much more than the conclusions of
RCTs - and need to incorporate much more than that into their
management of individual patients. When addressing the complexity of
individuals, issues of applicability become more important, and an RCT
with high internal validity may be very difficult to apply - because
the therapeutic intervention cannot be blinded in routine practice,
because each individual's response will vary, because confounding
factors cannot be corrected for by randomisation! Most important of
all, unless the success or failure of the outcome is detectable by,
and known to, the individual, the individual outcome cannot mirror the
outcome measured in a group. Thus an individual will know whether or
not epigastric pain has been relieved by a proton pump inhibitor, but
not whether a stroke has been prevented by warfarin.

Therefore, although the hierarchy of evidence is valid for classifying
the weight to be given to evidence about the clinical effectiveness of
an intervention, it may still play only a minor part in the
decision-making process in real-life clinical practice, and is of
little help in studying or understanding that process.

>Apart from the existing obvious 'measurements' of clinical performance
>(post-graduate qualification etc.), which I presume have not been
>thoroughly investigated in the context of evidence based practice, is
>anyone aware of research in this area?
>
I'm currently (still - it takes a long time!) analysing GPs' accounts
of how they make decisions about acutely ill preschool children. The
next step is to measure how well the factors they say influence them
(which they have learned by experience) predict outcomes. Watch this
space!

Toby
--
Toby Lipman
General practitioner, Newcastle upon Tyne
Northern and Yorkshire research training fellow
Tel 0191-2811060 (home), 0191-2437000 (surgery)