The responses to this question about who should use evidence-based medicine
to answer medical questions have touched on some of the main points. I come
at this from a somewhat different perspective. My organization is
approached not by individual physicians with a question, but by policy
makers (who are usually physicians in administrative positions) in national
specialist organizations, large provider organizations, insurance companies
and managed care organizations, and foreign and domestic government
agencies. They come to us precisely because they cannot get a reliable
evidence-based consensus from physicians. We haven't had many queries about
GP procedures; the questions are mostly about specialist treatments and
diagnostics, although some of the topics, such as cancer screening
procedures, might come up in primary care.
It is certainly preferable for physicians to be able to answer their own
questions using evidence-based medicine principles; however, many, perhaps
most, of the questions that come up are likely beyond the time and skill of
any single individual. Of course we don't get
asked questions about easy topics that have a nice set of large, well
reported randomized controlled trials that can be found and interpreted
easily. I suppose there are some of those out there. But we get the messy
topics that others have given up on. These topics typically take months of
full-time work by searchers, analysts and writers. Our searches frequently
turn up thousands of article titles and abstracts, most of them background
or only peripherally related material. From these, an analyst will order
about 30%. Of these several hundred articles, perhaps a hundred or so will
be cited in the final report, most of them as background material, perhaps
related to the etiology and epidemiology, literature history, natural
history, regulatory status, cost information, etc. Of these, maybe a few
dozen will be primary data articles that are subjected to formal
analysis. If a homogeneous set of RCTs is present, they will be
meta-analyzed, usually more than one way (e.g., fixed effects, random
effects; with outliers, without outliers). Usually there is no such set, so
other types of controlled trials must be examined, or uncontrolled case
series must be examined, possibly combined and compared to historical
controls. Frequently, settling on the proper historical controls (e.g., the
competing standard of care) and collecting and analyzing that literature is
a bigger job than for the new technology of interest, because the standard
of care has been around much longer and has accumulated so much more literature.
Most topics have been studied using several different experimental
approaches with possibly many outcome measures and follow-up times, etc.
These may require several separate analyses, and then some effort to
synthesize the findings of the separate analyses into some meaningful
summary. It is all very complicated, very messy and very time consuming.
The worse the data, the more effort is required to analyze it.
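For readers unfamiliar with the mechanics, the fixed-effects and random-effects pooling mentioned above can be sketched in a few lines. This is a minimal illustration with made-up effect estimates and variances (e.g., log odds ratios from five hypothetical RCTs), using the common inverse-variance fixed-effect model and the DerSimonian-Laird random-effects estimator; it is one standard approach, not necessarily the method any particular group uses.

```python
# Hypothetical effect estimates (e.g., log odds ratios) and their
# within-study variances from five RCTs -- illustrative numbers only.
effects = [0.30, 0.45, 0.10, 0.52, 0.25]
variances = [0.04, 0.09, 0.05, 0.12, 0.06]

def fixed_effect(effects, variances):
    """Inverse-variance (fixed-effect) pooled estimate and its variance."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    return pooled, 1.0 / sum(weights)

def random_effects(effects, variances):
    """DerSimonian-Laird random-effects pooled estimate and its variance."""
    weights = [1.0 / v for v in variances]
    sw = sum(weights)
    fe, _ = fixed_effect(effects, variances)
    # Cochran's Q: heterogeneity among studies beyond sampling error.
    q = sum(w * (e - fe) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    # tau^2: between-study variance (method-of-moments estimate).
    tau2 = max(0.0, (q - df) / (sw - sum(w * w for w in weights) / sw))
    re_weights = [1.0 / (v + tau2) for v in variances]
    pooled = sum(w * e for w, e in zip(re_weights, effects)) / sum(re_weights)
    return pooled, 1.0 / sum(re_weights)

fe, fe_var = fixed_effect(effects, variances)
re, re_var = random_effects(effects, variances)
```

Rerunning the same pooling with a study dropped is how the "with outliers, without outliers" sensitivity check is done: if the pooled estimate moves substantially, the conclusion is fragile. Note that the random-effects variance is never smaller than the fixed-effect variance, which is why heterogeneous literatures yield wider, more cautious confidence intervals.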
We have found that the best people for this type of analysis are Ph.D. or
M.D. level people with a background in experimental biomedical research.
They understand about experimental design and know or can be taught about
statistical analysis. Statisticians, epidemiologists and M.D.s without pure
research experience have not fared well for various reasons. It is
difficult to conceive of full-time clinicians, let alone those in primary
care, as having the time or depth of knowledge in research design and
statistical analysis to be able to do this with any but the most simple and
limited topics. The problem is that most medical research is not designed
to meet the needs of evidence-based medicine. It is designed to establish
and promote careers in academic medicine, or worse, commercial ventures.
Thus, it cannot be taken at face value. There have been dozens of
systematic analyses of the medical literature spanning decades (it is not
getting better), and all of them demonstrate that the literature is full of
research design flaws, arithmetic errors, statistical errors, reporting
errors and inadequacies, unsupported or wrong conclusions, etc., etc. RCTs
themselves suffer all these same problems. It takes us several years to
train analysts in how to deal with all of this, and our knowledge of how to
best analyze all this flawed literature is still in its infancy. In some
cases, proper statistical methods don't exist or have not been well
disseminated, and we have to struggle to adapt or invent adequate methods.
No individual analyst can be trusted with such a complicated task, so there
are rounds of internal review meetings with other analysts. Then, because
the analysts are not physicians, or if they are may not have the right
specialty experience (there is no way we could keep a staff of physicians of
every possible specialty and subspecialty), the analysis is sent out for
external review by working clinicians, world-renowned specialists and
methodologists. This is where the clinical experience is added to the
analysis. In some of the larger projects, clinicians are brought into the
loop from the beginning to help frame the questions and scope. Then the
draft goes back to the analyst to incorporate the external review input;
then back through several more rounds of internal review, and possibly
another round of external review.
What comes out in the end is a description of the state of knowledge on the
topic and whatever synthesis and analysis can be carried out with that
knowledge. It reports what has been found on populations of patients and
subpopulations; in other words, averages for certain broad patient types.
There is rarely any attempt to prescribe medical practice in any but the
most general terms. Mostly it is information that can be used to make
medical decisions, rather than medical decisions themselves. It is not even
guidelines, but could be used to inform the making of guidelines. Certainly
there is nothing that relates specifically to any individual patient.
It will always be the task of the clinician to take such general
information, hopefully systematic and evidence-based as ours is, and to
adapt the information to the specific situation of each individual patient.
The evidence-based information will never cover every situation. There is
no way to do a large RCT on every new twist on every procedure. Clinical
experience will always have to take up where the evidence leaves off. But,
as someone already mentioned, it is always too easy to slip from trusting
experience and "expert opinion" when evidence is lacking, to trusting
experience and opinion when systematic evidence is available. Can
physicians take evidence-based analysis from non-physicians? Well of
course, the driver has to drive the race; but woe be to the driver who
ignores his mechanics. It must be a team effort. Much of modern medicine
is too complicated for anything less.
Finally, many policy makers (and individual clinicians) complain that they
are faced with a patient today, and cannot wait for the above laborious
process, let alone the clinical research it should be based on. Of course
this is true for that patient; but unless the process is started now, in
spite of its uselessness for that patient, there will be other patients a
year from now or five years from now who could have benefited if the
process had only been started.
Sorry to be so long-winded. Hope some of this is helpful.
David L. Doggett, Ph.D.
Medical Research Analyst
Technology Assessment Group
ECRI, a non-profit health services research organization
5200 Butler Pike
Plymouth Meeting, PA 19462-1298, USA
Phone: +1 (610) 825-6000 ext. 5528
Fax: +1 (610) 834-1275
E-mail: [log in to unmask]