While it is true that sometimes observational studies may correctly
inform us about benefits of therapies for serious illness, they may also
lead us to false conclusions about the benefits of a therapy. Because we
cannot know a priori which observational studies inform us correctly and
which inform us falsely, we cannot trust observational studies for
questions of the efficacy of therapeutic interventions. The following
explains why this is so:
Problems with the Use of Observational Studies to Draw Cause and Effect
Conclusions About Interventions
Definition Observational Study: Epidemiological study in which
observations are made but investigators do not control the exposure or
intervention and other factors. Changes or differences in one
characteristic are studied in relation to changes or differences in
others, without the intervention of the investigator. Observational
studies are highly prone to selection bias, observation bias and
confounding.
Tip: If an intervention is "assigned" through the research, it is an
experiment. If it is chosen, then the study type is observational.
Key Points While observational studies might give us a cause and
effect answer for interventions, many times they have failed to do so
accurately, as with the hormone replacement therapy (HRT) studies. This is due to
special opportunities for bias and confounding which are reduced or
eliminated by an experimental design, especially if randomization is
used.
Key Problems
Choice and lack of control create special challenges in
observational studies which may affect the observed outcomes.
Conclusions Observational studies cannot be relied upon for
conclusions about cause and effect for interventions.
Discussion There are numerous instances of observational studies
resulting in agreement with RCTs or providing helpful solutions to
health care problems. Many of these examples involve public health
applications (e.g., cholera, vaccines, etc.). However, there are
numerous instances in which observational studies were not in agreement
with RCTs. Examples: HRT, anticoagulants in acute MI, cardiology
research, beta-carotene, vitamin E (tocopherol). In a review of 18
meta-analyses (1,211 clinical trials), outcomes of non-randomized trials
differed from outcomes of RCTs (8 studies), ranging from 76%
underestimation to 160% overestimation of effect. (Kunz R, Oxman A. BMJ
1998;317:1185-90.)
Special Issues with Bias in Observations: Choice
Choice uniquely adds potential for confounding. Patient choices may be
associated with other differences that could affect the results.
Physician choices (“channeling”) may be associated with other
differences that could affect the results. These differences in choices
could be associated with —
§ Health status
§ Provider skills
§ Provision of care (e.g., affordability which can link to
socioeconomic factors, patient demand which could drive provision of
care and link to other confounding factors)
§ Patient perceptions (e.g., risk aversion) and personal
characteristics (e.g., more well read, healthy user effect)
§ Other unknown confounders (e.g., genetic issues, exposures, risk
factors)
Differing Choices Mean Differing Baseline Characteristics
You need two equal groups to understand if the intervention — and not
some difference between the groups — caused the outcome. “Choice” is a
difference between groups in addition to what we are interested in
studying — and may be linked with confounders which are the true reason
for the outcomes.
Examples of how “Choice” May Affect Outcomes
§ Physician chooses patients who are more likely to have favorable
outcomes with one treatment over another
§ Physician favors one treatment over another due to training or
skill level
§ A more demanding patient may receive more monitoring or
additional care
§ A patient who is more educated, intelligent, information-seeking
or affluent might engage in other behaviors that contribute to the
outcomes
§ Risk aversion may be associated with other personal factors which
may be confounders
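The choice-confounding mechanism above can be made concrete with a small
simulation. This is a hypothetical sketch (the scenario and all numbers
are illustrative assumptions, not from the text): the therapy has zero
true effect, but healthier patients are more likely to choose it, so a
naive observational comparison shows a spurious benefit that random
assignment eliminates.

```python
# Hypothetical simulation of the healthy-user effect (numbers are
# illustrative): outcomes depend only on unmeasured baseline health,
# never on the treatment, yet the observational comparison favors it.
import random

random.seed(0)
N = 100_000

def outcome(health):
    # Probability of a good outcome depends only on baseline health;
    # the true treatment effect is exactly zero.
    return random.random() < 0.3 + 0.4 * health

obs_treated, obs_control = [], []
rct_treated, rct_control = [], []
for _ in range(N):
    health = random.random()  # unmeasured baseline health, 0..1
    # Observational: healthier patients are more likely to "choose" therapy.
    chooses = random.random() < health
    (obs_treated if chooses else obs_control).append(outcome(health))
    # Randomized: assignment is a coin flip, independent of health.
    assigned = random.random() < 0.5
    (rct_treated if assigned else rct_control).append(outcome(health))

rate = lambda xs: sum(xs) / len(xs)
print(f"observational: treated {rate(obs_treated):.3f} "
      f"vs control {rate(obs_control):.3f}")
print(f"randomized:    treated {rate(rct_treated):.3f} "
      f"vs control {rate(rct_control):.3f}")
```

Under these assumptions the observational arms differ by roughly 13
percentage points despite a true effect of zero, while the randomized
arms are essentially identical. Statistical adjustment would not rescue
the observational comparison here, because baseline health is unmeasured.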
Special Issues with Bias in Observations: Control
Lack of control uniquely adds opportunities for confounding.
Observations are not controlled; experiments are. Lack of control
increases the potential differences between groups. Much more is likely
to be unknown (and unknowable) in an observational study, leading to
unknown confounders.
Lack of Control Means Differences Between Groups
You need to treat groups in the same way, except for the intervention,
to understand if the intervention — and not some difference between how
the groups were otherwise treated, measured, followed and assessed —
caused the outcome. Differences between how the groups were otherwise
treated, measured, followed and assessed can affect the observed outcome.
Examples of “Controls”
Elements that may be controlled in experiments, but not in the “natural
world” of care (or natural patient life) include —
§ Comparability of baseline risks through randomization
§ Blinding
§ Uniformity in procedures, follow-up length and methods such as
dosing or specifications on which and how procedures are done
§ Allowed and disallowed medications and wash-out
§ Adherence measurements and concomitant medication monitoring
§ Prevention of contamination or “migration” to other arm
§ Quality control for standardization such as in training,
measurement and reporting
§ Just-in-time monitoring for pre-specified definitions of
treatment success or failure or other endpoints
Example: What is the cause of prostate cancer?
§ Case Control Study:
§ Cases: Men with prostate cancer
§ Controls: Matched men without prostate cancer
§ Multiple exposures (e.g., diet, weight) are identified in cases
and controls and compared.
§ Independent variables are identified and adjustments are made,
but adjustments cannot be counted on to eliminate confounding.
§ When multiple exposures are compared, however, there is an
increased probability of finding a difference in exposures between the
groups by chance alone.
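The multiple-comparisons point in the last bullet can be quantified with
back-of-the-envelope arithmetic (the exposure counts below are my own
illustration, not from the text): if each of k independent exposures is
tested at the conventional p < 0.05 level and none is truly associated
with the disease, the probability of at least one spurious "significant"
finding is 1 - 0.95^k, which grows quickly with k.

```python
# Illustrative arithmetic (exposure counts chosen for illustration):
# probability of at least one false-positive "association" when k
# independent exposures are each tested at alpha = 0.05 and no true
# association exists.
alpha = 0.05
for k in (1, 5, 10, 20):
    p_any = 1 - (1 - alpha) ** k
    print(f"{k:2d} exposures tested -> P(>=1 false positive) = {p_any:.2f}")
```

With 20 exposures examined, the chance of at least one spurious
"difference" between cases and controls is about 64% even when nothing
real is there.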
Examples of How “Lack of Control” May Affect Outcomes
§ Adherence is likely to be poorer outside of a controlled trial
§ Patients may be taking other medications (there are no
“disallowed” medications in observational studies — and you really can’t
be sure what a patient is doing)
§ Other procedures may be used which could be the actual
explanation for the observed results
§ Monitoring, measurement and reporting of outcomes could be
driven by the choice of treatment and thus could be different
§ Reporting may differ depending on the arm
§ People tend to root for the intervention — you will notice what
you are primed to notice
You Can’t Blind in an Observational Study
Blinding matters. Outcomes can be affected by lack of blinding when
there are subjective or even objective measures.
Chalmers TC et al. (Bias in Treatment Assignment in Controlled Clinical
Trials. N Engl J Med 1983;309:1358-61) showed that results of
non-blinded studies may be biased in favor of the intervention even with
objective measures such as mortality.
Even if the Observational Study Leads us in the Right Direction…
Observational studies are more prone to bias and bias tends to favor the
intervention.
-----Original Message-----
From: Evidence based health (EBH)
[mailto:[log in to unmask]] On Behalf Of Jim Walker
Sent: Wednesday, August 30, 2006 10:45 AM
To: [log in to unmask]
Subject: Re: Deconstructing the evidence-based discourse in
healthsciences (Problems with Observational Studi
This discussion would be strengthened by recognizing that therapy and
prevention pose very different questions: The more serious an illness
is, the greater the likelihood that a therapy supported only by
observational studies will make the patient better. In the case of an
asymptomatic person without illness the evidence for a preventive
intervention must be much stronger (that is, RCT level) for there to be
the same likelihood that the person will be made better by the
intervention.
Jim
James M. Walker, MD, FACP
Chief Medical Information Officer
Geisinger Health System
>>> Mike/Linda Stuart <[log in to unmask]> 08/25/06 5:41 PM >>>
I've noticed that several list members have advocated the use of
observational studies if they are the "best available" evidence. For
interventions dealing with therapy, prevention or screening, it has been
well established that even well-done observational studies can provide
completely misleading results. For example, the observational studies
done on HRT were correct in that there was an association between HRT
use and secondary prevention of cardiac events, but it was false that it
was a cause-effect relationship (it was confounded by the healthy-user
effect). The following reading might be helpful to those who would like
further evidence as to why observational studies can mislead in
addressing these kinds of clinical questions. At the following link,
choose the title, "The Problems with the Use of Observational Studies to
Draw Cause and Effect Conclusions About Interventions [PDF] "
-- Michael Stuart MD
President, Delfini Group,
Clinical Asst Professor, UW School of Medicine
6831 31st Ave N.E.
Seattle, Washington 98115
206-854-3680 Mobile Phone
206-527-6146 Home Office
[log in to unmask]
www.delfini.org
-----Original Message-----
From: Evidence based health (EBH)
[mailto:[log in to unmask]] On Behalf Of brnbaum
Sent: Friday, August 25, 2006 6:22 AM
To: [log in to unmask]
Subject: Re: Deconstructing the evidence-based discourse in
healthsciences)
From my perspective as a hospital epidemiologist, the important division
of perspective here certainly isn't simply quantitative vs. qualitative
and nursing vs. medicine. Infection control, for example, cuts across
all these disciplines and much of the evidence behind infection control
is based on knowledge gleaned from observational study designs in areas
where RCTs aren't ethical, feasible or both. Much in healthcare
administration and management has been rooted in tradition and
assumption, dealing with fundamental questions where a mix of
quantitative and qualitative research would better guide decisions. Many
decisions in medicine need to be informed about effectiveness as well as
efficacy...
We certainly need to advance our knowledge by critical appraisal and
grading of research evidence. When information needs relate to questions
of efficacy, then RCTs are the best form of evidence. When questions
relate to effectiveness, then observational studies such as cohort and
case-referent studies are probably best. When questions relate to
efficiency or cost-effectiveness or perceived utility, yet other
research paradigms are better tools.
That being said, I believe educational deficits are part of the root
cause for the chasm between these various camps. Poor numeracy skills
hamper many of the students, entering nursing and other disciplines,
that I've seen over the years. Inadequate emphasis on interdisciplinary
education reinforces many of the silo mentalities I've encountered
throughout health care organizations. Simplistic audit approaches
reinforced by well-intentioned but short-sighted accreditation mandates
have kept the position qualifications and program expectations too low
in hospitals' safety, infection control, quality improvement and other
such programs. There have been a number of interesting articles
published in CLINICAL GOVERNANCE related to these points, including one
with a nice flowchart to help distinguish audit from quality improvement
from research per se - a spectrum of activity we should be seeing within
every healthcare organization (not a spectrum dividing hospital-based
health professional activity from university-based researcher activity).
This has been an interesting thread. Let's bring our focus back to a
convergence of useful tools!
--
David Birnbaum, PhD, MPH
Adjunct Professor
School of Nursing
University of British Columbia
Principal, Applied Epidemiology
British Columbia, Canada