As David Sackett famously stated, "12 x bias is still bias".
From: Evidence based health (EBH) [mailto:[log in to unmask]] On Behalf Of Tom Jefferson
Sent: Wednesday, June 03, 2009 7:06 AM
To: [log in to unmask]
Subject: Re: U.S. AHRQ ...comparative effectiveness research (CER)
Dear all, I agree with the sentiments expressed in this debate. We have a very good example of confounders at work in the topic of influenza vaccines in the elderly (see attached explanation). The confounding comes from the use of data-linked reimbursement data. These are huge datasets which have had a great impact on decision-makers and editors alike, yet few have looked closely at the data and their logical implications. One potentially negative effect of the use of these data is that they provide a dubious rationale for not carrying out randomised controlled trials, which would stand a better chance of giving us a definitive answer on the effects of current influenza vaccines in the elderly.
The attached is just one example of the now quite large literature on this specific topic.
Best wishes,
Dr Tom Jefferson
Via Adige 28
00061 Anguillara Sabazia
(Roma)
Italy
tel 0039 3292025051
2009/6/2 Mark V. Johnston, Ph.D. <[log in to unmask]>
I share your concern. People really do believe that large numbers, combined with matching or covariance analysis based on whatever variables happen to be available, will yield reliable evidence of treatment effectiveness. They want an easy way out, avoiding the difficulties of planning a controlled trial. There are, however, any number of examples showing that uncontrolled causal inferences from observational studies are fraught with great risk.
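The point is easy to make concrete. In the toy simulation below (all names and rates are illustrative assumptions, not real data), a "vaccine" with exactly zero effect still looks strongly protective in a naive database comparison, because an unmeasured frailty variable drives both vaccination and mortality -- and no sample size fixes that:

```python
# Toy "healthy vaccinee" confounding sketch: the vaccine has NO effect
# here, yet a naive comparison in a huge database shows an apparent
# benefit. All probabilities are made up for illustration.
import random

random.seed(1)
N = 200_000  # a "huge dataset" -- size does not remove the bias

deaths_vacc = deaths_unvacc = n_vacc = n_unvacc = 0
for _ in range(N):
    frail = random.random() < 0.3                            # unmeasured frailty
    vaccinated = random.random() < (0.2 if frail else 0.7)   # frail people less often vaccinated
    died = random.random() < (0.15 if frail else 0.03)       # frailty drives death; vaccine does nothing
    if vaccinated:
        n_vacc += 1
        deaths_vacc += died
    else:
        n_unvacc += 1
        deaths_unvacc += died

risk_vacc = deaths_vacc / n_vacc
risk_unvacc = deaths_unvacc / n_unvacc
print(f"vaccinated mortality:   {risk_vacc:.3f}")
print(f"unvaccinated mortality: {risk_unvacc:.3f}")
# The naive comparison suggests a large "protective effect"
# even though the true effect is exactly zero.
```

Doubling N only narrows the confidence interval around the same wrong answer.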
Although I have spent most of my career doing correlational outcomes research, I have moved to advocating for and conducting RCTs.
Despite the need for more RCTs, I believe we should acknowledge that there are specific, limited circumstances in which cohort comparison studies can provide relatively strong or useful evidence. Occasionally one may compare groups which are really very similar (based on measurement of disease severity and other factors known to strongly predict the outcome) but which receive very different treatments and/or have very different outcomes. There are also circumstances where group assignment can be accurately modeled using variables unconnected with the outcome (strong instruments). And there are strong quasi-experimental designs (e.g. interrupted time series with randomized intervention timing, or planned regression discontinuity studies based on quantified assignment criteria) which are commonly ignored in the rush of the debate. The generalizability of results can also be greater in controlled studies of treatment in practice.
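The "strong instrument" idea can be sketched with a purely simulated example (all numbers are illustrative assumptions): a variable Z that shifts treatment but is unrelated to the unmeasured confounder lets the simple Wald ratio -- (effect of Z on outcome) divided by (effect of Z on treatment) -- recover the true effect that the naive treated-vs-untreated comparison overstates.

```python
# Instrumental-variable sketch: Z (e.g. randomized timing of an offer)
# shifts treatment uptake but is independent of the unmeasured
# confounder u, so the Wald ratio recovers the true effect.
import random

random.seed(2)
N = 200_000
TRUE_EFFECT = 2.0

y_by_z = {0: [], 1: []}   # outcomes grouped by instrument value
t_by_z = {0: [], 1: []}   # treatment grouped by instrument value
y_by_t = {0: [], 1: []}   # outcomes grouped by treatment (naive view)

for _ in range(N):
    z = random.random() < 0.5                 # instrument, randomly assigned
    u = random.gauss(0, 1)                    # unmeasured confounder
    treated = (u + (1.0 if z else 0.0) + random.gauss(0, 1)) > 0.5
    outcome = TRUE_EFFECT * treated + 2.0 * u + random.gauss(0, 1)
    y_by_z[int(z)].append(outcome)
    t_by_z[int(z)].append(float(treated))
    y_by_t[int(treated)].append(outcome)

mean = lambda xs: sum(xs) / len(xs)
naive = mean(y_by_t[1]) - mean(y_by_t[0])   # confounded: treated have higher u
wald = (mean(y_by_z[1]) - mean(y_by_z[0])) / (mean(t_by_z[1]) - mean(t_by_z[0]))
print(f"naive estimate:   {naive:.2f}")     # inflated by the confounder
print(f"Wald/IV estimate: {wald:.2f}")      # close to the true effect of 2
```

The catch, of course, is finding an instrument that is genuinely strong and genuinely unconnected with the outcome -- which is rare in routine administrative data.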
Correct inference of effectiveness is a huge and technically advanced issue. The topic needs much more discussion, education, and action than a brief email, or the oversimplifications heard at so many conferences, can provide. Can our discussions shed light on the issue?
Mark V. Johnston, Ph.D.
Professor, Occupational Therapy Department
College of Health Sciences, 963 Enderis Hall
University of Wisconsin - Milwaukee
2400 E. Hartford Ave.
Milwaukee, WI 53211
(414) 229-3616
----- Original Message -----
From: "Teresa Benson" <[log in to unmask]>
To: [log in to unmask]AC.UK
Sent: Tuesday, June 2, 2009 9:50:49 AM GMT -06:00 US/Canada Central
Subject: U.S. AHRQ symposium on comparative effectiveness research (CER)
Has anyone else been listening in to the AHRQ symposium on CER (broadcast via WebEx) yesterday and today? I was thrilled when our government allocated large funds for CER, but have been disturbed by the fact that most of the presenters are talking about mining data, not conducting huge multi-center randomized controlled trials. (Some presenters have suggested that if you have enough subjects in your database, an RCT really isn't necessary to determine effective treatments.) When one participant raised a concern about this, and specifically mentioned hormone replacement therapy as an example of where even large numbers of subjects can't overcome the fundamental design problem of using associations to declare effectiveness, two different presenters responded -- and yet both failed to answer the concern; one referred to smoking as an example of how we can "know" cause-and-effect based on "a preponderance" of observational research.
I could just write them off as being enamored of new technological capabilities, such as soon being able to mine large numbers of personal electronic health records instead of just claims data. But because AHRQ and the individuals presenting at this symposium have key roles in defining what is "evidence-based" (and thus payable by Medicare and other funders), as well as in directing all the new research funding, I'm concerned that cross-sectional and descriptive studies based on large databases will be declared "effectiveness" research, and be seen and used as evidence of "effectiveness" regardless of the absence of RCT-based evidence. If anyone else has been listening in on this webcast, I'd love to hear your thoughts on this.
Teresa Benson
McKesson Health Solutions
Prior Lake, MN, USA