Your praise of the motivation for comparative effectiveness research (CER) is welcome. Thanks especially for the methods articles: the challenge of doing valid CER is daunting in the absence of more controlled trials comparing new interventions to a sensible treatment alternative.

Mark V. Johnston, Ph.D.,
Professor, Occupational Therapy Department,
College of Health Sciences, 963 Enderis Hall,
University of Wisconsin - Milwaukee
2400 E. Hartford Ave.,
Milwaukee, WI 53211
(414) 229-3616


----- Original Message -----
From: "Jeremy Howick" <[log in to unmask]>
To: [log in to unmask]
Sent: Wednesday, July 1, 2009 8:48:39 AM GMT -06:00 US/Canada Central
Subject: Praising the motivation for CER research, and a call for better methods

Dear All,

In order to make a sound therapeutic judgment, the clinician must know which, from among the available established alternative therapies, is the most effective (or most convenient, or cheapest). An RCT comparing a new agent with a placebo does not answer this clinically relevant question. The praiseworthy motivation for CER is that it addresses exactly this question. The problem is that the well-known weaknesses of observational studies remain problematic...

Is there a way to satisfy the motivation for CER without risking an invalid answer? Teresa Benson mentioned large-scale RCTs, and this is the ideal solution. However, the cost of conducting such RCTs might be prohibitive, so meta-analyses of the existing RCTs might be more practical and deserve more attention. See Song et al. (2009) and Glenny et al. (2005) for a discussion of appropriate methods, and Becker for a good description of why we need 'umbrella reviews':

1.  http://www.bmj.com/cgi/content/full/338/apr03_1/b1147
2. http://www.hta.ac.uk/execsumm/summ926.htm
3. http://www.slideshare.net/Cochrane.Collaboration/umbrella-reviews-what-are-they-and-do-we-need-them-160605

Jeremy

>>> "Djulbegovic, Benjamin" <[log in to unmask]> 06/27/09 6:53 PM >>>

As David Sackett famously stated, "12 x bias is still bias."
bd


From: Evidence based health (EBH) [mailto:[log in to unmask]] On Behalf Of Tom Jefferson
Sent: Wednesday, June 03, 2009 7:06 AM
To: [log in to unmask]
Subject: Re: U.S. AHRQ ...comparative effectiveness research (CER)

Dear all, I agree with the sentiments expressed in this debate. We have a very good example of confounders at work in the topic of influenza vaccines in the elderly (see attached explanation). Confounding comes from the use of data-linked reimbursement data. These are huge datasets which have had a great impact on decision-makers and editors alike. Few of them have looked closely at the data and their logical implications. One of the potentially negative effects of the use of these data is providing a dubious rationale for not carrying out randomised controlled trials that would stand a better chance of giving us a definitive answer on the effects of current influenza vaccines in the elderly.

The attached is just one example of the now quite large literature on this specific topic.

Best wishes,

Dr Tom Jefferson
Via Adige 28
00061 Anguillara Sabazia
(Roma)
Italy
tel 0039 3292025051

2009/6/2 Mark V. Johnston, Ph.D. <[log in to unmask]>
I share your concern. People really believe that large numbers, combined with matching or covariance analysis based on whatever variables are available, will yield reliable evidence of treatment effectiveness. They want an easy way out, avoiding the difficulties of planning a controlled trial. There are, however, any number of examples showing that uncontrolled causal inferences from observational studies are fraught with great risk. Although I have spent most of my career doing correlational outcomes research, I have moved to the advocacy and conduct of RCTs.

Despite the need for more RCTs, I believe we should acknowledge that there are specific, limited circumstances in which cohort comparison studies can provide relatively strong or useful evidence. Occasionally one may compare groups which are really very similar (based on measurement of disease severity and other factors known to strongly predict the outcome) but which receive very different treatments and/or have very different outcomes. There are also circumstances where group assignment can be accurately modeled using variables unconnected with outcomes (strong instruments). There are also strong quasi-experimental designs (e.g., interrupted time series with randomized intervention timing, planned regression discontinuity studies based on quantified assignment criteria) which are commonly ignored in the rush of the debate. Generalizability of results can be greater in controlled studies of treatment in practice.

Correct inference of effectiveness is a huge and technically advanced issue. The topic needs much more discussion, education, and action than a brief email or the oversimplifications heard in so many conferences. Can our discussions shed light on the issue?

Mark V. Johnston, Ph.D.,
Professor, Occupational Therapy Department,
College of Health Sciences, 963 Enderis Hall,
University of Wisconsin - Milwaukee
2400 E. Hartford Ave.,
Milwaukee, WI 53211
(414) 229-3616


----- Original Message -----
From: "Teresa Benson" <[log in to unmask]>
To: [log in to unmask]
Sent: Tuesday, June 2, 2009 9:50:49 AM GMT -06:00 US/Canada Central
Subject: U.S. AHRQ symposium on comparative effectiveness research (CER)

Has anyone else been listening in to the AHRQ symposium on CER (broadcast via WebEx) yesterday and today? I was thrilled when our government allocated large funds for CER, but have been disturbed by the fact that most of the presenters are talking about mining data, not conducting huge multi-center randomized controlled trials. (Some presenters have suggested that if you have enough subjects in your database, an RCT really isn't necessary to determine effective treatments.) When one participant raised a concern about this, and specifically mentioned hormone replacement therapy as an example of where even large numbers of subjects can't overcome the fundamental design problem of using associations to declare effectiveness, two different presenters responded, and yet both failed to answer the concern; one referred to smoking as an example of how we can "know" cause and effect based on "a preponderance" of observational research.



I could just write them off as being enamored by new technological capabilities, such as soon being able to mine large numbers of personal electronic health records instead of just claims data. But because AHRQ and the individuals presenting at this symposium have key roles in defining what is "evidence-based" (and thus payable by Medicare and other funders), as well as in directing all the new research funding, I'm concerned that cross-sectional and descriptive studies based on large databases will be declared "effectiveness" research, and be seen and used as evidence of effectiveness regardless of the absence of RCT-based evidence. If anyone else has been listening in on this webcast, I'd love to hear your thoughts on this.



Teresa Benson

McKesson Health Solutions

Prior Lake, MN, USA

[log in to unmask]


