JiscMail: Email discussion lists for the UK Education and Research communities

EVIDENCE-BASED-HEALTH Archives

EVIDENCE-BASED-HEALTH@JISCMAIL.AC.UK


Subject: RE: Some very naive questions
From: "Doggett, David" <[log in to unmask]>
Reply-To: Doggett, David
Date: Mon, 8 May 2000 10:56:27 -0400
Content-Type: text/plain
Parts/Attachments: text/plain (267 lines)

Your questions are of interest to us because we are presently involved in
technology assessments of several biofeedback topics.  You have asked
several rather controversial questions.  Some of them apply to the HCFA
process (which may or may not be evidence-based), and some of them are
challenges for evidence-based medicine in general.  I can only reply to them
briefly here in general terms.

The first questions involve the HCFA process.  You should be aware that
their present procedures are a work in progress.  They have received much
criticism other than yours, and they expect consumers and interested parties
to continue to give them feedback as they work on the process.  There is
some history to this.  While medical devices and pharmaceuticals in the US
are regulated by the FDA, medical procedures are not regulated by any
government agency.  The only regulation that occurs for procedures is in
terms of policy on coverage.  Several years ago the Office of Technology
Assessment did some assessing of procedures, and this information was made
available to the public and to coverage decision makers.  A powerful lobby
of clinicians who did not want their procedures officially assessed
influenced Congress to do away with the OTA.  Technology assessment is now
farmed out by the Agency for Healthcare Research and Quality (AHRQ) to the
private sector.  HCFA can tap into this information, or must develop the
needed information on its own.

Previously, HCFA's coverage decisions were made by unknown processes behind
closed doors.  Amid much consumer pressure, Congress recently required HCFA
to develop a defined procedure to arrive at its coverage decisions at least
partially in public view, with input from interested parties.  You are
seeing the first attempts at implementing this new policy.  While these
first attempts may not be pretty, the hope is that, with more experience and
continued constructive input from the public and interested parties, the
process will improve and become acceptable to all involved.  But this
process will always be the center of a tug-of-war between consumers and
clinicians who want their favorite procedures covered, and taxpayers and
their representatives who want reasonable limits.

Evidence-based medicine was originally developed by physicians as a set of
principles to systematically determine the level of evidence that supports
an intervention of interest.  But "who decides the questions" addressed by
EBM principles can be anyone from a consumer to policy makers.  When the EBM
approach identifies high quality evidence in the form of one or more well
designed, large randomized controlled trials, this is very helpful in
decision making.  At present, EBM groups, such as the Cochrane group,
have concentrated on searching for such high-quality evidence and placing
this evidence in accessible databases.  Unfortunately, there are not now,
nor are there likely to ever be, large RCTs on every possible intervention
and every new refinement of a previous intervention.  But there has been
very little effort directed toward how to use "the available evidence," of
whatever quality it might be, to make necessary decisions.  When clinicians
encounter situations for which there is not high quality evidence, judgement
and personal experience must be used.  But this situation takes on other
dimensions for third-party payers, whether government or private.  There
are unavoidable incentives to develop a knee-jerk attitude that nothing will
be covered unless there is good large RCT data.  This perfectionist approach
may not be sustainable over all decisions that must be made.  This will
always be a major difficulty for policy makers and is a serious challenge to
the proponents of EBM.  Nevertheless, it will always be the responsibility
of the promoters of new technologies to provide solid evidence for
interventions before asking consumers to foot the bill, whether as tax
payers or as workers whose pay is funneled into insurance companies or HMOs.

Another question you ask is whether an intervention should be compared to
placebo or to another available intervention.  The answer is both.  The
policy decision must hinge on how the new intervention compares, not to
placebo, but to other available interventions.  It is the best intervention
that we are after, not just anything that works better than nothing.
However, without a placebo control, there is no way to know if the competing
intervention has been carried out effectively.  It could be an ineffective
"strawman" that makes the new intervention look better than it really is (as
happened recently in a South African trial of high-dose chemotherapy for
metastatic breast cancer).  If a placebo control is ethically or practically
prohibitive, then the competing intervention's results must be compared
critically to historical controls.  By the same token, as you correctly
point out, there must be some effort to assure that the new intervention is
carried out with reasonable competence.  This is a particular problem with
biofeedback.  This would all seem to be common-sense research, whether it is
called EBM or not.  If researchers and policy makers use unreliable methods
of research and research analysis, this reflects on the individuals and
organizations involved, and should not be blamed on EBM principles that may
have been violated.
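
To make the "strawman" point concrete with purely hypothetical numbers (none
of these counts come from the trials discussed above), a new intervention can
beat an active comparator and still be of doubtful value if that comparator
cannot itself be distinguished from placebo.  A rough sketch in Python, using
a pooled two-proportion z-test:

from math import sqrt
from statistics import NormalDist

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-sided pooled two-proportion z-test: returns (z statistic, p-value)."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return z, 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical response counts: new intervention 60/100, comparator 45/100,
# placebo 40/100.
print(two_proportion_z(60, 100, 45, 100))  # new vs. comparator: p is about 0.03
print(two_proportion_z(45, 100, 40, 100))  # comparator vs. placebo: p is about 0.47,
                                           # so the comparator may be a "strawman"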

I can only urge you (1) to stay active in a constructive way as HCFA works
out its new methods for determining coverage, and (2), as a proponent of
biofeedback, to be active in developing reliable evidence that will be useful
to technology assessors in and out of government.  Call it evidence-based
medicine if you want to, but we are really only talking about good
scientific research, meaning well described studies with good positive and
negative controls.  

David L. Doggett, Ph.D.
Senior Medical Research Analyst
Technology Assessment Group
ECRI, a non-profit health services research organization
5200 Butler Pike
Plymouth Meeting, PA 19462-1298, USA
Phone: +1 (610) 825-6000 ext.5509
Fax: +1 (610) 834-1275
E-mail: [log in to unmask]


> -----Original Message-----
> From:	[log in to unmask] [SMTP:[log in to unmask]]
> Sent:	Tuesday, May 02, 2000 10:59 PM
> To:	EBHC
> Subject:	Some very naive questions
> 
> Dear List:
> Please forgive the intrusion of a retired psychologist from the 'States,
> but I urgently need suggestions from members of this list.  I have learned
> everything I know about "evidence-based-medicine" in the past six months,
> and now barely know enough to ask, hopefully, the right questions.
> 
> THE BACKGROUND
> As you may know, in the USA a federal agency called the "Health Care
> Financing Administration" (HCFA) administers the national health insurance
> program, "Medicare".  Although this program pertains only to the elderly,
> the majority of private health insurance programs look to Medicare for
> leadership in evaluating technology and coverage issues, so its
> influence extends far beyond its mandate.
> 
> Recently a new generation of staff members at HCFA decided to use an
> "evidence-based-medicine" approach to reach coverage decisions.   The new
> approach was applied first to the use of "biofeedback" and "electrical
> stimulation" in the treatment of urinary incontinence.
> 
> (Biofeedback is much more highly developed in the United States than it is
> in Europe, where Electrical Stimulation has been widely used for many
> years.  But, as a reasonable estimate, 90% of the commercial support for
> biofeedback comes from very small businesses (fewer than 20 employees), which
> are not in a position to mount a serious challenge to government policy.)
> 
> Last fall HCFA invited professional organizations to submit written
> testimony by early December in preparation for public hearings in January,
> 2000.  At the last minute HCFA postponed the hearings and announced that
> they would be held on April 12-13.  Still, all professional organizations
> were under the assumption that their prepared testimony had been submitted
> to the "Medical - Surgical Panel" of the
> "Medicare Coverage Advisory Committee" for consideration.
> 
> Less than a month before the hearings professional organizations were
> informed that none of their prepared testimony had been distributed, but
> instead, an "Evidence-based" evaluation of these technologies, prepared by
> the "Technology Evaluation Center" of the "Blue Cross Blue Shield
> Association", would be the sole evidence debated at the Panel meeting.
> The professional organizations, which included the American Urological
> Association, American College of Gynecologists, the American
> UroGynecological Society, as well as the Association for Applied
> Psychophysiology and Biofeedback, the Agency for Health Care Policy and
> Research, as well as the national physical therapists' and nurses'
> organizations -- all scrambled to revise their testimony in two short
> weeks to accommodate the new rules for debate.  [BTW, NO professional
> organization dissented on the value of biofeedback, and only one on
> electrical stimulation.]
> 
> Two days before the public hearings, HCFA posted to its website a list of
> "official" questions which would be addressed to the panel.  And it was
> only at the hearing itself that these questions were formulated into a
> formal "flow chart", such that if the panel could not conclude that there
> was sufficient "evidence-based" research PROVING the effectiveness of
> these technologies, it was not allowed even to address the issue of
> whether there was nonetheless sufficient evidence to justify
> coverage of them in the absence of conclusive evidence.
> 
> THE VOTE ON SCIENCE
> After reviewing the TEC report, the physician-members of the Panel voted 9
> to 2 that the evidence was not conclusive for biofeedback.  All of the
> males voted against biofeedback, while the two females voted in favor of
> biofeedback.   Several male panel members remarked later that they agreed
> with the principle voiced by one of them, that "the level of evidence in
> science is 0.05, but the level in policy decisions is 0.50".  But the
> panel was NOT allowed to discuss policy.  
> 
> As one might imagine, the process used by HCFA is now the subject of a
> popular uprising that may or may not generate sufficient political
> pressure to reverse the results of the HCFA hearings.  That is another
> topic in its own right.  I have joined this list to pursue a different
> tack.
> 
> MY NAIVE QUESTIONS
> 1. Who decides which questions are addressed in "evidence-based-medicine?"
> The TEC report investigated only those RCTs which compared "Biofeedback"
> with "Pelvic Muscle Exercises Alone".  In contrast, in the companion
> report, they investigated RCTs that (1) compared Stim to placebo (sham),
> as well as (2) Stim compared with alternative treatments.  
> 
> In the first instance, the Burgio et al 1999 JAMA study (which was given
> the highest methodological rating in Berghmans' recent review) was NOT
> considered at all because Burgio compared biofeedback to drugs and
> placebo, but NOT to PMEs alone.   [If the same criteria that were used for
> stim had been used for biofeedback, Burgio would have been included, with very
> different results.]   Is this kind of manipulation common in
> "evidence-based-medicine"?
> 
> 2. By what standards are RCTs dismissed from consideration because they do
> not present sufficient evidence of having guarded against selection bias,
> etc.?   The TEC report(s) never established that several studies WERE
> defective in these areas, but merely intimated that they MIGHT be.
> 
> 3. To what extent has "evidence-based-medicine" investigated behavioral
> therapies such as biofeedback and pelvic muscle exercise?  Clearly there
> are many areas of medicine that lend themselves to clear and simple
> analysis.  In pharmacological studies, for example, it is fairly simple to
> ask how many milligrams of the active ingredient were used.   There is
> some controversy in the incontinence field, however.   In some of the
> studies that were classified as "Pelvic Muscle Exercises alone", subjects
> actually received repeated vaginal evaluations by a trained
> physiotherapist who coached subjects on how to contract their pelvic
> muscles.  These studies were then compared with studies where the
> "feedback" was through a machine only.   In the two largest studies
> (Burns, 1993;  Berghmans 1996) verbal feedback was not statistically
> different, in outcome, from machine-feedback.  (In the largest study of
> all, Burgio 1999, machine feedback was dramatically superior to drugs or
> placebo, but that comparison wasn't included.)
> 
> 4. How do you experts account for the problems that may be created by
> uneven or incompetent treatments being tested?   In drug studies, for
> instance, it seems warranted to assume that the subjects who were given
> "50 mg of drug X" really GOT "50 mg of drug X", and those who got placebos
> got placebos.  Drug manufacturers are required to employ chemists who
> certify the integrity of their products.  
> 
> But in behavioral therapies, who is to decide if the subjects who got six
> sessions of biofeedback training REALLY GOT six sessions of competent
> biofeedback training?  The therapist in Burns' study was never formally
> trained in biofeedback techniques. At the time, her biofeedback subjects
> set a new record LOW for treatment success (61% Sx reduction).  The
> subjects in Berghmans' study used an electrical stimulation electrode
> (circular electrodes) instead of an EMG sensor (longitudinal electrodes);
> they then set a new record for poor results from biofeedback (55% Sx
> reduction).   Would it be considered appropriate to compare surgeries
> performed by first year medical students with surgeons who are board
> certified?
> 
> So my question is, who decides if the "treatment" meets minimal
> professional standards? Or isn't that a consideration in
> "evidence-based-medicine"?  If so, isn't that a serious limitation of
> evidence-based-medicine?  Or are these factors normally considered?
> 
> Well, I have more questions, but that's enough for now.   You can find the
> relevant documents at the HCFA website, http://www.hcfa.gov/quality/8b.htm
> and follow the links to biofeedback and Incontinence.  The documents from
> our professional societies are available on my website,
> http://www.incontinet.com; there are detailed descriptions of available
> testimony in the first three paragraphs in the text of the home page.
> 
> On behalf of tens of thousands of urologists, nurses, physicians, physical
> therapists, and their many patients, I hope that some of the members of
> this list will look into this subject and provide guidance as to the
> adequacy of the "evidence-based-medicine" being promoted by our
> government.  It is even a possibility that this recent activity will
> reflect badly upon the "evidence-based" movement, at least in the United
> States.
> 
> Thank you in advance.
> 
> John D. Perry, PhD, MDiv, BCIA, FAACS
> Committee on HCFA of the Association for
> Applied Psychophysiology and Biofeedback
> 
> 1192 Lakeville Circle
> Petaluma, CA 94954 USA
> 

