Dear Mel
Remember the quote from Disraeli, the 19th-century English politician: "There
are lies, damned lies and statistics." Regards, Kevin
-----Original Message-----
From: [log in to unmask] <[log in to unmask]>
To: [log in to unmask] <[log in to unmask]>
Date: 16 September 1999 01:47
Subject: SCIENTIFIC PROOF
>Here is an interesting extract from an article by a scientist who raises
>questions about the application and interpretation of statistics in medicine
>and health.
>
>< The Great Health Hoax
>
>By Robert Matthews
>
>Home Page: http://ourworld.compuserve.com/homepages/rajm/
>---------------------------------------------------------------------------
>
>There seemed no doubt about it: if you were going to have a heart attack,
>there was never a better time than the early 1990s. Your chances of survival
>appeared to be better than ever. Leading medical journals were reporting
>results from new ways of treating heart attack victims whose impact on
>death rates wasn't just good - it was amazing.
>
>In 1992, trials in Scotland of a clot-busting drug called anistreplase
>suggested that it could double the chances of survival. A year later,
>another "miracle cure" emerged: injections of magnesium, which studies
>suggested could also double survival rates. Leading cardiologists hailed the
>injections as an "effective, safe, simple and inexpensive" treatment that
>could save the lives of thousands.
>
>But then something odd began to happen. In 1995, the Lancet published the
>results of a huge international study of heart attack survival rates among
>58,000 patients - and the amazing life-saving abilities of magnesium
>injections had simply vanished. Anistreplase fared little better: the
>current view is that its real effectiveness is barely half that suggested by
>the original trial.
>
>In the long war against Britain's single biggest killer, a few
>disappointments are obviously inevitable. And over the last decade or so,
>scientists have identified other heart attack treatments which in trials
>reduced mortality by up to 30 percent.
>
>But again, something odd seems to be happening. Once these drugs get out of
>clinical trials and onto the wards, they too seem to lose their amazing
>abilities.
>
>Last year, Dr Nigel Brown and colleagues at Queen's Medical Centre in
>Nottingham published a comparison of death rates among heart attack patients
>for 1989-1992 and those back in the clinical "Dark Ages" of 1982-84, before
>such miracles as thrombolytic therapy had shown success in trials. Their aim
>was to answer a simple question: just what impact have these "clinically
>proven" treatments had on death rates out on the wards?
>
>Judging by the trial results, the wonder treatments should have led to death
>rates on the wards of just 10% or so. What Dr Brown and his colleagues
>actually found was, to put it mildly, disconcerting. Out on the wards, the
>wonder drugs seem to be having no effect at all. In 1982, the death rate
>among patients admitted with heart attacks was about 20%. Ten years on, it
>was the same: 20% - double the death rate predicted by the clinical trials.
>
>In the search for explanations, Dr Brown and his colleagues pointed to the
>differences between patients in clinical trials - who tend to be hand-picked
>and fussed over by leading experts - and the ordinary punter who ends up in
>hospital wards. They also suggested that delays in patients arriving on the
>wards might be preventing the wonder drugs from showing their true value.
>
>All of which would seem perfectly reasonable - except that heart attack
>therapies are not the only "breakthroughs" that are proving to be damp squibs
>out in the real world.
>
>Over the years, cancer experts have seen a host of promising drugs dismally
>fail once outside clinical trials. In 1986, an analysis of cancer death
>rates in the New England Journal of Medicine concluded that "Some 35 years of
>intense effort focused largely on improving treatment must be judged a
>qualified failure". Last year, the same journal carried an update: "With 12
>more years of data and experience", the authors said, "We see little reason
>to change that conclusion".
>
>Scientists investigating supposed links between ill-health and various "risk
>factors" have seen the same thing: impressive evidence of a "significant"
>risk - which then vanishes again when others try to confirm its existence.
>Leukaemias and overhead pylons, connective tissue disease and silicone breast
>implants, salt and high blood pressure: all have an impressive heap of
>studies pointing to a significant risk - and an equally impressive heap
>saying it isn't there.
>
>It is the same story beyond the medical sciences, in fields from psychology
>to genetics: amazing results discovered by reputable research groups which
>then vanish again when others try to replicate them.
>
>Much effort has been spent trying to explain these mysterious cases of The
>Vanishing Breakthrough. Over-reliance on data from tiny samples, the
>reluctance of journals to print negative findings from early studies,
>outright cheating: all have been put forward as possible suspects.
>
>Yet the most likely culprit has long been known to statisticians. A clue to
>its identity comes from the one feature all of these scientific disciplines
>have in common: they all rely on so-called "significance tests" to gauge the
>importance of their findings.
>
>First developed in the 1920s, these tests are routinely used throughout the
>scientific community. Thousands of scientific papers and millions of pounds
>of research funding have been based on their conclusions. They are ubiquitous
>and easy to use. And they are fundamentally and dangerously flawed.
>
>Used to analyse clinical trials, these textbook techniques can easily double
>the apparent effectiveness of a new drug and turn a borderline result into a
>highly "significant" breakthrough. They can throw up convincing yet utterly
>spurious evidence for "links" between diseases and any number of supposed
>causes. They can even lend impressive support to claims for the existence of
>the paranormal.
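>
>[A short simulation may make the inflation effect concrete. This sketch is
>not from Dr Matthews' article, and all of its numbers are invented for the
>demonstration: when a trial is small, only the runs where sampling noise
>happens to exaggerate the benefit cross the significance threshold, so the
>trials that report a "significant" result systematically overstate the true
>effect.]

```python
# Illustrative simulation (hypothetical numbers): small "significant"
# trials overstate the true effect, because only noisy over-estimates
# clear the significance threshold.
import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.2      # true mean benefit, in standard-deviation units
N = 25                 # patients per arm - a small trial
CRITICAL_Z = 1.96      # two-sided p < 0.05 threshold
SE = (2 / N) ** 0.5    # standard error of the difference in means

significant_estimates = []
for _ in range(20000):
    # Observed effect = true effect + sampling noise
    observed = random.gauss(TRUE_EFFECT, SE)
    if observed / SE > CRITICAL_Z:          # trial "finds" the effect
        significant_estimates.append(observed)

inflation = statistics.mean(significant_estimates) / TRUE_EFFECT
print(f"true effect: {TRUE_EFFECT}")
print(f"mean effect among 'significant' trials: "
      f"{statistics.mean(significant_estimates):.2f}")
print(f"inflation factor: {inflation:.1f}x")
```

With these made-up parameters the surviving "significant" estimates run well over double the true effect, which is one mechanism by which a breakthrough can later "vanish" in a larger trial.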
>
>The very suggestion that these basic flaws in such widely-used techniques
>could have been missed for so long is astonishing. Altogether more
>astonishing, however, is the fact that the scientific community has been
>repeatedly warned about these flaws - and has ignored the warnings.
>
>As a result, thousands of research papers are being published every year
>whose conclusions are based on techniques known to be unreliable. The time
>and effort - and public money - wasted in trying to confirm the consequent
>spurious findings is one of the great scientific scandals of our time.
>
>The roots of this scandal are deep, having their origins in the work of an
>English mathematician and cleric named Thomas Bayes, published over 200 years
>ago. In his "Essay Towards Solving a Problem in the Doctrine of Chances",
>Bayes gave a mathematical recipe of astonishing power. Put simply, it shows
>how we should change our belief in a theory in the light of new evidence.
>
>One does not need to be a statistician to see the fundamental importance of
>"Bayes's Theorem" for scientific research. From studies of the cosmos to
>trials of cancer drugs, all research is ultimately about finding out how we
>should change our belief in a theory as new data emerge. . . .>
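>
>[As a concrete illustration of the updating rule Bayes's theorem describes -
>this example is mine, not part of the article, and its numbers are
>hypothetical: suppose only 1 in 10 candidate drugs truly works, a trial has
>80% power, and the false-positive rate is 5%. Bayes's theorem then gives the
>probability the drug works given a "positive" trial.]

```python
# Bayes's theorem:
#   P(works | positive) = P(positive | works) * P(works) / P(positive)
# All numbers below are hypothetical, chosen only for illustration.
prior = 0.10        # prior probability the drug really works
power = 0.80        # P(positive trial | drug works)
false_pos = 0.05    # P(positive trial | drug does not work)

# Total probability of a positive trial, across both possibilities
p_positive = power * prior + false_pos * (1 - prior)

# Posterior belief after seeing a positive result
posterior = power * prior / p_positive

print(f"P(drug works | positive trial) = {posterior:.2f}")  # prints 0.64
```

Even after a "significant" positive trial, the chance the effect is real is only 64% under these assumptions: the posterior depends on the prior plausibility of the claim, which is exactly what significance tests ignore.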
>
>[The rest of the article is much more technical and serves to offer
>scientific corroboration for the above article - see Dr Matthews' home page
>for further information.]
>---------------------------------------------
>
>Dr Mel C Siff
>Denver, USA
>[log in to unmask]