A paper of mine
Senn, SJ. Added Values: Controversies concerning randomization and additivity in clinical trials, Statistics in Medicine 2004; 23: 3729-3753.
discusses some of the issues. I point out that there are a number of questions we might have in connection with a trial:
Q1. Was there an effect of treatment in this trial?
Q2. What was the average effect of treatment in this trial?
Q3. Was the treatment effect identical for all patients in the trial?
Q4. What was the effect of treatment for different subgroups of patients?
Q5. What will be the effect of treatment when used more generally (outside of the trial)?
One may (following Hume) doubt that Q5 can ever be satisfactorily answered, but this does not mean that Q1 cannot be answered, and it is this that the standard machinery is meant to address.
This is my point of view:
"Given an assumption of what might be called local (or weak) additivity, that is to say that the effect of treatment was identical for all patients in the trial (in other words that the answer to Q3 is 'yes'), then Q1, Q2, & Q4 can all be answered using the same analysis: a confidence interval or posterior distribution for the mean effect of treatment says it all. The effect on each patient is the average effect Q2 and is hence the effect in every subgroup Q4 and if it is implausible that this effect is zero, then the treatment has an effect Q1. Given a further assumption of universal (or strong) additivity, this observed effect is the effect to every patient to whom it might be applied; this also provides an answer to Q5.
Now, in my view, nobody believes literally in local, let alone universal additivity. However,
there are circumstances under which there is no point in worrying about it, primarily when
there is nothing much that can be done about the possible lack of it: for example, if we have
run a simple randomized trial in which we have failed to collect any covariate information
and if ethical considerations prevent us from running further trials. Suppose that we have
shown that on average a new treatment is (highly) effective in the patients we have studied. It may be that this effect is an average of exceptional benefit for some patients and none at all for others, but unless we can identify the sort of patient for whom it works there really is no choice but to use the average to inform our decisions. Of course, if the data permit, then some attempt should be made to find evidence regarding Q3."
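Senn's point that, under local additivity, a single confidence interval answers Q1, Q2, and Q4 at once can be illustrated with a toy simulation (my illustration, not from Senn's paper; the constant effect size, sample sizes, and the 1.96 large-sample normal approximation are all assumptions of the sketch):

```python
# Toy sketch: under local additivity (Q3 = "yes"), the CI for the mean
# treatment effect answers Q1, Q2, and Q4 in one analysis.
import random
import statistics

random.seed(42)

TRUE_EFFECT = 2.0  # assumed identical for every patient (local additivity)
n = 200

# Simulated two-arm trial: treatment shifts every outcome by the same amount.
control = [random.gauss(10.0, 3.0) for _ in range(n)]
treated = [random.gauss(10.0 + TRUE_EFFECT, 3.0) for _ in range(n)]

# Q2: the average effect of treatment in this trial.
diff = statistics.mean(treated) - statistics.mean(control)

# Standard error of the difference in means; 1.96 is the large-sample
# normal approximation to the 95% confidence interval.
se = (statistics.variance(treated) / n + statistics.variance(control) / n) ** 0.5
ci = (diff - 1.96 * se, diff + 1.96 * se)

print(f"estimated effect: {diff:.2f}, 95% CI: ({ci[0]:.2f}, {ci[1]:.2f})")
# Q1: if the CI excludes zero, there is evidence of a treatment effect.
# Q4: under additivity, this same interval is the effect in every subgroup.
```

The sketch simply makes the logic concrete: because the effect is the same for everyone, the one interval is simultaneously the average effect, the subgroup effect, and the evidence of an effect.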
Stephen
-----Original Message-----
From: Evidence based health (EBH) [mailto:[log in to unmask]] On Behalf Of Jim Walker
Sent: 28 January 2011 16:00
To: [log in to unmask]
Subject: Re: Can RCT help establish causation?
Hi Ben.
I would think that the operative question might come in two strengths:
Weak: "What study designs can provide strong-enough evidence that a relationship is causal that to act on that evidence (particularly when physiologically and otherwise plausible) would widely be considered responsible?" (E.g., statins for secondary prevention of CAD)
Strong: "What study designs can provide strong-enough evidence that a relationship is causal that not to act on that evidence would widely be considered irresponsible?" (E.g., aspirin in acute MI)
This approach grants that Hume set a standard that renders the proof of causality impossible (by definition)--and regards that standard as irrelevant to life in the world, where the issue is to balance the resources for creating evidence against the estimated impacts of Type I and Type II errors, and so forth. (See Nobel laureate Herbert Simon's concept of satisficing, reference below.) For example, if the patient is poised to die, much weaker evidence is needed to justify an intervention than in the case of proposed disease prevention in asymptomatic individuals.
Simon, H. (1971). Designing organizations for an information-rich world. In M. Greenberger (Ed.), Computers, Communications, and the Public Interest. Baltimore, p. 44.
Best regards.
Jim
James M. Walker, MD, FACP
Chief Health Information Officer
Geisinger Health System
>>> "Djulbegovic, Benjamin" <[log in to unmask]> 1/28/2011 10:06 AM >>>
Dear all
I'd like to pose a question to the group that I have been thinking about for some time... Is there a scientific method that allows us to LOGICALLY distinguish cause and effect from coincidence? David Hume, one of the most influential philosophers of all time, concluded that there is no such method. This was before RCTs were "invented". Many people have made cogent arguments that a (well done) RCT is the ONLY method that allows us to draw inferences about causation. Because this is not possible in observational studies, RCTs are considered (all other things being equal) to provide more credible evidence than non-RCTs. However, some philosophers have challenged this supposedly unique feature of RCTs: they claim that RCTs cannot (on theoretical and logical grounds) establish the relationship between cause and effect any better than non-RCTs. I would appreciate some thoughts from the group:
1. Can an RCT distinguish cause and effect from coincidence? (Under which theoretical conditions?)
2. If the answer is "no", is there any other method that can help establish a cause-and-effect relationship?
I believe the answer to this question is of profound relevance to EBM.
Thanks
Ben Djulbegovic