This seems to be rather like abduction and retroduction, Geoff.

Bill

From: Realist and Meta-narrative Evidence Synthesis: Evolving Standards [mailto:[log in to unmask]] On Behalf Of Geoff Wong
Sent: Friday, 14 November 2014 7:28 AM
To: [log in to unmask]
Subject: Re: Example of causal inference in evaluation without a counterfactual

I can't help with "... any good examples of an impact evaluation that uses non-counterfactual causal inference" I am afraid. But I did want to pick up on Ray's comment, "So you are right about the desperate need for good examples of non-counterfactual causal explanations. Finding them in medicine would be the ultimate coup de grace."

I may be wrong here, but I do think that medics use (what I think is a form of) non-counterfactual causal explanations all the time in their day-to-day work. Not for trials or evaluations, but when trying to make a diagnosis for their patient. In such a situation, there is no 'control' condition, just a single case.
The classic model of reaching a diagnosis is meant to be hypothetico-deductive. I am sure it has a formal definition, but in essence it is as follows:
- listen to the patient's account
- based on the story and how it might fit in with existing medical knowledge about illnesses speculate as to what the diagnosis might be
- ask some more questions to confirm / refute diagnosis
- listen to the patient's account
- based on the story and how it might fit in with existing medical knowledge about illnesses speculate as to what the diagnosis might be
And so on...
And add in examinations / tests as additional means of generating data to confirm / refute the diagnosis
(Incidentally, as clinicians become more experienced they rely less on the above and instead use pattern recognition and heuristics.)
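To make the loop above concrete, here is a minimal, purely illustrative Python sketch of the confirm/refute cycle. The condition names, features and the KNOWLEDGE table are invented for demonstration and are not drawn from any real clinical source.

# Purely illustrative sketch of the hypothetico-deductive diagnostic loop.
# All conditions and features below are made up for demonstration.
KNOWLEDGE = {
    "condition_A": {"feature_1", "feature_2"},
    "condition_B": {"feature_1", "feature_3"},
    "condition_C": {"feature_4"},
}

def diagnose(ask, max_rounds=10):
    """Speculate from the story so far, then ask further questions
    (or run tests) to confirm / refute until one candidate remains."""
    observed, ruled_out = set(), set()
    candidates = set(KNOWLEDGE)
    for _ in range(max_rounds):
        # speculate: keep only conditions consistent with the evidence so far
        candidates = {d for d in candidates
                      if observed <= KNOWLEDGE[d] and not (KNOWLEDGE[d] & ruled_out)}
        if len(candidates) <= 1:
            return candidates
        # choose a feature that could discriminate between remaining candidates
        features = set.union(*(KNOWLEDGE[d] for d in candidates)) - observed - ruled_out
        if not features:
            return candidates
        feature = sorted(features)[0]
        # ask a further question / order a test; the answer confirms or refutes
        if ask(feature):
            observed.add(feature)
        else:
            ruled_out.add(feature)
    return candidates

# Example: a 'patient' whose answers fit condition_B
print(diagnose(lambda f: f in {"feature_1", "feature_3"}))

Of course, real diagnostic reasoning weighs probabilities, costs and risks rather than simple set membership; the point is only the iterative confirm/refute cycle applied to a single case, with no control condition.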

So what?
Well, perhaps a way of explaining things to us medics would be to use the analogy of making a diagnosis.
Might this help in triggering a 'Eureka!' moment about the value of using non-counterfactual causal inferences?

Geoff


On 12 November 2014 11:08, Raymond Pawson <[log in to unmask]> wrote:
Hi Patricia (and all)

Now this is really important. Real causal inferences in science are made in the manner you suggest. Theories are built, contested and refined over time using a plethora of different empirical tests – none of them decisive in isolation. This is well described by the philosophers of science – Popper, Lakatos, Campbell (the real one) etc. Hill’s criteria capture the balance of many considerations that contribute to strong causal explanations and remain influential in the public health community. None of these sources translate simply into programme evaluation.

The great irony is that these ‘conjectures and refutations’ models describe perfectly the many different empirical tests that are applied in the lengthy process of developing clinical interventions. These start in basic science, with pre-clinical work hypothesising the underlying disease pathology and the potential mechanisms of action that might target the particular condition. Then there is a phase of therapeutic discovery in which compounds and techniques are tried and refined in an attempt to embody the conjectured mechanisms, and then laboratory tested to see if the initial explanation holds promise. Then, still in pre-clinical phases, there are tests, for instance, of the absorption, distribution, metabolism and excretion of a drug. Only then do we get to effectiveness with patients and the three phases of clinical trials, starting with proof-of-concept, dose-finding and safety studies, moving eventually to RCTs and regulatory proof. There is even a further stage of long-term follow-up and the detection of ‘rare events’. Failure is commonplace at any stage – in which case the hypotheses are refined and the hypothesis-testing cycle is resumed.

Here’s the problem. A) Cochranites have managed to convince themselves (and much of the medical establishment) that only the penultimate stage counts in assessing causality and effectiveness. B) Even those keenly attuned to the full cycle explanation often don’t like to use the language of ‘theory’ testing (they often prefer rather technical or mechanical accounts of each individual stage).

So you are right about the desperate need for good examples of non-counterfactual causal explanations. Finding them in medicine would be the ultimate coup de grace. Rather late in the day, I’m trying to become an amateur medic by piecing together the full story as above. Any good sources graciously welcomed!

Thanks by the way for the links to Julian and Jane’s excellent work.

RAY
________________________________________
From: Realist and Meta-narrative Evidence Synthesis: Evolving Standards [[log in to unmask]] On Behalf Of Patricia Rogers [[log in to unmask]]
Sent: Tuesday, November 11, 2014 9:36 PM
To: [log in to unmask]
Subject: Example of causal inference in evaluation without a counterfactual

Dear RAMESES colleagues,

After such great suggestions of examples of realist evaluations, I'd like to make a related request.

Can you suggest any good examples of an impact evaluation that uses non-counterfactual causal inference - that is, doing lots of small tests that the data fit the theory of a causal relationship, and ruling out alternative explanations?

While there's good material that discusses these strategies (e.g. Bradford Hill's classic 1965 paper<http://www.edwardtufte.com/tufte/hill> on Edward Tufte's site, Julian King's new e-book<http://www.julianking.co.nz/wp-content/uploads/2014/08/140826-BHC-web.pdf> on the Bradford Hill criteria, and Jane Davidson's webinar<http://betterevaluation.org/events/coffee_break_webinars_2013#webinarPart5> on causal inference), I struggle to find good examples that can be used in a workshop to clearly demonstrate the logic of non-counterfactual causal inference, that can be readily understood, and that are clearly relevant.

The old epidemiological examples of John Snow's investigation of cholera in London and the link between lung cancer and smoking explain the logic, but are too often dismissed as not relevant because they concern "the cause of an effect, not the effect of a cause" - as in an impact evaluation of a known program. (I think this is a specious argument, but that's a hard position to change.)

I'd really appreciate suggestions of good examples I can add to the BetterEvaluation site <http://betterevaluation.org/plan/understandcauses> and share in workshops and presentations.

Patricia Rogers
