Dear Ray
That’s an impressive landscape of approaches to generating knowledge you’ve described. Teaching for international development this week, I’ve been emphasising the inherent subjectivity of impact evaluation – both in setting the questions and in finding the answers. I hope I was sharing the spirit of your email. Nevertheless, I probably count as a Cochranite, even if pushing for change from the inside.
One of the questions I struggled with during yesterday’s session is one I hope you or other members of this list may be able to answer: ‘What is the skill set for doing good qualitative analysis?’ This came from someone who could see the statistical skills needed for quantitative research but not the skills for qualitative research. Skills for study planning, management and data collection were straightforward. But does anyone have a really good way of explaining what happens in our heads when analysing qualitative data, and doing it well? How do we get to those aha! moments that will subsequently convince others too? I’m not convinced it’s very different whatever the formal method. Saying it’s inductive gives it a label but not an explanation. Explaining that analytical solutions often arrive when walking in the woods or peeling potatoes may be reassuring to students, but it doesn’t tell them how to do it. Is it possible to tell someone how to do it? Maybe there are lots of explanations, and they’re obvious to everyone except me – if you know some, please do share them.
Many thanks, Sandy
Sandy Oliver, PhD, Professor of Public Policy
Social Science Research Unit and EPPI-Centre, Institute of Education, University of London.
Public engagement with academic research: outsiders bring
(a) independence for oversight
(b) experiential knowledge for designing studies
(c) practical and problem solving skills for data collection and analysis, and
(d) an inquiring mind for research informed citizenship. http://bit.ly/YeT0w2
Twitter @profsandyoliver
________________________________________
From: Realist and Meta-narrative Evidence Synthesis: Evolving Standards <[log in to unmask]> on behalf of Raymond Pawson <[log in to unmask]>
Sent: 12 November 2014 11:08
To: [log in to unmask]
Subject: Re: Example of causal inference in evaluation without a counterfactual
Hi Patricia (and all)
Now this is really important. Real causal inferences in science are made in the manner you suggest. Theories are built, contested and refined over time using a plethora of different empirical tests – none of them decisive in isolation. This is well described by the philosophers of science – Popper, Lakatos, Campbell (the real one) etc. Hill’s criteria capture the balance of many considerations that contribute to strong causal explanations and remain influential in the public health community. None of these sources translate simply into programme evaluation.
The great irony is that these ‘conjectures and refutations’ models describe perfectly the many different empirical tests that are applied in the lengthy process of developing clinical interventions. These start in basic science, with pre-clinical work hypothesising the underlying disease pathology and potential mechanisms of action that might target the particular condition. Then there is a phase of therapeutic discovery in which compounds and techniques are tried and refined in an attempt to embody the conjectured mechanisms, and then laboratory tested to see if the initial explanation holds promise. Then, still in the pre-clinical phases, there are tests, for instance, of the absorption, distribution, metabolism and excretion of a drug. Only then do we get to effectiveness with patients and the three phases of clinical trials, starting with proof-of-concept, dose-finding and safety studies, and moving eventually to RCTs and regulatory proof. There is even a further stage of long-term follow-up and the detection of ‘rare events’. Failure is commonplace at any stage – in which case the hypotheses are refined and the hypothesis-testing cycle is resumed.
Here’s the problem. A) Cochranites have managed to convince themselves (and much of the medical establishment) that only the penultimate stage counts in assessing causality and effectiveness. B) Even those keenly attuned to the full cycle explanation often don’t like to use the language of ‘theory’ testing (they often prefer rather technical or mechanical accounts of each individual stage).
So you are right about the desperate need for good examples of non-counterfactual causal explanations. Finding them in medicine would be the ultimate coup de grâce. Rather late in the day, I’m trying to become an amateur medic, piecing together the full story as above. Any good sources graciously welcomed!
Thanks by the way for the links to Julian and Jane’s excellent work.
RAY
________________________________________
From: Realist and Meta-narrative Evidence Synthesis: Evolving Standards [[log in to unmask]] On Behalf Of Patricia Rogers [[log in to unmask]]
Sent: Tuesday, November 11, 2014 9:36 PM
To: [log in to unmask]
Subject: Example of causal inference in evaluation without a counterfactual
Dear RAMESES colleagues,
After such great suggestions of examples of realist evaluations, I'd like to make a related request.
Can you suggest any good examples of an impact evaluation that uses non-counterfactual causal inference - that is, conducting lots of small tests of whether the data fit the theory of a causal relationship, and ruling out alternative explanations?
While there's good material that discusses these strategies (e.g. Bradford Hill's classic 1965 paper <http://www.edwardtufte.com/tufte/hill> on Edward Tufte's site, Julian King's new e-book <http://www.julianking.co.nz/wp-content/uploads/2014/08/140826-BHC-web.pdf> on the Bradford Hill criteria, and Jane Davidson's webinar <http://betterevaluation.org/events/coffee_break_webinars_2013#webinarPart5> on causal inference), I struggle to find good examples that can be used in a workshop to clearly demonstrate the logic of non-counterfactual causal inference, can be readily understood and are clearly relevant.
The old epidemiological examples of John Snow's investigation of cholera in London and the link between lung cancer and smoking explain the logic, but they are too often dismissed as not relevant because they seek "the cause of an effect, not the effect of a cause" as in an impact evaluation of a known program. (I think this is a specious argument, but that's a hard position to change.)
I'd really appreciate suggestions of good examples I can add to the BetterEvaluation site <http://betterevaluation.org/plan/understandcauses> and share in workshops and presentations.
Patricia Rogers