Thank you

Bayero Diallo

CIHR Applied Research Chair on Health Services and Policies for Chronic Disease in Primary Care
Centre de santé et services sociaux de Chicoutimi (CSSSC)
Telephone: 418-541-1000, ext. 3680

Fax: 418-541-7091


From: Realist and Meta-narrative Evidence Synthesis: Evolving Standards <[log in to unmask]> on behalf of Jagosh, Justin <[log in to unmask]>
Sent: May 24, 2017 12:25
To: [log in to unmask]
Subject: Re: Establishing the counter factual situation in realistic evaluation?
 

Rasmus,

If you have the time and resources after the project is complete, consider publishing a methodology reflection paper in which you detail the insights you've gained from triangulating data from your counterfactual and realist work packages – we need more examples and reflections from the field on this. From my informal conversations around these recent debates, my understanding is that there are loud objections to conflating "successionist" and generative causation, but room to explore how these paradigms may work in tandem, for certain carefully thought-out lines of inquiry, to produce useful knowledge. So I would agree with your idea of keeping the analyses separate, but you may find interesting points of connection as well – and the knowledge outputs from the quasi-experimental element may feed into your CMO configurations, which have been promoted as incorporating qualitative, quantitative and mixed-methods data.

 

Best of luck with your progress!

Justin  

 

Justin Jagosh, Ph.D

Senior Research Fellow

Director, Centre for Advancement in Realist Evaluation and Synthesis (CARES)

University of Liverpool, UK

www.liv.ac.uk/cares

 

From: Rasmus Ravn [mailto:[log in to unmask]]
Sent: May 24, 2017 4:32 AM
To: Jagosh, Justin; Realist and Meta-narrative Evidence Synthesis: Evolving Standards
Subject: Re: Establishing the counter factual situation in realistic evaluation?

 

Dear Sam, Eleanor and Justin.

Thank you all for your great replies. They have given me a lot to think about.

To reply to the questions posed by Sam and Justin, I will have to elaborate a bit on the programme I am evaluating, the methods used, and the requirements set by those who commissioned the evaluation.

I am basically evaluating selected parts of a whole municipality's active labour market policy. The municipality drastically reduced the caseloads of the caseworkers at the employment office and now offers (and requires) more intensive and active labour market measures for unemployed and sick-listed individuals. (The municipality made a major investment to facilitate this.)

As part of the overall evaluation, I am doing a realistic evaluation of why the caseload reduction might improve labour market outcomes for the target groups. Here I use a middle-range theory (working alliance theory), in which it is hypothesized that the caseworker-client relationship is of utmost importance if groups far from the labour market are to obtain employment. The theory states that a "good" relationship fosters motivation, active participation in the active measures, and clarity about the purpose of participating in them.

I mainly use intra-programme variation to compare for whom the caseworker-client relationship matters most. The data used in this particular evaluation include survey data, qualitative interviews with staff and clients, observational studies, and outcome data on self-support drawn from administrative records. My sampling strategy for the qualitative interviews was based on how clients rated the relationship with their caseworker (good as well as poor relationships) in the survey I conducted (my hypothesized mechanism). I compare outcomes based on the "strength" of the mechanism as measured in the survey. All in all, this is a realistic evaluation in its own right.
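
In practical terms, the core of that comparison is simply grouping outcomes by the surveyed strength of the hypothesized mechanism. A minimal sketch of the idea (toy data and made-up column names, not my actual survey variables) could look like this in Python:

```python
import pandas as pd

# Toy data: hypothetical survey ratings of the caseworker-client
# relationship, paired with an illustrative outcome measure.
df = pd.DataFrame({
    "relationship_rating": ["good", "poor", "good", "poor", "good", "poor"],
    "weeks_self_support": [20, 5, 14, 8, 17, 6],
})

# Compare mean outcomes across mechanism-strength groups.
summary = df.groupby("relationship_rating")["weeks_self_support"].agg(["mean", "count"])
print(summary)
```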

I also evaluate some of the active labour market measures the target groups participate in, but this is beside the point here.

Now to the point about the counterfactual situation: the municipality commissioned the evaluation and also requested a more "conventional" impact study of their investment. I am therefore obliged to conduct such a study.

I intend to conduct a quasi-experiment using register data from Statistics Denmark (the data include anonymised information on every adult in Denmark). Using propensity score matching, I aim to statistically construct a control group that is as similar to the programme participants as possible (but living elsewhere in Denmark). The matching process could potentially include hundreds of variables.

It could be argued that this control group receives the "average" active Danish employment measures. Besides establishing a control group based on the whole unemployed and sick-listed Danish population, I will also establish a control group based on the unemployed and sick-listed population in nearby municipalities. The contextual factors would be more similar for the control and participant groups in this instance, because the two groups more or less compete for the same jobs.
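
To make the matching step concrete, here is a minimal sketch of propensity score matching with one-to-one nearest-neighbour matching (toy data and two placeholder covariates only; the real analysis would draw on register variables and include proper balance diagnostics):

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(seed=0)
n = 1000

# Toy data standing in for register variables: 'treated' marks
# programme participants; 'outcome' is an illustrative outcome measure.
df = pd.DataFrame({
    "age": rng.integers(18, 65, n),
    "months_unemployed": rng.integers(0, 60, n),
    "treated": rng.integers(0, 2, n),
    "outcome": rng.normal(size=n),
})
covariates = ["age", "months_unemployed"]

# 1. Estimate propensity scores with a logistic regression.
ps_model = LogisticRegression().fit(df[covariates], df["treated"])
df["pscore"] = ps_model.predict_proba(df[covariates])[:, 1]

treated = df[df["treated"] == 1]
control = df[df["treated"] == 0]

# 2. Match each participant to the nearest control on the propensity score.
nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched_control = control.iloc[idx.ravel()]

# 3. Estimate the average treatment effect on the treated (ATT).
att = treated["outcome"].mean() - matched_control["outcome"].mean()
print(f"ATT estimate: {att:.3f}")
```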

My main concern is whether I should aim for a clear division of labour between the two evaluation approaches. By this I mean that explanations of the effects are left out of the quasi-experimental study, and the insights from each evaluation approach are "kept separate". This would of course entail a successionist view of causality in the quasi-experiment and a generative view of causality in the realistic evaluation(s).

Last but not least, I am very happy to be part of such a helpful community of skilled realist researchers.

I apologize for the very long answer.

Kind regards,

Rasmus Ravn, PhD Student, Aalborg University, Department of Political Science, Denmark
 


From: Jagosh, Justin [[log in to unmask]]
Sent: May 23, 2017 04:16
To: Realist and Meta-narrative Evidence Synthesis: Evolving Standards; Rasmus Ravn
Subject: RE: Establishing the counter factual situation in realistic evaluation?

Dear Rasmus and all,

I think it is worth thinking through the language and terminology around the "counterfactual realist" debate in order to articulate the complexity more clearly.

First, the contention and objection have to do with the conflation of realist methodology with counterfactual logic, and I think the objection is quite valid. Bear in mind that the term 'counterfactual' is used differently in philosophy of science, psychology and other disciplines. If we take a step back and start with a very simple definition, we could say that counterfactual reasoning is about gathering "facts" of one situation, setting or time point and comparing this information or data with "facts" from another situation, setting or time point. The goal of this is to generate explanations about the impact of an intervention or experimental manipulation. The quotation marks around the word 'fact' are meant to remind us that, from a realist inquiry perspective, fact is not a concept quite embraced (in the way theory is). While realism promotes the ontology of a singular reality, it does not assume a taken-for-granted position on easy access to that reality through our knowledge processes (even the "gold standard" processes).

The issue is that the concept of a 'fact' obscures contextual variations and leaves us with the rather 'positivist' impression that we can unearth regularities. We do indeed unearth things that look like 'regularities', but there will always be exceptions - contexts that would change the outcome of that regularity. Tony Lawson has advanced this using the term 'demi-regularity': regular patterns can be detected, but they are always conditional, and there will always be situations (contexts) that can modify the pattern. This gives us reason to examine context on an ongoing basis. The same principle holds true for 'facts', and the realist critique of a "fact" is that it assumes a kind of taken-for-granted knowledge about the truth of things.

Having said that, it is safe to say that humans, and likely other living beings, use counterfactual logic every day for advancement and survival. For example, if I wanted to purchase a new computer, I might gather the facts about one device, gather similar facts about another device, and make a comparison. This comparison may lead to new knowledge, which would be great. But that is counterfactual observation, which is quite different from counterfactual experimentation involving the aggregation of quantitative data - and this distinction has not really been clarified. While it could be argued that all experiments involve counterfactual processes, not all counterfactual processes involve experimentation. In counterfactual experimentation the process is to compare the 'facts' of a control group with the 'facts' of the experimental group, or some other variation - whether through a natural experiment or through randomization. In order to undertake the comparison, the facts must have symmetry and be converted to a numerical dimension, because a statistical formula cannot compare numbers with other kinds of data.

The problem with counterfactual experimentation has been described well in Ray's body of work. What I would add is that it is not the comparison that is the problem per se - rather, it is the reduction of contextual elements to a numerical dimension, which creates a 'flat' ontological view of these elements - what I would call an 'artificially prescribed stability of concept'. This means that these elements have to be shaped along certain dimensions for the quantification and comparison to take place. Maybe that works, and maybe it doesn't. But consider concepts like "well-being", "resiliency" or "empowerment". Perhaps the end product of a counterfactual experiment for such outcomes of interest has value, but in principle this still does not sound reasonable for complex interventions involving core mechanisms of a socially contingent nature. This is because through quantification we lose the opportunity to theorize resources, responses, and outcomes in their contexts - with all their complex contradictions intact - using the idea of ontological depth to guide our understanding.

Qualitative cross-case comparison can look like comparing the facts of one case with the facts of another, and this is not objectionable from a realist perspective, because there the comparison happens in our interpretive process and, as Ray has suggested, the realist paradigm is aligned with a constructivist epistemology. It is not comparison that is the contention.

Rather, in counterfactual experimentation the comparison requires homogenization and aggregation of data bits, which are compared not through interpretation but through a statistical process. To the extent that complex concepts and elements are reduced to variables, such reduction likely precludes the kind of theorizing that more accurately (deeply?) aligns these concepts with the reality we are seeking to understand - ultimately to support insight and solution-building. Perhaps RCT-style counterfactual experimentation should be called 'numerical aggregate comparison', and the kind of counterfactual observation that would be used in a realist project 'counter-theoretical comparison'.

Thanks for the opportunity to stir the pot. It will be great to see other ideas emerge.

Sincerely,
Justin

Justin Jagosh, Ph.D
Honorary Research Associate
Institute of Psychology, Health and Society
University of Liverpool, United Kingdom
www.liv.ac.uk/cares

Centre for Advancement in Realist Evaluation and Synthesis (CARES)
www.realistmethodology-cares.org



From: Realist and Meta-narrative Evidence Synthesis: Evolving Standards [[log in to unmask]] on behalf of Rasmus Ravn [[log in to unmask]]
Sent: May 22, 2017 1:59
To: [log in to unmask]
Subject: Establishing the counter factual situation in realistic evaluation?

Dear all.

 

For some time I have been reflecting on a rather simple question: Is it feasible to establish the counterfactual situation as part of a realistic evaluation?

 

To elaborate, my question concerns whether it is in accordance with the principles of realist evaluation (primarily generative causality) to use control groups (established either by randomization or through statistical matching).

 

Reading through the realist literature, my own impression is that factuals are compared in realistic evaluations (through inter- and intra-programme variation) and that the counterfactual situation is not established.

 

I am aware of the discussion that followed the paper by Jamal et al. (2015), "The three stages of building and testing mid-level theories in a realist RCT: a theoretical and methodological case-example".

 

There are of course differences of opinion, but I cannot help but wonder whether the critique put forward against the "realist RCT" would also extend to any type of evaluation that tries to establish the counterfactual situation.

 

My initial thought on the subject is that the critique would apply to every type of evaluation that establishes the counterfactual situation, because these evaluation approaches try to "imitate" the RCT.

 

One of the arguments against using the counterfactual situation in realistic evaluation could be that one cannot be randomly or statistically assigned to receive a mechanism.

 

I am hoping some of you might enlighten me.

 

Kind regards,

 

Rasmus Ravn, PhD Student, Aalborg University, Denmark