Hi Larraine

Sounds like a really interesting project.  Evaluation of prevention is always interesting in itself (how do you assess or measure what didn’t happen, as well as what caused it not to happen?).  The form of the question – ‘how and why do some families become homeless while others in apparently similar situations do not’ – is ideal for a realist investigation.

 

Realist review of existing literature is, as I think you’re implying, exactly the right approach for developing your initial theory.  Your candidate theories will need to be expressed quite clearly in relation to the particular issue (there are of course many forms of empowerment theory, for example – which one are you using, and exactly how do you think it plays out in relation to homelessness prevention, and so on).

 

In terms of ‘pairing’ quasi-experimental and realist designs: it’s a contested topic, of course, but I think it’s possible.  I’m currently involved in two projects that are trying to do exactly that – one just winding up, the other about to start.  Andrew Hawkins has written on this as well – I think it was published in the special issue of Evaluation on realist methods.

 

Experimental and realist designs do of course have fundamentally different underlying assumptions, particularly about causality, but also (very importantly) about whether or not ‘control’/non-intervention groups actually show what their underlying theory says they do (read Pawson on this, and the series of ‘debate’ articles about ‘realist experimental designs’).  They also serve different functions.  The experimental/quasi-experimental design, if really done well, might tell you about effect size (although there will always be people, including me from some perspectives, who will debate that!).  It does not tell you about causation.  The realist design tells you about causation in context, but also uses intra-group comparisons to tell you about outcomes (remembering here that one cannot identify a mechanism without having identified its outcome – it ‘is’ a mechanism because it generates the outcome).

 

The important question then becomes ‘should one, and how does one, integrate the two designs?’.  The simple version is to run the two designs alongside each other, using the experimental/quasi-experimental design to tell you ‘that’ there is an effect and roughly what its magnitude is, and the realist design to tell you how and why.  The primary weaknesses of this are that:

  1. the quasi-experimental design may give you an ‘average effect’ but, unless it has been designed for it, will not tell you whether the impacts vary by sub-group as predicted – and without that, you don’t have the ‘equivalence’ that integrating the designs is supposed to provide (a rough sketch of the kind of sub-group check I mean follows this list);
  2. the realist design, if only used with an intervention group, tells you ‘how and why’ for the intervention group, but doesn’t tell you ‘how and why’ the outcomes did or didn’t vary for the non-intervention group.  There are of course lots of reasons why they might vary (including, but not limited to, spill-over effects from the intervention);
  3. the implication is that one should also be investigating ‘how and why’ in the non-intervention group, but this requires at least a modified version of Pawson and Tilley’s concept of ‘mechanism’ – which requires identifying which resources provided by the program the participants ‘reasoned in response to’, how they reasoned, and so on.  (The non-intervention group presumably did not get these resources, but that is of course one of the things one needs to check.)
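
Purely to illustrate point 1: a minimal, hypothetical sketch of how a theory-predicted sub-group effect might be checked, rather than relying only on an average effect.  The variable names (‘prevented’, ‘treated’, ‘high_risk’) and the data file are my own assumptions for illustration, not part of any actual design.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("households.csv")  # hypothetical household-level dataset

# Average effect only: treated vs comparison households
avg = smf.logit("prevented ~ treated", data=df).fit()

# Theory-driven sub-group check: does the effect differ for the sub-group
# the programme theory says it should (here, 'high_risk' households)?
subgroup = smf.logit("prevented ~ treated * high_risk", data=df).fit()
print(subgroup.summary())  # the treated:high_risk interaction carries the predicted variation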

 

So I argue that to integrate the two properly, you need to develop your theory in advance and then collect CMO data from both groups – albeit the ‘M’ might be different.  The more complicated (many parts) and complex (including emergent effects) the intervention, the harder it is to integrate the two designs.
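
If it helps to picture the kind of data collection I mean, here is a minimal, hypothetical sketch of how CMO observations might be recorded for both groups, allowing the mechanism to differ in the comparison group.  The field names and example values are my own illustrations, not a prescribed realist data standard.

from dataclasses import dataclass

@dataclass
class CMORecord:
    group: str      # "intervention" or "comparison"
    context: str    # e.g. "rent arrears, family support available"
    mechanism: str  # programme mechanism, or a non-programme mechanism for the comparison group
    outcome: str    # e.g. "tenancy sustained at 6 months"

records = [
    CMORecord("intervention", "rent arrears, no informal support",
              "repayment plan perceived as achievable", "tenancy sustained"),
    CMORecord("comparison", "rent arrears, family support available",
              "family absorbed the shortfall", "tenancy sustained"),
]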

 

All this said – your design of course depends on whether or not you are investigating an intervention.  If there is no intervention, you can still investigate the question (how come some do and some don’t end up homeless), but you can’t use a quasi-experimental design if there’s no intervention to ‘control’ for.  You can still use a realist design, but you again need a modified version of ‘mechanism’ if it’s not a ‘program mechanism’ you’re investigating.

 

So I’d ask the question: even if it CAN be done – is it WORTH doing it that way?  Should the realist design be undertaken first, to find out whether, how and why particular interventions (or aspects of interventions) work, for whom, and so on – and then have quasi-experimental design elements added later, when you know what you’re looking for?  This, as powerfully argued by Pawson at the last CARES conference (see his presentation on the website if you weren’t there), is what happens in medicine; it’s also closer to the way that engineering works, as Tilley argued at the previous CARES conference (as he rightly points out, they don’t use experimental designs to test parachutes).

 

Glad you found the realist action research approach useful.  I have some theoretical work on social inclusion and exclusion, also from my thesis (which is where the realist action research work comes from), which might be relevant to your study; I’d be happy to share it off-line if that would be of value.

 

Cheers

Gill

 

From: Realist and Meta-narrative Evidence Synthesis: Evolving Standards [mailto:[log in to unmask]] On Behalf Of larraine
Sent: Saturday, 18 March 2017 1:45 AM
To: [log in to unmask]
Subject: homelessness prevention study?

 

Hi everyone, Justin at CARES recommended the group, so I thought I would post some details of an upcoming study that may be of interest. My background and day job is in policy consultancy, but I am also a student at London University (UCL EOI). Job-wise, I am currently supporting a council just outside London with the development of its homelessness prevention work. I have encouraged them to commission a realist study because, in common with most councils, they struggle to address homelessness in a strategic, coherent way. It seems that this is largely because there is little understanding of the causal mechanisms and contextual factors associated with homelessness prevention outcomes and their variation, which makes it difficult to establish the effectiveness, validity and reliability of prevention activity. While prevention has been widely adopted within homelessness services, it suffers from an absence of work to understand the causal mechanisms for homelessness per se at the micro, meso and macro contextual levels, and there is a pressing local need to address questions such as “why do some households who are economically disadvantaged become homeless while others do not?”. It seems likely, therefore, that the study will need to adopt a balanced approach: understanding the causal mechanisms by which homelessness is produced and relieved (including through temporary accommodation), and understanding the mechanisms associated with its prevention and the effectiveness of the homelessness prevention team.

 

My thinking in encouraging a realist study is informed by the view of Bernadette Pauly and colleagues (2014) that theory-driven approaches to evaluation are well suited to understanding the effectiveness of homelessness interventions. A realist study of homelessness prevention might also support the development of realist work in social welfare advice; this currently includes a study of the role of the advice sector in enabling improved health, but not, I believe, homelessness outcomes (Forster et al 2016). I found the social welfare study that Gill Westhorp and colleagues (2016) undertook using realist action research really valuable and helpful, and the council are keen to incorporate realist action research to support the development of the homelessness prevention team. It also seems to me that a realist study of homelessness prevention might advance understanding of the interconnected policy priority areas of housing, welfare and work, and support the development of realist public health work in general.

 

It seems likely that some synthesis will be needed to identify the patterns of context and outcomes in the literature on homelessness prevention, explain the mechanisms through which outcomes occur, and then develop an explanatory framework. Possible candidate theories to test include solution-focussed theory, the theory of self-efficacy/empowerment, the theory of self-sufficiency, behavioural theories of choice, the theory of motivational interviewing, the theory of patient activation and the theory of reflective practice. Method-wise, would a quasi-experimental design compatible with realist methodology, such as Interrupted Time Series (ITS) observations, also be possible (roughly along the lines sketched below)?
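
By way of illustration, the kind of ITS analysis I have in mind would be roughly a segmented regression along these lines – a minimal sketch only, with hypothetical column names and an assumed intervention start month, not the actual study specification.

import pandas as pd
import statsmodels.formula.api as smf

ts = pd.read_csv("monthly_presentations.csv")       # hypothetical: one row per month
ts["time"] = range(len(ts))                          # months since the start of the series
ts["post"] = (ts["time"] >= 24).astype(int)          # 1 once prevention work starts (assumed month 24)
ts["time_since"] = (ts["time"] - 24).clip(lower=0)   # months since the prevention work began

# 'post' estimates the level change and 'time_since' the slope change after the intervention
model = smf.ols("presentations ~ time + post + time_since", data=ts).fit()
print(model.summary())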

 

Anyway, at the risk of too long a post, any initial comments, thoughts and expressions of interest would be very welcome. This will be a 2-year study starting in the next financial year. DCLG have provided some initial funding: although this is a local evaluation, the council is a DCLG ‘trailblazer’ homelessness prevention site and they will be interested in generalisable knowledge.

 

Thanks

Best wishes

Larraine

 

Forster N, Dalkin SM, Lhussier M, Hodgson P, Carr SM, et al (2016) Exposing the impact of Citizens Advice Bureau services on health: a realist evaluation protocol. BMJ Open 6: e009887. doi:10.1136/bmjopen-2015-009887

Pauly B, Wallace B, Perkin K (2014) Approaches to evaluation of homelessness interventions. Housing, Care and Support 17(4): 177–187.

Westhorp G, Stevens K, Rogers P (2016) Using realist action research for service redesign. Evaluation 22(3): 361–379.