Not sure if the mushroom does it for me. But I certainly agree that
meta-narrative review is good for the inchoate early phase. However, I've
seen a meta-narrative(ish) review which was otherwise excellent but which I
felt missed a major seam of literature because it didn't broaden out enough
early in the process. I think the mistake these people made was not to
browse around before homing in on the research traditions. See attached and
my commentary (also attached).
Trisha Greenhalgh
Professor of Primary Health Care and Director, Healthcare Innovation and
Policy Unit
Centre for Primary Care and Public Health
Blizard Institute
Barts and The London School of Medicine and Dentistry
Yvonne Carter Building
58 Turner Street
London E1 2AB
t : 020 7882 7325 (PA) or 7326 (dir line)
f : 020 7882 2552
e: [log in to unmask]
-----Original Message-----
From: Realist and Meta-narrative Evidence Synthesis: Evolving Standards
[mailto:[log in to unmask]] On Behalf Of Andrew Booth
Sent: 05 October 2011 11:14
To: [log in to unmask]
Subject: Re: Using analysis techniques from different synthesis methods
Joanne's discussion of the prequel stage to the actual realist
synthesis reminded me of a discussion I had earlier this week with
colleagues in Exeter.
Given that there is a general mushroom shape to realist reviews, i.e.
you go broad first (the cap) - Joanne mentions scoping searches - and
then you go specific (the stalk) - the realist review - I wonder if
there is a role for fusing together formal methods for both stages. (I
am not alone in worrying that the candidate theories sometimes seem to
be magicked into the picture and then everything becomes more systematic
from then onwards).
We had initially thought (in our community engagement project here at
ScHARR) of:
Concept Analysis e.g. Walker & Avant (cap) -> Realist Synthesis (stalk)
We are now thinking of possibly:
Meta-Narrative (cap) -> Realist Synthesis (stalk)
Key to this choice would be whether candidate theories are located
within a single discipline (suggesting traditional concept analysis) or
within multiple disciplines/traditions/narratives (suggesting innovative
meta-narrative methods). In other words, the issue is theoretical heterogeneity.
Whether the "instrument" is scoping review, mapping review (as described
by Monika), concept analysis, meta-narrative or prestidigitation it
seems we would all agree that there needs to be some process for
sensitisation to the literature before the realist synthesis can begin
for real?
Forgive me if the mushroom analogy doesn't work for you - I guess I am
just a fungi!
Andrew
On 05/10/2011 10:36, Joanne Greenhalgh wrote:
> Hi Monika
>
> Well - it's the first time I've posted on here too - so that makes two of
us!
>
> I'm interested in your area and the issue of guideline implementation is a
thorny one - there is a vast amount of literature out there on this, as you
have found! Gill's points were very helpful I think in answering your
specific questions, but I suppose I wanted to make a broader point about
realist synthesis in this area. Again - happy for others to jump in and
correct me if they disagree.
>
> For me, what lies at the heart of realist synthesis (and realist
evaluation for that matter) is the development and refinement of theory.
So, it's one thing to build up a catalogue of potential contexts, mechanisms
and outcomes of guideline implementation, but the trick with realist
syntheses is to make connections between them to produce a theory of how
guideline implementation works, which can then be tested and refined
throughout the synthesis. This may well produce a number of alternative
theories. The question becomes: in which contexts do certain mechanisms fire
to produce which outcomes? Rather than: here's a list of mechanisms, here's a
list of contexts, here's a list of outcomes. So -
Gill's question about your conceptualisation of guideline attributes is
relevant here - are these part of the mechanism (i.e. people's perceptions of
the guidelines) or part of the context (e.g. resources/infrastructure
available to support them)? My point, I think, is that you make these
connections early on in the review, rather than trying to piece them
together later.
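[Editor's illustration: Joanne's point about connecting contexts, mechanisms and outcomes, rather than merely listing them, can be pictured with a toy sketch. Everything below is invented purely for illustration - the example contexts, mechanisms and outcomes are not drawn from any actual review - and it simply contrasts three disconnected lists with linked CMO configurations:]

```python
# Toy illustration only: all example content below is invented, not taken
# from any real guideline-implementation review.
from dataclasses import dataclass

@dataclass
class CMO:
    context: str    # in which circumstances...
    mechanism: str  # ...does this reasoning/response fire...
    outcome: str    # ...to produce this outcome?

# Three unconnected lists say little about *how* implementation works:
contexts = ["strong clinical leadership", "severe time pressure"]
mechanisms = ["guideline seen as credible", "guideline seen as imposed"]
outcomes = ["recommendations followed", "recommendations ignored"]

# A candidate theory links them into configurations that can then be
# tested and refined against the literature:
candidate_theory = [
    CMO("strong clinical leadership", "guideline seen as credible",
        "recommendations followed"),
    CMO("severe time pressure", "guideline seen as imposed",
        "recommendations ignored"),
]
```

[The point of the sketch is only that a theory is a set of connections, each of which can be confirmed, refuted or refined, whereas the three lists on their own cannot be.]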
>
> In this way, the key to deciding whether an article merits inclusion in the
review is whether it helps you to define or refine your theory. Ideally,
searches within realist synthesis are iterative, as
you develop theory and then seek out specific pieces of evidence to refine
that theory. There's an initial search to help develop the theory and then
a series of other iterative searches to refine and test the theory. In other
words, it's a lot more messy than a Cochrane style review where you do one
big search (ok, maybe a scoping search to begin with) and use a set of
incl/excl criteria to define what is included and what is excluded (though
having also done a Cochrane type review - these are actually a lot more
messy than people make out!). So again, here, it's about having the theory,
or series of theories, albeit fledgling ones that may be lacking in some
respects, at the outset of the review (well, following your initial searches
for theory).
>
> So - I guess my advice would be - let your theory decide what to include.
>
> Finally (!) A pertinent issue with guideline implementation is - what is
the outcome? Is it simply use of the guidelines - or is it changes to care
and practice as a result of guidelines use? Guideline implementation is a
great example of a programme theory with a long implementation chain -
realist synthesis can be really helpful in exposing the assumptions
underlying this implementation chain and assessing their integrity - does it
really work like that in practice?
>
> You may well have this in your database already - but an interesting paper
in this area is:
> Rycroft-Malone et al (2010) A realist evaluation: the case of protocol
based care, Implementation Science, 5:38. I'm not holding it up as a gold
standard - it has many good features, including some candidate theories, but I
think table 4 reads a little bit like a list of contexts, outcomes and
mechanisms,
without a theory to connect them. That said, I think there's some very
useful analysis in here.
>
> Ok - I'll stop now, apologies for the long e-mail, hope it was helpful.
>
> Best wishes
> Joanne
>
> Dr Joanne Greenhalgh
> Principal Research Fellow
> School of Sociology and Social Policy
> University of Leeds
> Leeds LS2 9UT
> 0113 343 1359
>
>
>
>
>
> -----Original Message-----
> From: Realist and Meta-narrative Evidence Synthesis: Evolving Standards
[mailto:[log in to unmask]] On Behalf Of Monika Kastner
> Sent: 03 October 2011 18:40
> To: [log in to unmask]
> Subject: Using analysis techniques from different synthesis methods
>
> Hi everyone,
>
> I'm a postdoc fellow at the Li Ka Shing Knowledge Institute of St.
Michael's Hospital (University of Toronto), working with a group of
researchers who are part of a collective called Knowledge Translation (KT)
Canada. I have recently joined the RAMESES listserv and have read with great
interest the many posts related to realist review/meta-narrative review
methods. I wanted to express a big thank you for providing such a great
learning forum for our group! We have also embarked on a realist review, and
would now like to jump in and describe what we are doing to hopefully
advance knowledge around conducting realist reviews, and also to ask a few
questions related to our work.
>
> Briefly, we are investigating the concept of guideline implementability by
identifying the perceived characteristics of guidelines that affect uptake
of recommendations, and then figuring out what works for whom in what
circumstances and why. We have completed an iterative, multi-level search
strategy (which was an incredibly arduous and lengthy process) and are now
gleefully looking at our data. My questions to the group are related to our
synthesis methods, which are slightly unconventional, as we decided to extend
the realist review analysis with techniques borrowed from other approaches,
such as those used in qualitative research. We did this
because we felt that the realist review could sort out our underlying theory
(ie, what works for whom and in what circumstances) but may not work so well
for interpreting specific attributes of guideline recommendations that may
facilitate uptake, and to build a framework of guideline implementability.
We searched for other potential methods that may help us do this and felt
that techniques from meta-ethnography (e.g., reciprocal translation analysis)
could help us generate a complete list of unique attributes and their
definitions, and then use both an integrative and interpretive approach (to
come up with first, second and third order interpretations) to reveal the
relationships between guideline attributes and their uptake. At this point
we used these techniques to classify ~700 unique guideline attributes (from
215 articles) into 28 categories and 6 major categories (this was done in
duplicate among 2 groups of researchers within KT Canada). Our next steps
are to develop a codebook of definitions for the
attributes/subcategories/categories, which we believe will help us reveal
relationships within/between categories to provide a better understanding of
guideline implementability and the tradeoffs between attributes.
>
> Our specific questions are:
>
> 1. What does everyone think about the idea of borrowing other
techniques to supplement analysis of a realist review (or other synthesis
methods)?
>
> 2. At one point, we decided to stop data extraction because we felt
that we were not finding anything new to add to our understanding. However,
we still have about 200 articles that were identified as potentially
relevant (by experts and other search strategies) but feel that the
potential for these articles to add anything new did not outweigh the effort
to review them. Thus, we came up with the idea of developing a codebook of
definitions (which we were going to do at the conclusion of data extraction
anyway) and using this as a way of testing saturation for the remainder of
these potentially relevant articles. We thought that this technique would take
less effort while retaining the potential to identify anything new... plus this
process fits nicely with the idea that realist reviews are not supposed to
be "exhaustive" - is this a legitimate technique for deciding saturation,
etc? Would operationalization of this saturation process help others doing
such reviews?
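[Editor's illustration: the codebook-as-saturation-test idea in question 2 can be sketched as a simple procedure. This is a hypothetical reading of the proposal, not KT Canada's actual method: the function name, the representation of each article as a set of assigned codes, and the consecutive-article stopping threshold are all assumptions.]

```python
# Hypothetical sketch of testing saturation against an existing codebook.
# Each article is represented as the set of attribute codes assigned to it;
# the stopping threshold is an invented parameter, not part of the proposal.
def saturation_reached(article_codes, codebook, threshold=5):
    """Return True once `threshold` consecutive articles contribute no code
    that is absent from the (growing) codebook."""
    book = set(codebook)
    streak = 0  # consecutive articles that added nothing new
    for codes in article_codes:
        new = codes - book
        if new:
            book |= new  # extend the codebook with the new codes
            streak = 0
        else:
            streak += 1
            if streak >= threshold:
                return True
    return False
```

[On this reading, the remaining ~200 articles would be screened in sequence against the codebook and extraction stopped once a run of articles yields nothing outside it - which is one way of operationalising "not finding anything new".]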
>
> 3. We are thinking that developing a framework of
categories/subcategories of guideline attributes (and the codebook of
definitions) may become "live" documents that other researchers interested
in this area could use as a repository of information to help answer other
research questions, and to allow them to add to it as the field expands -
what do you think of this?
>
> Your thoughts on these issues are much appreciated (and apologies for the
long message), and thanks again for creating this forum so that we can
collectively advance this knowledge as these new synthesis methods evolve.
>
> Monika