Hi Gill,

Thanks for your interest in our case summaries and what our themes/mechanisms end up looking like. 

It's really not easy to discuss themes and mechanisms casually, as each word has to be carefully considered... I'm convinced that this is the nature of the beast!  We are in the process of finalizing a manuscript that describes our methodology for our case studies (including how we develop initial "case summaries" and turn them into "case reports"), and it also includes a rich example of some mechanisms related to a theme.  I'd rather wait to share that (hopefully in the not-so-distant future) than give you a "thin" answer here.

Ketan.


On Mon, Dec 17, 2012 at 9:42 PM, Gill Westhorp <[log in to unmask]> wrote:

Hi all

My turn to give thanks – this time, thanks to Ketan and Geoff for their replies below.  I’ve just had a preliminary conversation with one of the team here about this, and I’m going to propose that we adapt our process in response to this input.  We’ve got a meeting with our NVivo trainer in a few minutes – if she comes up with ‘gold’ I’ll share it. :-)

 

Ketan – I had two questions arising from your response.  Firstly, did you have a particular format for your case summary document?  If so, and if you’re able to share it, that would be fantastic.  Secondly, I’m curious about the notion of ‘themes’ for CMOs.  Are you able to provide an example of a theme and an associated mechanism or two?

 

Thought:  a paper by a few of us, a bit later down the track, on the advantages and pitfalls of various bits of software (not just NVivo) for various functions in realist synthesis would probably be really useful…

 

Very best to all

Gill

 

 

From: Realist and Meta-narrative Evidence Synthesis: Evolving Standards [mailto:[log in to unmask]] On Behalf Of Ketan Shankardass
Sent: Tuesday, 18 December 2012 7:14 AM
To: [log in to unmask]
Subject: Re: Software for reviews

 

Hi all -

First off, I'd like to publicly thank Simon and Raymond for their generous advice/resources in responding to a request on this list during the summer for strategies to CONCISELY describe a realist approach in a funding proposal. We feel that our proposal was greatly improved following your contributions!

Our team wanted to share our experience using NVivo for our realist analysis.  We concur with much of what Geoff mentioned before.  However, we are doing a multiple explanatory case study rather than a review, so our experience is somewhat distinct.

In brief, we are doing explanatory case studies of how and why progress is made in implementing complex Health in All Policies initiatives that rely on intersectoral action by governments.  We started by having analysts use NVivo to screen our data in two ways: coding passages in interview data and literature where "barriers" and "facilitators" to implementation were being discussed (i.e., where potential mechanisms might be at play), and coding each such passage for specific contexts, mechanisms, or outcomes individually. These codes started as a list of specific (and sometimes broad) possible Cs, Ms, and Os that we identified based on our initial understanding of the cases and the broader "quintain", but it was an "open" list that grew as we identified new specific components.  We felt that this approach would make the construction of CMOs more straightforward in team meetings later on.
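(For anyone who likes to see this concretely, here is a minimal, purely hypothetical sketch in Python of the kind of open coding scheme I mean. The passage structure, helper function and example codes are all invented for illustration; the actual coding of course happened in NVivo.)

from dataclasses import dataclass, field

@dataclass
class Passage:
    source: str                 # e.g., "interview 3" or a literature excerpt
    text: str
    flag: str                   # "barrier" or "facilitator"
    contexts: list = field(default_factory=list)    # candidate C codes
    mechanisms: list = field(default_factory=list)  # candidate M codes
    outcomes: list = field(default_factory=list)    # candidate O codes

# The code list starts from our initial understanding of the cases and quintain...
code_list = {
    "C": {"political support"},
    "M": {"trust between sectors"},
    "O": {"policy adopted"},
}

def apply_code(passage, kind, code):
    """Apply a C, M or O code to a passage; the 'open' list grows as needed."""
    code_list[kind].add(code)   # ...and grows as new specific components appear
    bucket = {"C": passage.contexts, "M": passage.mechanisms, "O": passage.outcomes}[kind]
    bucket.append(code)

p = Passage(source="interview 3", text="...", flag="facilitator")
apply_code(p, "M", "shared accountability")   # a newly identified mechanism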

In practice, this approach to coding wasn’t as useful as we expected.  It was easy enough (although time-consuming) for our analysts to identify Cs, Ms, and Os at the screening stage, but often an individual analyst couldn’t discern all three components for a given passage (i.e., the whole CMO); what was really needed was group work to analyze passages and 'restate' their text as CMOs. In the end, the best value for money (or time, I guess) came from flagging passages of interest (barriers and facilitators) and reviewing/discussing them as a group to construct CMOs.

One exception is that we continue to use the software to highlight specific aspects of the CONTEXT mentioned in our data that aren’t in our preliminary “case summary” (a document we use to familiarize ourselves with each case before constructing CMOs), and to compile those passages back into a revised version of the case summary for future reference.
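(Again, a toy sketch with invented names of the bookkeeping involved: gather the context codes found in the data, subtract those the preliminary case summary already covers, and earmark the rest for the revised summary.)

# Context aspects already described in the preliminary case summary:
summary_contexts = {"minority government", "recent departmental reorganisation"}

# Context codes applied to passages in the data (code -> passages mentioning it):
coded_contexts = {
    "minority government": ["passage 12"],
    "budget freeze": ["passage 7", "passage 31"],
}

# Anything coded in the data but missing from the summary gets compiled back in:
missing = {c: p for c, p in coded_contexts.items() if c not in summary_contexts}
for context, passages in sorted(missing.items()):
    print(f"Add to revised case summary: {context} (see {', '.join(passages)})")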

Finally, after the CMOs are constructed, we make links across them by iteratively sorting and summarizing them into distinct themes.  We do this by first labeling each individual CMO with a theme, and then looking for CMOs with similar themes so that we can summarize the relevant CMOs with attention to the various contexts and outcomes of relevance.  This occurs both within and across data sources, eventually resulting in narrative summaries of mechanisms for progress in implementation within each case (usually spanning several themes).
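(One more hypothetical sketch of that sorting step, with invented themes and cases: label each constructed CMO with a theme, group by theme, and then summarize each group with attention to its contexts and outcomes, within and across cases.)

from collections import defaultdict

# Each constructed CMO is labelled with a theme (all values invented here):
cmos = [
    {"case": "A", "theme": "leadership", "C": "...", "M": "...", "O": "..."},
    {"case": "A", "theme": "resourcing", "C": "...", "M": "...", "O": "..."},
    {"case": "B", "theme": "leadership", "C": "...", "M": "...", "O": "..."},
]

# Group CMOs with similar themes so they can be summarised together:
by_theme = defaultdict(list)
for cmo in cmos:
    by_theme[cmo["theme"]].append(cmo)

# Each group is then summarised, within and across data sources/cases:
for theme, group in sorted(by_theme.items()):
    cases = sorted({c["case"] for c in group})
    print(f"{theme}: {len(group)} CMOs, spanning cases {cases}")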

As Geoff suggested, the process of finding the right method for our purposes occurred in the course of one very long, intense pilot case study, but the whole team feels much more liberated having found a workable method for our other cases!

Please let me know if this description was too vague or if you're curious about other details.

Ketan.

On Sat, Dec 15, 2012 at 6:26 AM, Geoff Wong <[log in to unmask]> wrote:

Hi Gill,

 

A good question, and right up front I have to admit that I have not used NVivo with such a large team doing independent data extraction and analysis, so I can't provide any helpful advice on how best to do this with NVivo. However, I have read that NVivo 9 is meant to be more 'user friendly' for collaboration across teams. I'm sure you are going to do this anyway, but it would be worth piloting and ironing out any idiosyncrasies NVivo may have in 'collaboration mode' before doing the real thing.

 

As you may (or may not) recall, I have only ever:

1) used a fraction of the functionality of NVivo ... so some of the stuff you are asking about is way above me!

2) used it mainly as a tagging / filing system

 

As such, its use has been more as a support tool... and (sorry to repeat this cliché, but) fancy software is no substitute for repeated, detailed discussion, debate and analysis within the review team. For what it is worth:

 

a) I have always tried to get some idea of what data I need to extract first. I have done this by developing a programme theory of varying sophistication, and have then used the programme theory to guide what data I need in order to test it.

b) I tend to make up a bunch of free nodes which support, refute, or refine the various components of a programme theory. I don't initially break these down into C, M or O, but do later on if necessary. So I guess my point is: start without a tree structure and then reorganise later, either by using a tree structure or by using sets? (A tiny sketch of what I mean follows below.)
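(Purely illustrative, with invented node names: start with a flat bag of free nodes, then reorganise later, either into a tree or into sets.)

# Start flat: free nodes that support / refute / refine the programme theory.
free_nodes = ["supports: GP buy-in", "refutes: GP buy-in", "refines: training dose"]

# Later, if necessary, reorganise, e.g. into a C/M/O tree...
tree = {"C": [], "M": ["refines: training dose"], "O": []}

# ...or into overlapping sets, which don't force a single hierarchy:
sets = {
    "theory component: GP buy-in": {"supports: GP buy-in", "refutes: GP buy-in"},
}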

c) I found that 'piloting' was very helpful. Once I had a small set of seemingly relevant papers, I would read them, make up some codes (free nodes) and then check whether they captured the relevant data within these initial papers, adding or amending nodes as needed. I guess you could do this process as a team: come up with an initial set of free nodes which everyone will use, but still allow each researcher to create additional nodes. In team meetings you could then discuss the value of the agreed set of nodes AND also the value of any new 'individual' nodes. These new 'individual' nodes could then be included (or not) in the agreed set of common nodes for all to use... and the process goes on. (A toy sketch of this loop follows below.)

A process of iterative and gradual refinement and re-organisation of the nodes.

The key here is to then go back and recode the documents using any nodes you have added (a laborious but important step).
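(Here's that toy sketch: hypothetical Python, with invented node names; in practice this bookkeeping all lives inside NVivo.)

# The agreed set of free nodes everyone codes against:
agreed_nodes = {"supports theory", "refutes theory", "refines theory"}

# New 'individual' nodes each researcher created along the way:
individual_nodes = {
    "researcher 1": {"funding pressure"},
    "researcher 2": {"funding pressure", "staff turnover"},
}

# At the team meeting, some individual nodes are promoted into the common set:
candidates = set().union(*individual_nodes.values())
promoted = candidates                 # suppose the team agrees to adopt them all
newly_added = promoted - agreed_nodes
agreed_nodes |= newly_added

# The key (laborious) step: recode earlier documents against the new nodes.
for node in sorted(newly_added):
    print(f"Recode earlier documents using node: {node!r}")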

 

Hope this helps. Any thoughts from anyone out there who has also used NVivo or similar software would be welcome too, as it would be nice to have some idea of how we all operationalise this aspect of realist reviews.

 

Geoff

 

 

On 14 December 2012 23:14, Gill Westhorp <[log in to unmask]> wrote:

Hi all
There was a brief discussion a year or so ago about using software to assist with analysis (or more precisely, coding and sorting material ready for analysis). My team are currently struggling with the question: What's the best way to set up coding in NVivo 9 to support a realist analysis?

Situation: The question we're attempting to answer is relatively broad, looking across multiple kinds of interventions in the international development arena with a correspondingly diverse literature, and we do not have a particularly detailed initial theory.  There is a relatively large group of analysts (6 people), some working remotely on different copies of NVivo (so if we want to merge copies later, the node structure has to be identical across all copies).  Every document has to be analysed by 2 team members.

The main question we're grappling with is: What's the most efficient way to draw links between C, M and O, both within and across texts?  Subsidiary questions: What level of detail should be pre-established in the coding guide?  Is it better to have fairly broad codes or quite detailed ones?  Is it better to use classifications and nodes (and therefore be able to use matrix searches), or nodes with see-also links?  Or just nodes and annotations?

If anyone has suggestions or experiences we'd be delighted to hear about them.

Best wishes of the season to all
Gill