
This thread could be boiled down to two important questions:

1) In a realist synthesis (RS) should reviewers infer/make assumptions/interpret beyond what is reported in the included studies?

2) If we do make these 'leaps' how do we know these are 'true'?


GOING 'BEYOND' THE REPORTED DATA
There was agreement that this was in fact almost a requirement of RS. One argument was that there would rarely ever be enough data to banish all uncertainties, and so staying too close to the data would result in a RS ending with the clichéd phrase 'more research is needed'.
One strength of RS is that it specifically requires this leap to be made - for example, in working out what the mechanism might be that is generating the outcome of interest. Such leaps were seen as the value that RS adds.
Reviewers are in a good position to make such leaps because they are immersed in the literature on the topic and have the advantage of being able to look beyond a single topic and/or across studies, with "critical distance". The key is to be explicit and explain that inferences/assumptions/extrapolations/interpretations are being made.

THE 'TRUTH'
If you are a realist you would not expect to ever get to the 'truth' but you might expect to get closer and closer :-)
There are many challenges associated with making inferences/assumptions/extrapolations/interpretations:
How do you or others know you haven't just "hijacked" the data for your own ends?
How do you know if your 'leap' is 'true'?
etc.
These questions raise issues about 'quality' and 'rigour' and so on. As a secondary researcher (unlike in primary research such as realist evaluation), you can't go back and ask participants what they think about your leaps. However, you can be TRANSPARENT about what you did and why. This should allow others to see for themselves that your 'leap' was COHERENT and PLAUSIBLE. As one contributor put it, "... this is what I think is going on, and this is the way I came to that decision...". Briefly, any judgement of coherence and plausibility would rest on how well your explanation fits in with not only what we already know, but also with the reported data in the included studies.
Transparency might involve reporting relevant detail and also processes - such as how the searching was designed to get the 'right' kind of data, that the review team was reflexive, etc.
Others can then judge for themselves the coherence and plausibility of your inferences/assumptions/extrapolations/interpretations. If they don't like them, then it's up to them to provide alternative coherent and plausible inferences/assumptions/extrapolations/interpretations.

This thread came up with two other points which I have just noted here but not explored further:
- Is there such a thing as "interpretation free" research?
- Any outputs from a review should consider who the audience might be and be tailored to their needs - and if possible make them think!

A final point arose about how you come up with theories... this will be covered in another interim summary.

Geoff