Hi all
Ah yes, I so agree. I consider theory building to be a necessary part of science and therefore of good science. I consider good theory-building practice to include being as well informed as one can be, within all the practical constraints. And I see realist (and probably meta-narrative, though I haven't tried one of those) approaches as being purpose-designed for the task... it's one of the reasons why I like it.
To expound: I believe that there is no such thing as 'interpretation-free' analysis - there are, after all, levels of interpretation and decision-making reflected in every step of every piece of prior research or evaluation that we synthesise, as well as in every step of our review process itself. I don't see a problem with taking that one step further and making 'next step interpretations' (i.e. theory building) - so long as we are explicit about the fact that that's what we're doing. In fact, I almost consider it a duty. Who else is in a better position to do it than those who have immersed themselves in the research and evidence? So while I have sympathy for the desire to stay 'close to the literature', I see that as 'being as well informed as one can be' before proposing a theory that accounts for the findings.
As for practitioners not having theory - here I in fact disagree. I think they do have theories - NB plural - albeit sometimes at a naive level! It's possible in realist evaluation to unpack those. I tell stories in my training about experiences in doing so - it's often then that practitioners find out that they're operating on different theories than their colleagues, even if they work together all the time. This provides a wonderful opportunity to deepen reflective practice, and to assist them to access relevant MRT and evidence related to same. The difficulty in realist synthesis is that we don't have the same direct access to the practitioners to find out what their theories are, and those who did the primary research/evaluation didn't always find out, or didn't record same.
That's both a strength and a weakness for the synthesis analysts - a weakness because it reduces the clues about where to look for theory, and because it means we can't check whether interventions were in fact built on 'the same' theory (now there's an issue for reflective thought: what would constitute adequate evidence that the theories were in fact 'the same'?). But it can be turned into a strength, because our job as synthesists is not necessarily to test THEIR theories but to build and test theory ACROSS (insert here whichever version you're doing)... across manifestations of an intervention; across interventions using similar theories; across interventions using different theories but similar mechanisms; or, in MNR, across similar topics but from widely disparate theoretical bases.
Gotta go. Day job calls.
Cheers
Gill