Hi Melanie
This might sound trite but it’s not (at least it’s not intended to be!): your investigation stops when you have at least an ‘adequate’ explanation for whatever it is you’re
trying to explain. But, to be honest, I’m not sure what ‘adequate’ would be in your case.
Firstly: what is the purpose of your research/evaluation? By which I mean: what do you expect the results to be used for, and by whom? (That should help define what
it is you need to achieve – i.e. your investigation is finished when you’ve produced whatever it is that they’ll need in order to do whatever it is they need to do.)
Secondly, how is the outcome of interest defined? Is it lecture capture itself (i.e. lecturer behaviour)? Is it viewing of lectures (i.e. student behaviour)? Is it improved
learning outcomes as a result of viewing? Is it changed student behaviours in areas other than viewing? Changed pedagogy by teachers when they review their lectures and decide they could do better? Or is it the cost-effectiveness of lecture capture (LC)?
In formal realist terms, each of these is a different outcome, so each would have different mechanisms generating it – you have to be specific about that before you can theorise what the mechanisms might be. But each also implies somewhat different research/evaluation
methods, because it takes different information to answer whether the outcome has been achieved.
Of course these outcomes aren’t mutually exclusive and they can easily be organised in a hierarchy of outcomes – so it’s not the case that you only have to examine one.
It’s just that you do need to be clear what your final ‘outcome of interest’ is in order to structure the rest of your project.
In answer to your question about whether you can infer reasons from the data without asking why, my response would be: you can hypothesise on
the basis of patterns in the data, but you can’t provide evidence of mechanisms without actually investigating them. I can think of at least three or four different
mechanisms that might underpin lecturer choices about capturing or not capturing... at least as many again for why some sub-groups of students would view or not... several different mechanisms related to (re)viewing that might affect whether student learning
actually improves as a result... etc etc. How would you know whether your hypothesis was right / what the explanation was if you hadn’t checked? (And of course – as realists we don’t assume a single explanation will cover all groups – any explanation is
likely to comprise multiple CMOs that explain how different outcomes are generated for different groups.)
To frame the general issue in realist evaluation terms: to undertake a realist evaluation, you have to – at minimum – provide evidence of each element of the realist
explanatory framework: evidence of outcomes, evidence of mechanisms, and evidence of the elements of context that affect whether and which mechanisms operate. Preferably, you also use analytic techniques that demonstrate the relationships between them.
Cheers
Gill