Hmm... forgive me for putting this a bit bluntly, but if I’ve understood you correctly, you have a policy with not only patchy take-up but, even where it is implemented, minority use and high cost... yet you’re going to try to increase implementation without first finding out whether it is in fact feasible that it would achieve its intended longer-term outcomes, or do so cost-effectively?

 

It’s your research project of course! But my day job is as an evaluator – and if you were a client, I’d be advising that you turn this around, and investigate student use first – on the basis that ‘why would you invest in increasing implementation if implementation doesn’t necessarily result in use?’

 

Forgive my total ignorance of the area – but is there evidence that lecture capture improves student learning outcomes?

Gill

 

 

From: Realist and Meta-narrative Evidence Synthesis: Evolving Standards [mailto:[log in to unmask]] On Behalf Of Melanie King
Sent: Thursday, 27 August 2015 11:31 PM
To: [log in to unmask]
Subject: Re: Potential theories for demand, supply and take-up

 

Gill,

 

Many thanks – this is my first attempt at applying RE and so I realise I am in need of this sanity check!

 

Original purpose: a new policy is in effect to promote rapid uptake in the use of lecture capture (LC), based on persuasion, guidelines and greater support.  I need to know, ‘why has this worked in some departments and not others?’  If uptake by staff is patchy in some areas, why is this, and what can be changed about the intervention to expedite uptake?  So the original outcome of interest was staff adoption.

 

However...

 

From a mass of data on use, it has emerged that, despite staff adoption, student use is still very patchy.  So from a policy point of view, the original programme theory is perhaps flawed?  The objective of this evaluation is to come up with an LC policy (a new programme theory) that maximises use of the captured sessions.  Geoff mentioned that certain staff behaviours may trigger an ‘advantage’ mechanism in students, for example.  So I can collect evidence on this, I think.

 

Although, I must admit, using RE is generating more theories than I am able to test, which provide tantalising paths to go down, but I just don’t have the time.  This time round I can evidence the mechanisms at play for staff with respect to take-up, and also investigate the ‘advantage’ mechanism in students.  However, the rest will have to be theory building, as you say.

 

All of your help is so very much appreciated!

 

Best wishes,

 

Melanie

 

From: Gill Westhorp <[log in to unmask]>
Date: Thursday, 27 August 2015 14:09
To: "'Realist and Meta-narrative Evidence Synthesis: Evolving Standards'" <[log in to unmask]>, Melanie King <[log in to unmask]>
Subject: RE: Potential theories for demand, supply and take-up

 

Hi Melanie

This might sound trite but it’s not (at least it’s not intended to be!): your investigation stops when you have at least an ‘adequate’ explanation for whatever it is you’re trying to explain. But to be honest I’m not sure what that is. 

 

Firstly: what is the purpose of your research / the evaluation? – By which I mean – what do you expect the results to be used for, by whom?  (That should help define what it is you need to achieve – i.e. your investigation is finished when you’ve produced whatever it is that they’ll need in order to do whatever it is.) 

 

Secondly, how is the outcome of interest defined?  Is it lecture capture (i.e. lecturer behaviour)? Is it viewing of lectures (i.e. student behaviours)?  Is it improved learning outcomes as a result of viewing?  Is it changed student behaviours in areas other than viewing? Changed pedagogy by teachers when they review their lectures and decide that they could do better? Or is it cost effectiveness of LC?  In formal realist terms - each of these is a different outcome so each would have different mechanisms generating it – so you have to be specific about that before you can theorise what the mechanisms might be.  But also – each implies somewhat different research/evaluation methods, because it takes different information to answer whether the outcome has been achieved.

 

Of course these outcomes aren’t mutually exclusive and they can easily be organised in a hierarchy of outcomes – so it’s not the case that you only have to examine one.  It’s just that you do need to be clear what your final ‘outcome of interest’ is in order to structure the rest of your project.

 

In answer to your question about whether you can infer reasons from the data without asking why, my response would be: you can hypothesise on the basis of patterns in the data, but you can’t provide evidence of mechanisms without actually investigating them.  I can think of at least three or four different mechanisms that might underpin lecturer choices about capturing or not capturing... at least as many again for why some sub-groups of students would view or not... several different mechanisms related to (re)viewing that might affect whether student learning actually improves as a result... etc etc.  How would you know whether your hypothesis was right / what the explanation was if you hadn’t checked?  (And of course – as realists we don’t assume a single explanation will cover all groups – any explanation is likely to comprise multiple CMOs that explain how different outcomes are generated for different groups.)

To frame the general issue in RE terms: to undertake a realist evaluation, you have to – at minimum – provide evidence of each element of the realist explanatory framework: evidence of outcomes, evidence of mechanisms, and evidence of the elements of context that affect whether and which mechanisms operate.  Preferably, you also use analytic techniques that demonstrate the relationships between them.

 

Cheers

Gill