

Hi everyone,

 

I hope this finds you well. I wonder if anyone can help?

 

I’m using some of the thinking and concepts from realist evaluation (RE) in my PhD, but not doing an actual evaluation. My research area is ‘enterprise education’ and I’m exploring particular types of competitive activities which are recommended to schools (compulsory one-day competitions, smaller-group ‘long form’ competitions). Research approaches in enterprise and entrepreneurship education have so far been based around measuring constructs such as entrepreneurial self-efficacy, business knowledge and entrepreneurial intention. Results from experimental-style studies are mixed, inconsistent and sometimes contradictory.

 

My PhD objectives are: to question these research approaches and the ‘what works’ stance; to use theory-driven thinking to develop new knowledge about outcomes and how they are generated; to work towards being able to say something to practitioners about how they might refine their programmes and targeting; and to be able to share with my research community some of the reasons/causes for the mixed results and unintended outcomes.

 

To do this, I’ve conducted 16 ‘realist-informed’ interviews with experienced people at three levels: 1) commissioners/managers (ideas about the assumptions/rationale behind the activities and what they should do); 2) school-based educators/coordinators (deep knowledge of their own school’s experience and pupils’ reactions); 3) enterprise education consultants (knowledge from working across many schools/contexts). I shared a linear framework and a more realistic framework with interview participants before the interviews to prompt their thinking.

 

In my approach to analysis, I’ve done a first round of coding in NVivo, coding chunks of data against those two frameworks. That left me with a lot of coded data under all the headings in the frameworks. Then, drawing on some of Danermark’s advice (summarise), I’ve summarised/synthesised what different people are saying under the different codes. The next step, because a summary is much easier to work with, is to start looking at outcomes: what mechanisms seem to be working (or not) in what contexts, what unintended outcomes are happening where, and why.

 

So I’m trying to pull together and synthesise across all these experiences, and to be able to say something more general about this family of interventions.

 

This brings me to my question, which is basically to check that this is OK. Is it OK to be summarising/synthesising across all these experiences and aiming to say something more general about this family of interventions? And if it is OK, are there example papers or studies which do this that I could refer to?

 

A lot of the RE papers I read are evaluations of a specific programme.

 

Sometimes I get anxious that I’m doing it all wrong and that this approach (trying to surface more generic, family-of-interventions knowledge/advice) is not legitimate. I’m using it because I want to be able to speak more broadly to my research and practice community about an activity that is so taken for granted that the research on these programmes does not even address the fact that the activities are structured competitively.

 

All advice, reassurance or warnings/red flags are most welcome and sincerely appreciated.

 

Thanks and best wishes,

 

Catherine.

 

Catherine Brentnall

Ready Unlimited

Mob: 07825 125438

Web: www.readyunlimited.com

Twitter: @areyoureadyteam

 

