
PLEASE NOTE:
When you click 'Reply' to any message it will be sent to all RAMESES List members.
If you only want to reply to the sender please remove [log in to unmask] from the 'To:' section of your email.

Hi all

Fabulous conversation and resources here – greatly appreciated.

 

I think some of these issues apply not just to screening, but also to data extraction. For data extraction, they can apply to extraction from interviews as well as extraction from literature.

 

The purpose of the overall project can also make a difference. We do quite a lot of literature review as part of evaluation, sometimes of literature that has been generated within an organisation or a program. One of the issues that becomes important in evaluation (and policy advice too) is ‘how often’ (roughly speaking) something turns up, and how serious/significant it is when it does.  So a particular CMO might be well evidenced, but occur relatively infrequently: is it significant enough to amend policy or practice to address it? Another might be well evidenced, relatively common but have minor consequences – again, how significant is it? 

 

The implication here for whether to have 10% or 20% of the sample checked relates to the significance of the findings. You might want a higher proportion of the sample checked if the likelihood of 'high significance' findings is high, and even more so if 'very high significance but rare' findings are possible.

 

The other advantage of having a higher proportion checked relates to the process of analysis in research teams (and/or students and their teams of supervisors). A higher proportion of work read by more than one person makes it easier to have discussions about 'what this means' and 'why this matters'. One strategy is to have 10% of each team member's material (be that literature or interviews) extracted by a second person as well. This means that we end up with a higher proportion of material that has been 'validated' across the project as a whole, which contributes to both quality improvement for the project and capacity development for team members, and facilitates discussions across the team.
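
As a purely illustrative sketch (the team members, item identifiers and rotation rule below are hypothetical, not a prescribed method), allocating such a 10% cross-check could be as simple as:

import random

def allocate_cross_checks(extractions, fraction=0.10, seed=42):
    """extractions maps each team member to the list of items (papers or
    interviews) they extracted; a random fraction of each member's items is
    assigned to a second reader chosen by simple rotation around the team."""
    rng = random.Random(seed)
    members = list(extractions)
    checks = {}
    for i, member in enumerate(members):
        items = extractions[member]
        k = max(1, round(fraction * len(items)))
        second_reader = members[(i + 1) % len(members)]
        checks[member] = {"second_reader": second_reader,
                          "items": rng.sample(items, k)}
    return checks

example = {"Ana": [f"paper{i}" for i in range(30)],
           "Ben": [f"interview{i}" for i in range(20)]}
print(allocate_cross_checks(example))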

 

Cheers

Gill 

 

From: Realist and Meta-narrative Evidence Synthesis: Evolving Standards <[log in to unmask]> On Behalf Of Geoff Wong
Sent: Thursday, 25 July 2019 10:28 AM
To: [log in to unmask]
Subject: Re: Screening literature results

 


I would agree with you that the 10% is not evidence-based (and, as you say, more eminence-based 😯), hence my use of inverted commas around 'accepted' practice.

 

There is a more fundamental point here: the difference between having review processes in place to ensure

1) consistent use of processes (i.e. to spot systematic errors in screening)

and

2) that 'nothing' is missed.

 

Yes, I fully accept there is overlap between the two purposes.

 

As I have argued in:

Data gathering for realist reviews: Looking for needles in haystacks. Wong G. In: Emmel N, Greenhalgh J, Manzano A, Monaghan M, Dalkin S, editors. Doing Realist Research. London: Sage, 2018  

when the purpose of a review approach is to develop theory that explains phenomena, the need for exhaustiveness is less pressing.

Though I do accept that empirical data to support my argument would be most interesting, and could form a nice piece of research-within-research in a realist review.

 

Geoff

 

 

On Wed, 24 Jul 2019 at 17:28, Andrew Booth <[log in to unmask]> wrote:

Great answer Geoff and very useful list.

 

However, methodologically it is questionable to describe something as a de facto standard when your not inconsiderable influence as an opinion leader is attested to by the authorship of all those studies!

 

In systematic reviews we tend to cluster around 20%, but it would be more informed to determine this sample by the characteristics of the topic, e.g. whether the terminology is "secure", whether there are supplementary search methods (minimizing the risk of missing items), and whether the inclusion criteria are unambiguous. So maybe there is a science here comparable to a sample size calculation.
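
Purely as an illustration (an assumption on my part, not an established method in realist or systematic reviews), a standard proportion-based formula gives a feel for what such a 'sample size calculation' for verification might look like; the expected agreement and margin of error are made-up values:

import math

def agreement_sample_size(expected_agreement=0.9, margin=0.05, z=1.96):
    """Records needed to estimate an agreement proportion to within +/- margin
    at roughly 95% confidence (assumed inputs, not an accepted standard)."""
    p = expected_agreement
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

print(agreement_sample_size())  # about 139 records, whatever the size of the pool

Note that on this (assumed) reasoning the verification sample would be an absolute number rather than a fixed percentage, which is one way the 10%/20% convention could be questioned.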

 

On our courses I tell students that the sample for verification will depend upon whether the sample is to defend the process or to train the reviewers. For example, even if the sample size decided upon is the same, in the first case it might be checked at any stage of the review prior to publication, whereas it might be done as two batches of 10%, with consensus checking after each, if it is training the team.

 

Finally, the most important thing is to make sure you screen a RANDOM sample - the first batch is likely to come from one of the main databases with better indexing, e.g. PubMed, and is therefore unrepresentative of a dual screening process and its inter-rater reliability.
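
A minimal sketch of what drawing that random sample from the pooled, de-duplicated records might look like (the record structure and source counts here are hypothetical):

import random

def draw_verification_sample(records, fraction=0.10, seed=2019):
    """Draw a random verification sample from the pooled, de-duplicated records,
    so that no single database export (e.g. the first one screened) dominates it."""
    rng = random.Random(seed)
    k = max(1, round(fraction * len(records)))
    return rng.sample(records, k)

# Hypothetical pooled records from several databases after de-duplication.
pooled = [{"id": f"rec{i}", "source": src}
          for i, src in enumerate(["PubMed"] * 60 + ["Embase"] * 40 + ["CINAHL"] * 20)]
sample = draw_verification_sample(pooled)
print(len(sample), sorted({r["source"] for r in sample}))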

 

Best wishes 

 

Andrew 

 

 

 

On Wed, 24 Jul 2019 at 15:43, Geoff Wong <[log in to unmask]> wrote:


The 10% random sample in realist reviews (I think) first appears in this publication:

Does therapeutic writing help people with long-term conditions? Systematic review, realist synthesis and economic considerations. Nyssen OP, Taylor S, Wong G, Steed L, Bourke L, Lord J, Ross C, Hayman S, Field V, Higgins A, Greenhalgh T, Meads C. Health Technology Assessment 2016;20    

 

 

It is also used in the following (for example) and in effect has become  'accepted' practice:

 

Access to primary care for socioeconomically disadvantaged older people in rural areas: a realist review. Ford J, Wong G, Jones A, Steel N. BMJ Open 2016;6:e010652

 

Towards an understanding of how appraisal of doctors produces its effects: a realist review. Brennan N, Bryce M, Pearson M, Wong G, Cooper C, Archer J. Medical Education 2017;51:1002-1013

 

Interventions to improve antimicrobial prescribing of doctors in training (IMPACT): a realist review. Papoutsi C, Mattick K, Pearson M, Brennan N, Briscoe S, Wong G. Health Serv Deliv Res 2018;6(10)    

 

Giving permission to care for people with dementia in residential homes: learning from a realist synthesis of hearing-related communication. Crosbie B, Ferguson M, Wong G, Walker D-M, Vanhegan S, Dening T. BMC Medicine 2019; 17:54

 

Underlying mechanisms of complex interventions addressing the care of older adults with multimorbidity: a realist review. Kastner M, Hayden L, Wong G, Lai Y, Makarski J, Treister V, Chan J, Lee J, Ivers N, Holroyd-Leduc J, Straus S. BMJ Open 2019;9:e025009    

 

Improving best practice for patients receiving hospital discharge letters: a realist review. Weetman K, Wong G, Scott E, MacKenzie E, Schnurr S, Dale J. BMJ Open 2019; 9:e027588    

 

 

As for protocols, have a look at these very recent ones:

 

Remediating doctors' performance to restore patient safety: a realist review protocol. Price T, Brennan N, Cleland J, Prescott-Clements L, Wanner A, Withers L, Wong G, Archer J. BMJ Open 2018;8:e025943    


Understanding the impact of delegated home visiting services accessed via general practice by community-dwelling patients: a realist review protocol. Abrams R, Wong G, Mahtani K, Tierney S, Boylan A-M, Roberts N, Park S. BMJ Open 2018;8:e024876      

 

Explaining variations in test ordering in primary care: protocol for a realist review. Duddy C, Wong G. BMJ Open 2018;8:e023117    

 

A realist review of community engagement with health research. Adhikari B, Vincent R, Wong G, Duddy C, Richardson E, Lavery J, Molyneux S. Wellcome Open Research 2019;4:87    

 

 

For a detailed discussion of rigour, see:

Data gathering for realist reviews: Looking for needles in haystacks. Wong G. In: Emmel N, Greenhalgh J, Manzano A, Monaghan M, Dalkin S, editors. Doing Realist Research. London: Sage, 2018    

 

 

Good luck!

 

Geoff

 

 

On Wed, 24 Jul 2019 at 15:06, Oatley, Chad <[log in to unmask]> wrote:


Hi Rebecca,

 

I have just undertaken a Realist Synthesis as part of my PhD; happy to talk more directly about the approach I took.

 

Email is: chad.oatley@solent.ac.uk

 

Best,

 

Chad Oatley

Public Health Senior Practitioner (PRO 411) | Public Health | Isle of Wight Council | Tel: (01983) 821000 ext 6823 | Email: [log in to unmask]

 

From: Realist and Meta-narrative Evidence Synthesis: Evolving Standards <[log in to unmask]> On Behalf Of REBECCA HUNTER 17027704
Sent: 24 July 2019 14:52
To: [log in to unmask]
Subject: Screening literature results

 


Good afternoon all

 

Could anyone help me with regard to screening my search results in my realist literature review?

 

I am in the process of writing my review protocol and I have outlined three phases to the screening process: identification, selection, and appraisal.  

 

Phase 1: identify by title and abstract 

Phase 2: select papers from full text retrieval 

 

The first two screening tools have yes/no/unsure codes.  

 

I intend to select 10% of the citations/papers to be checked at random by a second party to ensure the screening tool is applied consistently. However, how do I defend the 10% figure? Why not get 50% of the citations/papers checked? Is this just a case of manpower and expediency, or is there more to it? I am anticipating this question in my viva or, not least, from my Director of Studies, and I can't come up with a good answer other than expediency.

 

The third screening tool I plan to use to appraise the selected papers is more subjective.

 

Phase 3: Appraisal 

1.	Is this paper relevant?
2.	Does this paper have rigour?
3.	Is this paper useful?

Code: High; Moderate; Low; No value  (No value papers are to be discarded)

Random 10%  reviewed by steering group lead (TG)

 

In the screening tool I have clearly outlined what is meant by each of the terms (relevant, rigour, useful), but there will always be a degree of subjectivity in the appraisal process. Also, none of my supervisory team have any experience with realist methodology, so they may struggle to apply these terms (this is an assumption and I might be doing them a great disservice!). Given the potential difficulties in applying this tool, should I choose a higher number (30%?) to be randomly selected for consistency?
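
(For illustration only, and very much an assumption about how this could be reported rather than part of my protocol: agreement between my appraisal codes and the steering group lead's codes on the randomly selected papers could be summarised with a weighted kappa, treating High/Moderate/Low/No value as an ordered scale.)

from sklearn.metrics import cohen_kappa_score

# Hypothetical appraisal codes from two raters on the same sampled papers
# (ordinal scale: 0 = No value ... 3 = High).
levels = {"No value": 0, "Low": 1, "Moderate": 2, "High": 3}
rater_a = ["High", "Moderate", "Low", "High", "No value", "Moderate"]
rater_b = ["High", "High", "Low", "High", "Low", "Moderate"]

kappa = cohen_kappa_score([levels[c] for c in rater_a],
                          [levels[c] for c in rater_b],
                          weights="linear")  # larger disagreements are penalised more
print(round(kappa, 2))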

 

Many thanks as always to the RAMESES group for your advice and help.

 

Rebecca


Rebecca Hunter

MSK Specialist Physiotherapist, NHS Highland

PhD Student, Department of Nursing, University of the Highlands and Islands



 

To UNSUBSCRIBE please see: https://www.jiscmail.ac.uk/help/subscribers/faq.html#join