Thank you for this, Jon, and I will go through the links you have found.
Some thoughts on the issue ...
There are 2-3 of us here doing rapid evidence reviews for GPs, health service commissioners and managers. If we respond to these clients in the timescale they want and need (say, between two and four weeks), the reviews we produce are more accurately described as scoping reviews. That means there is no way on Earth we can go through the whole (or even half) of the systematic review process: protocol definition/ratification, 5-star search, 100% full-text retrieval, detailed reading and quality appraisal of all material, detailed data extraction and any form of quantitative synthesis.
We do have peer review of our reviews, since we insist on proper governance. Unfortunately, this imposes a delay on our operations, since peer reviewers need to be highly research literate, possessed of a reasonably relevant medical background, and available and willing to work on a quid pro quo basis. Such people are hard to find.
In reality, what we end up producing is essentially a structured summary of the literature.
Is this useful? We don't know, because we don't have the capacity to do rigorous follow-up evaluations. Yes, our clients tell us that what we have given them is useful. And yes, we believe them, based on the (admittedly indirect) reasoning that our clients would not otherwise have benefited from our (painstaking, scrupulously balanced and done-with-pride) summaries of the body of evidence.
What would help us?
I'm not convinced that simply having more systematic reviews would help, given the heterogeneity of data outside the confines of precisely defined clinical interventions. But that's a separate debate ...
Some things would, however, make life easier:
1) Methods of qualitative synthesis that do not require several people to work closely and iteratively together, with the sort of timescale that is only available to full systematic reviews.
2) Theoretical frameworks for orienting and structuring the evidence assessment, especially in social areas. But these theories must be explicit and standardised enough (especially about the nuts and bolts of interventions) to make testable predictions for specific implementations of interventions.
3) The preceding, in turn, requires detailed descriptions of the context of interventions (in a nice, downloadable format amenable to further processing), so that reviewers and end-users of the evidence can make an informed decision about relevance.
4) And, in an ideal world, the end-users would be sufficiently research/evidence literate that they could do the reviews themselves - or at least find their way around the literature. Some of them are, and yes, we run training workshops and so on to 'build capacity'. Who knows whether, in the end, an 'evidence culture' will finally come into being? I hope so. But then we don't have the resources to do more than simple evaluations of the workshops either ...
Back to work now. I have searches to make ...
Roy Marsh
Researcher
Cambridgeshire & Peterborough Foundation Trust
Evidence Adoption Centre Office
Douglas House
18 Trumpington Road
Cambridge
CB2 8AH
Email: [log in to unmask]
Tel. 01223 746161
www.cpft.nhs.uk
-----Original Message-----
From: Evidence based health (EBH) [mailto:[log in to unmask]] On Behalf Of Jon Brassey
Sent: 16 April 2012 08:55
To: [log in to unmask]
Subject: Rapid versus systematic reviews - Follow-up
Importance: Low
Hi All,
A little while ago I asked about the difference between rapid and systematic reviews and if there was any published literature examining the issue. I had a number of responses, which I have collated and share below.
If I've missed any then please let me know.
Best wishes
jon
--
Jon Brassey
TRIP Database
http://www.tripdatabase.com
1) Differences between systematic reviews and health technology assessments: a trade-off between the ideals of scientific rigor and the realities of policy making. Rotstein D et al. Int J Technol Assess Health Care. 2004 Spring;20(2):177-83.
http://www.ims.uerj.br/downloads/cursoats/download/Differences_between.pdf
2) Rapid versus full systematic reviews: an inventory of current methods and practice in Health Technology Assessment. Cameron A. ASERNIP-S Report No. 60. 2007.
http://www.surgeons.org/media/297941/rapidvsfull2007_systematicreview.pdf
3) Health technology appraisal of interventional procedures: comparison of rapid and slow methods. Warren. J Health Serv Res Policy. 2007 Jul;12(3):142-146.
http://jhsrp.rsmjournals.com/content/12/3/142.abstract
4) Rapid versus full systematic reviews: validity in clinical practice? Watt A et al. ANZ J Surg. 2008;78:1037-1040.
http://www.ncbi.nlm.nih.gov/pubmed/18959712
5) Rapid reviews versus full systematic reviews: an inventory of current methods and practice in health technology assessment. Watt A et al. Int J Technol Assess Health Care. 2008 Spring;24(2):133-9.
http://digital.library.adelaide.edu.au/dspace/bitstream/2440/53200/1/hdl_53200.pdf
6) Expediting systematic reviews: methods and implications of rapid reviews. Ganann R et al. Implementation Science. 2010;5:56.
http://www.implementationscience.com/content/pdf/1748-5908-5-56.pdf
7) Evidence summaries: the evolution of a rapid review approach. Khangura S et al. Systematic Reviews. 2012;1:10.
http://www.systematicreviewsjournal.com/content/1/1/10/abstract
8) Rapid Evidence Assessment Toolkit index (Civil Service)
http://www.civilservice.gov.uk/networks/gsr/resources-and-guidance/rapid-evidence-assessment