Here is a response from a colleague of mine who is not a part of this list.
 
Jessica
 
Andrew,
 
We - the Effective Public Health Practice Project - use a checklist for all quantitative studies and one for qualitative studies. In public health, we do have a very mixed bag of study designs. We have not done a systematic review on public health interventions that have incorporated qualitative studies. This is probably because we do reviews on effectiveness and most of the published literature is quantitative. A study was done by the Cochrane statisticians on tools used to evaluate studies; ours was, I think, the second most comprehensive of the 200 tools evaluated. All our systematic reviews are listed in the Cochrane DARE database. 
 
An article on the tool was published in Worldviews on Evidence-based Nursing in 2004 and is attached as EPHPP STTI article 2003.doc
 
A blurb on the EPHPP and where it fits in as well as a pdf of our systematic reviews can be found at http://www.city.hamilton.on.ca/PHCS/EPHPP/EPHPPResearch.asp.
 
I have also attached our quantitative QA tool and its dictionary, as well as our qualitative tool.
 
We are looking at using the quantitative QA tool in two different ways. The first, and what we have previously done, is to sum the scores of each component. This works well if there is a sufficient number of RCTs. The method we just used on our most recent review, on healthy body weights, was to look at which quality components are the most meaningful given the content of the review and to rank the articles on that basis. How meaningful is blinding when all the study participants have to have parental consent? How meaningful is a >80% follow-up rate when you have an intervention dealing with addicts who live on the street? These items do not discriminate, so instead of making all the articles weak, these quality components are given the least weight.
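 
To make the second approach concrete, here is a minimal sketch in Python of weighting quality components and ranking articles on the weighted total. The component names, ratings, and weights are made up for illustration; this is not the actual EPHPP tool, its dictionary, or its scoring rules.

    # Illustrative sketch only: made-up component names, ratings and weights,
    # not the actual EPHPP tool or dictionary.
    # Component ratings per article (3 = strong, 2 = moderate, 1 = weak).
    articles = {
        "Study A": {"selection_bias": 3, "study_design": 3, "blinding": 1, "withdrawals": 1},
        "Study B": {"selection_bias": 2, "study_design": 3, "blinding": 1, "withdrawals": 2},
        "Study C": {"selection_bias": 3, "study_design": 2, "blinding": 1, "withdrawals": 3},
    }

    # Review-specific weights: components that cannot discriminate in this body
    # of evidence (e.g. blinding, follow-up rate) get the least weight instead
    # of dragging every article down to "weak".
    weights = {"selection_bias": 1.0, "study_design": 1.0, "blinding": 0.25, "withdrawals": 0.25}

    def weighted_score(ratings):
        # Sum each component rating multiplied by its review-specific weight.
        return sum(weights[component] * rating for component, rating in ratings.items())

    # Rank the articles from strongest to weakest on the weighted total.
    for name in sorted(articles, key=lambda a: weighted_score(articles[a]), reverse=True):
        print(name, round(weighted_score(articles[name]), 2))

The weights here are arbitrary and only illustrate the idea; in practice the choice of weights would follow from the content of the review, as described above.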
 
This is probably more than you asked for. If you have any questions, please e-mail.
 
Sandra
 
Sandra Micucci, MSc, PhD Candidate
Project Coordinator
Effective Public Health Practice Project
Public Health Research, Education and Development Program (PHRED)
Public Health and Community Services Department
City of Hamilton
905-546-2424, ext. 1570
-----Original Message-----
From: Critical Appraisal Skills Programme (CASP) [mailto:[log in to unmask]] On Behalf Of Andrew Booth
Sent: Wednesday, January 12, 2005 5:38 AM
To: [log in to unmask]
Subject: Re: Choice of Critical Appraisal Checklists

Sorry, I am resending this one with a message header this time, just in case, like me, you automatically delete anything without a message header.

 

Dear All

 

I am hoping that you can help me with this one. We are currently conducting appraisal and review activity for a national governmental organisation. The output is a digest of significant articles on a pre-specified topic with a brief critically appraised summary for each item. The articles comprise all research methodologies (e.g. RCTs, observational studies, qualitative research, and even secondary forms such as systematic reviews and guidelines).

 

Specifically, in connection with the appraisal part of the process, it seems we have three options:

1. Use a *checklist specific to each study design*. This is the most time-intensive option and could result in an inconsistent format of reporting and analysis.

2. Use a *mixed-methods checklist* and only use the questions that apply to each article. This reduces inconsistency but will result in redundancy of some checklist items and may prove unwieldy.

3. Use a *generic checklist* that can be used for all study types. This would result in consistency of reporting but might require supplementary questions regarding specific study types (e.g. a question on randomisation specifically for RCTs).

 

Obviously, having been involved in systematic reviews, critical appraisal and HTAs for almost a decade, we have amassed a large number of study-design-specific checklists. My question therefore relates ONLY to options 2 and 3 above:

 

1. Have any of you used either a mixed-methods checklist or a generic checklist to consistently appraise a “mixed bag” of study types and designs?

2. If so, is there a particular critical appraisal tool/checklist/instrument that you would recommend for this purpose?

 

I am willing to collate replies if there is wider interest in this topic.

 

Thanking you for your assistance – Yours in anticipation

 

Andrew  

 

Andrew Booth

Director of Information Resources and Senior Lecturer in Evidence Based Healthcare Information