JiscMail: email discussion lists for the UK Education and Research communities

MINORITY-ETHNIC-HEALTH Archives

MINORITY-ETHNIC-HEALTH@JISCMAIL.AC.UK


MINORITY-ETHNIC-HEALTH, July 2012

Subject: Re: Evaluation criteria - is there a gold standard?
From: Pete Hoey <[log in to unmask]>
Reply-To: Pete Hoey <[log in to unmask]>
Date: Tue, 10 Jul 2012 13:05:58 +0100
Content-Type: text/plain
Parts/Attachments: text/plain (195 lines)

Hi all,

I think this subject depends on how much notice commissioning organisations take of equality legislation.  As it plays out, gold-standard research (with the depth described) into live provision in, say, substance misuse will have ethical problems.  If one could find an ethical way around this, the research would probably be too late for commissioners to use, as it wouldn't be current data.

Because of this reality, I advocate the following broad approaches:
- Sound recording of service data.
- Analysis of service data to ensure it reflects the broad local population.
- Where service data suggests a differential take-up, engage service users / community / advocates and enquire why.
- If the enquiry suggests there is a problem of engagement, this should lead to commissioning action.
- Regular study of service data to monitor expected improvements.
- When the situation improves, so will the service data, and this makes possible a like-for-like analysis of outcomes by social group (i.e. in substance misuse this would be key substance use data).
- Consider any identified differentials between social groups and enquire further (prevalence data, local partners or service users).
- Test this data at the best frequency available, to ensure it is not an anomaly.  This can often depend on the size of the service.
- If patterns repeat, this can be classed as a persistent differential, which requires more in-depth, planned enquiry.
- Run systematic focus groups with service users from the disadvantaged social group, publish results and draw conclusions with partners.
- Commissioning action should be informed by the combined statistics, underpinned by service users' systematic comments.
- Continued monitoring of like-for-like data to track improvements, etc.
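The "differential take-up" check in the steps above can be sketched in a few lines: compare each group's share of service users against its share of the local population, and flag groups whose uptake falls well below parity for further enquiry. This is an illustrative sketch only; all figures, group names, and the 0.75 flagging threshold are hypothetical, not drawn from any real service data.

```python
# Sketch: flag differential take-up by comparing each group's share of
# service users against its share of the local population.
# All numbers are hypothetical, for illustration only.

population = {"Group A": 70_000, "Group B": 20_000, "Group C": 10_000}
service_users = {"Group A": 850, "Group B": 120, "Group C": 30}

pop_total = sum(population.values())
svc_total = sum(service_users.values())

def uptake_ratio(group):
    """Ratio of a group's share of service users to its population share.
    A value near 1.0 suggests proportional take-up; well below 1.0
    flags possible under-use worth enquiring about."""
    svc_share = service_users[group] / svc_total
    pop_share = population[group] / pop_total
    return svc_share / pop_share

for group in population:
    ratio = uptake_ratio(group)
    flag = "  <-- enquire further" if ratio < 0.75 else ""
    print(f"{group}: uptake ratio {ratio:.2f}{flag}")
```

As the later steps note, a single snapshot is not enough: the same comparison would need repeating over time to distinguish an anomaly from a persistent differential.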

I have found that this works for commissioners, but it doesn't match a gold standard, which has long been challenged in real-life situations.  So this is my gold standard.  It works in drug treatment, and obviously requires modification in the field of prevention.  I suppose the problem here is in the peer-review element and the way such research lies undiscovered by the academy, but it does achieve good local change which is rooted in real-life conversations.

Regards and thanks for an interesting discussion.
Pete

Pete Hoey
Planning Development Officer
Personalisation and Commissioning
Wellbeing and Communities
Kirklees Council
30 Market Street, Huddersfield
HD1 2HG
07966 459243
________________________________________
From: Health of minority ethnic communities in the UK [[log in to unmask]] On Behalf Of Ingleby, J.D. [[log in to unmask]]
Sent: 10 July 2012 12:55
To: [log in to unmask]
Subject: Re: Evaluation criteria - is there a gold standard?

This is a fascinating discussion and I hope it won't fizzle out too soon!

Regarding the use of qualitative studies: the reason why policy-makers tend not to rely on them is that they are unable to answer the main question the policy-maker is interested in, namely: "if we use this approach, will it give better results?" This says nothing at all about the standard of the research. Qualitative research can be good or bad, just like the quantitative sort, but neither sort can do things that it wasn't designed to do.

What qualitative research can do is to give insight into questions like: What does a "better result" mean? Why do some approaches give better results than others? What interactions and (mis)interpretations are responsible for these differences?

To me it's obvious and inevitable that you need both kinds of method to evaluate anything properly. The problem is that researchers are brought up within traditions which teach them that either qualitative or quantitative methods are the only way of discovering the truth. Because of this, "mixed-methods" research is regarded as a daring new invention, whereas in my view research should never have been anything else!

Best wishes,
David

________________________________
From: Health of minority ethnic communities in the UK [[log in to unmask]] on behalf of Farah Islam-Barrett [[log in to unmask]]
Sent: Tuesday 10 July 2012 13:22
To: [log in to unmask]
Subject: Re: Evaluation criteria - is there a gold standard?

Hi Beverley,
You raise such an important point, especially for smaller organisations, often from BME backgrounds, which have years of excellent case studies and engaging narrative, as well as the experience and understanding that positions them to inform policy makers, academics and researchers, as well as funders.

It is a very difficult environment for these groups to make a case for the value of their work.

There also seems to be a growing systematising and standardising tick-box mentality from funders with regard to evidence.

So I echo Beverley's question:

Why can’t narrative or case study research and evidence be considered to be of a high standard?

Best Regards
Farah

Farah Islam-Barrett
Race Equality Foundation
Unit 17
Dean House Studios
Greenwood Place
London
NW5 1LB

Direct line:  0207 428 1889
Office: 0207 428 1880
www.raceequalityfoundation.org.uk

This e-mail and any files transmitted with it contain information which is confidential and may also be privileged. It is for the exclusive use of the intended recipient(s). If you are not the intended recipient(s), please note that any distribution, copying or use of this communication or the information in it is strictly prohibited. If you have received this communication in error, please delete it and immediately notify the sender. This e-mail does not necessarily reflect the views of the Race Equality Foundation or its Trustees. The Race Equality Foundation is a registered charity, number 1051096 and a company limited by guarantee, registered in England with the registered number 3121679.  The Race Equality Foundation’s registered office is Unit 35, Tileyard Studios, Tileyard Road, London N7 9AH.

From: Health of minority ethnic communities in the UK [mailto:[log in to unmask]] On Behalf Of Beverley Costa
Sent: 10 July 2012 12:07
To: [log in to unmask]
Subject: Re: Evaluation criteria - is there a gold standard?

Gosh! Well no wonder very little acceptable evaluation is done.

It is extremely resource heavy and results often just confirm what people know anyway.

With funders increasingly demanding evidence of impact from services, creating crude systems which require users to tick a box saying that an intervention has made a difference to their lives is often the priority.

I agree completely that follow-up to see if the effect is maintained is very important. However, there are all kinds of ethical reasons why following up someone with, for example, a mental health issue might be counterproductive to their wellbeing.

We can devise all kinds of systems to measure people’s experiences but we will always run into the problem that people are individual and inconsistent and refuse to be systematised.

Why can’t narrative or case study research and evidence be considered to be of a high standard?


Regards to all

Beverley

Beverley Costa DPsych, UKCP reg.
Chief Executive Officer
Mothertongue
Tel: 0118 957 6393
Fax: 0118 323 4575
www.mothertongue.org.uk



________________________________
From: Health of minority ethnic communities in the UK [mailto:[log in to unmask]] On Behalf Of M & M Johnson
Sent: 10 July 2012 10:42
To: [log in to unmask]
Subject: Re: Evaluation criteria - is there a gold standard?

There are so many rules for conducting evaluations - most of which seem to be unmet in practice. We have tried to apply them in evaluating (systematic reviews of) evaluations!

My personal checklist looks a bit like this:

- Predicted outcome/change (otherwise no evidence of effect), i.e. prior goals!
- Adequate measurement of baseline states and proper, detailed description of all key variables
- Adequate coverage of confounders/covariates (i.e. things going on at the same time as the 'intervention' that might have an effect: is my grass growing too long because of the rain, or the warmth, or the interaction of the two, or just because it has been so wet I cannot get the lawnmower out to cut it?) (NB the days have also got longer, so look for 'contextual' changes as alternative explanations.)
- Explicit theory (this is what is so often lacking) that links the intervention with the (expected) outcomes, e.g. 'there is a psychological link between trigger events and action, so the increased probability of a trigger event like seeing a reminder will lead to attendance for a checkup'
- Enough time to demonstrate effect, and follow-up to see if the effect is maintained when the stimulus is withdrawn
- Publication in peer-reviewed / publicly accessible sites

'They liked it' / 'we gave out N packs', etc. is NOT an evaluation.
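The point about contextual changes (the grass/lawnmower example) is essentially the rationale for a difference-in-differences check: compare the change in the group receiving the intervention against the change in a comparison group over the same period, so that shared background trends cancel out. A minimal sketch, with entirely hypothetical figures:

```python
# Sketch of a difference-in-differences check for contextual changes:
# the comparison group absorbs the shared background trend (the "rain
# and longer days"), so the net figure is closer to the intervention's
# own effect. All figures are hypothetical.

baseline = {"intervention": 40.0, "comparison": 42.0}  # e.g. % attending checkups
followup = {"intervention": 55.0, "comparison": 48.0}

def diff_in_diff():
    """Change in the intervention group minus change in the comparison
    group, i.e. the effect net of shared contextual change."""
    change_int = followup["intervention"] - baseline["intervention"]
    change_cmp = followup["comparison"] - baseline["comparison"]
    return change_int - change_cmp

raw = followup["intervention"] - baseline["intervention"]
print(f"Raw change: {raw:+.1f} points")
print(f"Net of background trend: {diff_in_diff():+.1f} points")
```

Note this only handles confounders shared by both groups; it does not substitute for the explicit theory or baseline description the checklist also asks for.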

Look at the CONSORT / EQUATOR/ etc sites for more ...


- EQUATOR (international network)
- CONSORT (reporting standards for RCTs)
- STROBE (for observational studies)
- RATS (for qualitative studies)
- PRISMA (systematic reviews)

- "Enhancing the quality and transparency of health research" Groves T, 2008, BMJ 337: 66
- But see also the new NICE practice (NICE 2007, 2010) and comments by Tugwell et al 2010, with their focus on 'fitness for purpose' and equity concerns.

In NICE we used the following checklist to appraise evidence:


Section 1: theoretical approach

1.1 Is a qualitative approach appropriate? For example:
- Does the research question seek to understand processes or structures, or illuminate subjective experiences or meanings?
- Could a quantitative approach better have addressed the research question?
(Appropriate / Inappropriate / Not sure)
Comments: Aimed to explore healthcare professionals' experiences.

1.2 Is the study clear in what it seeks to do? For example:
- Is the purpose of the study discussed (aims/objectives/research question(s))?
- Is there adequate/appropriate reference to the literature?
- Are underpinning values/assumptions/theory discussed?
(Clear / Unclear / Mixed)
Comments: Introductory paragraphs outline all these points.

Section 2: study design

2.1 How defensible/rigorous is the research design/methodology? For example:
- Is the design appropriate to the research question?
- Is a rationale given for using a qualitative approach?
- Are there clear accounts of the rationale/justification for the sampling, data collection and data analysis techniques used?
- Is the selection of cases/sampling strategy theoretically justified?
(Defensible / Not defensible / Not sure)
Comments: Design is appropriate to question. Rationale given for purposeful sample and use of focus groups.

3.1 How well was the data collection carried out? For example:
- Are the data collection methods clearly described?
- Were the appropriate data collected to address the research question?
- Was the data collection and record keeping systematic?
(Appropriate / Inappropriate / Not sure/inadequately reported)
Comments: Probably appropriate, but section too brief to be sure. Audiotapes and verbatim transcriptions of focus groups were kept.

4.1 Is the role of the researcher clearly described? For example:
- Has the relationship between the researcher and the participants been adequately considered?
- Does the paper describe how the research was explained and presented to the participants?
(Clear / Unclear / Not described)
Comments: No discussion of relationship between researcher and participants. Invitation and information about the study was sent to potential participants via local service contacts.

4.2 Is the context clearly described? For example:
- Are the characteristics of the participants and settings clearly defined?
- Were observations made in a sufficient variety of circumstances?
- Was context bias considered?
(Clear / Unclear / Not sure)
Comments: Table of characteristics given, settings clearly defined. Focus groups took place in a range of circumstances. Context bias considered: "...findings must be interpreted with regard to the study context..."

4.3 Were the methods reliable? For example:
- Were data collected by more than one method?
- Is there justification for triangulation, or for not triangulating?
- Do the methods investigate what they claim to?
(Reliable / Unreliable / Not sure)
Comments: More than one composition of focus groups: professional groups and multidisciplinary teams. No other information given.

5.1 Is the data analysis sufficiently rigorous? For example:
- Is the procedure explicit, i.e. is it clear how the data were analysed to arrive at the results?
- How systematic is the analysis; is the procedure reliable/dependable?
- Is it clear how the themes and concepts were derived from the data?
(Rigorous / Not rigorous / Not sure/not reported)
Comments: Constant comparison. Deviant cases sought to assess integrity of categories.

5.2 Are the data 'rich'? For example:
- How well are the contexts of the data described?
- Has the diversity of perspective and content been explored?
- How well have the detail and depth been demonstrated?
- Are responses compared and contrasted across groups/sites?
(Rich / Poor / Not sure/not reported)
Comments: Boxes 1-4 compare and contrast within themes. Contexts well described.

5.3 Is the analysis reliable? For example:
- Did more than one researcher theme and code transcripts/data?
- If so, how were differences resolved?
- Did participants feed back on the transcripts/data (if possible and relevant)?
- Were negative/discrepant results addressed or ignored?
(Reliable / Unreliable / Not sure/not reported)
Comments: All three researchers, from different backgrounds, coded and themed. Discussion and agreement. Summary of results sent to all participants for comment. Seven attended a follow-up focus group to check validity of interpretation. Deviant cases sought to assess the integrity of the categories identified.

5.4 Are the findings convincing? For example:
- Are the findings clearly presented?
- Are the findings internally coherent?
- Are extracts from the original data included?
- Are the data appropriately referenced?
- Is the reporting clear and coherent?
(Convincing / Not convincing / Not sure)
Comments: Clearly presented, original extracts included and referenced.

5.5 Are the findings relevant to the aims of the study?
(Relevant / Irrelevant / Partially relevant)
Comments:

5.6 Are the conclusions adequate? For example:
- How clear are the links between data, interpretation and conclusions?
- Are the conclusions plausible and coherent?
- Have alternative explanations been explored and discounted?
- Does this study enhance understanding of the research subject?
- Are the implications of the research clearly defined?
- Is there adequate discussion of any limitations encountered?
(Adequate / Inadequate / Not sure)
Comments: Clear links between data, interpretation and conclusions. Enhances understanding of the subject. Discussion of limitations and generalisability.

6.1 How clear and coherent is the reporting of ethical considerations? For example:
- Have ethical issues been taken into consideration?
- Are ethical issues discussed adequately; do they address consent and anonymity?
- Have the consequences of the research been considered, for example raising expectations, changing behaviour?
- Was the study approved by an ethics committee?
(Clear / Not clear / Not sure/not reported)
Comments: Ethics committee approval gained. Written informed consent given by participants.
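For appraising several studies against a checklist like this, the ratings can be recorded as data so they are easy to tally and compare. This is a minimal sketch of that bookkeeping, not any official NICE tool; the item labels and ratings below are illustrative only.

```python
# Sketch: record checklist ratings as data so appraisals can be
# tallied across items (and, with a list of these, across studies).
# Item labels and ratings are illustrative, not a real appraisal.

from collections import Counter

# One appraised study: rating band per checklist item.
appraisal = {
    "1.1 qualitative approach appropriate": "appropriate",
    "1.2 study clear in what it seeks to do": "clear",
    "2.1 design defensible/rigorous": "defensible",
    "3.1 data collection well carried out": "not sure",
    "4.1 role of researcher clearly described": "not described",
}

def summarise(ratings):
    """Count how many checklist items fell into each rating band."""
    return Counter(ratings.values())

for band, count in summarise(appraisal).items():
    print(f"{band}: {count}")
```

Keeping the free-text "Comments" alongside each rating (e.g. as a second field) would preserve the qualitative justification the checklist depends on.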


The NICE checklist is based on checklists in:

- Spencer L, Ritchie J, Lewis J, Dillon L (2003) Quality in qualitative evaluation: a framework for assessing research evidence. London: Government Chief Social Researcher's Office. Available from: www.strategy.gov.uk/downloads/su/qual/downloads/qqe_rep.pdf
- Public Health Resource Unit England (2006) Critical Appraisal Skills Programme (CASP): making sense of evidence - 10 questions to help you make sense of qualitative research. Available from: www.phru.nhs.uk/Doc_Links/Qualitative%20Appraisal%20Tool.pdf
- National Training and Research Appraisal Group (NTRAG); contact: www.ntrag.co.uk
- British Sociological Association (BSA); contact: www.britsoc.co.uk

Methodology checklist: qualitative studies
© National Institute for Health and Clinical Excellence (January 2009)

See also

- Tugwell P, Petticrew M, Kristjansson E, Welch V, Ueffing E, Waters E, Bonnefoy J, Morgan A, Doohan E, Kelly MP (2010) "Assessing equity in systematic reviews: realising the recommendations of the Commission on Social Determinants of Health". BMJ 341: c4739. doi: 10.1136/bmj.c4739
- Welch V, Tugwell P, Petticrew M, de Montigny J, Ueffing E, Kristjansson B, McGowan J, Benkhalti Jandu M, Wells GA, Brand K, Smylie J (2010) "How effects on health equity are assessed in systematic reviews of interventions". Cochrane Database Syst Rev 2010 Dec 8;12:MR000028

Hope all this helps...

(sorry about the berserk formatting: don't know what is happening here)

Mark
Moderator, Minority-Ethnic-Health Discussion List
www.jiscmail.ac.uk/minority-ethnic-health
