Dear Amit,
Sounds like you have a big problem. Validity will be a major flaw in your study. You will need to be up-front about this and see what you can do.
Here are some practical suggestions. There are no easy answers.
First I describe what I would do in the perfect world. Then I use that description to suggest what you might do now that you are already underway. Other people on the list will no doubt have sources of use, but given your question I think it might be best to start from a practical step-by-step description of what I would do. Other people might do it differently.
A usual approach is for this to be managed prospectively through an instrumentation study - often a pilot - with a sample of the target population.
The common steps for this are:
1. The 'gold standard' instrument is translated, often with the input of a panel of clinical, consumer and linguistic experts.
2. Then it is 'back-translated' by interpreters, preferably with clinical expertise appropriate to the test administrators/data collectors. These are different translators from those in step one. Back-translation tells you whether you are getting the measurement/domain information you thought you were getting from the translation. Items can then be further refined, often using an expert panel again. This is the first step in establishing the content and face validity of the translated instrument.
3. You then have a usable instrument that you can administer in the 'real world' to test its appropriateness. Even though the language of the items may be right from a clinical point of view, there can be cultural or administrative protocol differences that also need to be identified. Using a test in a different language is more than just translating words. To ensure clinical/research utility, there is often a small pre-pilot trial with people you would clinically expect to demonstrate extreme scores - this gives you some idea of whether the translated instrument provides the sensitivity you would expect - and you should also gather feedback on the experience of using the instrument from both the patients and the administrators. The translated instrument is then refined again, hopefully to 'research-ready' stage.
4. Conduct a pilot study of the sensitivity, utility and validity of the instrument. If this is not possible because you have a small clinical population to target, then you might consider an instrumentation sub-study using part of your sample early on - but if you are doing an RCT, protect your primary end point from Type I errors.
5. The findings of your translation and adaptation of the 'gold standard' instrument can be just as important as the clinical findings - so the approach to the task, and the attention given to understanding and sharing what it reveals, matter just as much. The increasingly diverse world of health care consumers and workers means gold standard instruments need to be available to more cultures and people.
If none of this can happen prospectively (as in your case), see what you can do by examining the data you already have in an instrumentation sub-study before you use your data set to analyse outcomes. If you do this, at least you will understand the data quality better and will be able to report findings more meaningfully. The sub-study should examine clinical/research utility (the administrator and patient experience); validity (document and critique the translation and the factors you did or did not consider in adapting the items and the administration protocol); and internal consistency reliability. Depending on your sample size, you could also look at factor validity and see whether it reflects the original reports, and take any other opportunity to do further reliability work.
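On the internal consistency point: a common statistic is Cronbach's alpha, which compares the sum of the item variances with the variance of the total score. A minimal sketch in Python, with a purely hypothetical set of item scores (the function name and data are illustrative, not from any study mentioned here):

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for rows of respondents x item scores.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    """
    items = list(zip(*item_scores))            # columns = items
    k = len(items)
    item_var = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(row) for row in item_scores])
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical data: 4 respondents answering 3 Likert-type items
data = [
    [3, 4, 3],
    [4, 5, 4],
    [2, 2, 3],
    [5, 5, 5],
]
print(round(cronbach_alpha(data), 2))  # prints 0.94
```

In practice you would run this on the translated instrument's item-level data from the sub-study sample and compare the result against the alpha reported for the original instrument.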
You should also ensure you have back-translation even though the instrument has already been used - if problems are identified, at least you can report what they are; and if they are terrible, then you will just need to report the items at face value and not purport to have the 'gold standard' measurement quality of the original.
At best you might be lucky and find that your back translation is good, your validity patterns reflect the original, and that you have good internal consistency.
Using a 'gold standard' instrument without a rigorous approach to examining and establishing its psychometric properties can be worse than using a 'home-grown' instrument, because you can erroneously assume that the measurement properties and dimensions are the same when they are not. It falsely presents the study as more rigorous than it is.
So even though you find yourself in a difficult position, it is well worth exploring the instrument's properties as best you can now; otherwise your results will be meaningless or misleading.
If these things are out of your depth, then you should approach research colleagues in your area to see whether they might be interested in collaborating. Instrumentation studies on data sets that have already been collected can be excellent projects for postgraduate students who must do a research study but have limited or no time for ethics approval and clinical data collection. In this event the student, the supervisor and you are looking at a shared NEW project on an existing data set that will complement, but not replace, your original study. You may find that the supervisor or student therefore becomes the lead author in this case. It is better than having no psychometric study at all.
In this instance, you do the instrumentation studies first; then, with the psychometric and clinical utility attributes of the translated instrument known, you can disseminate your outcome findings with citations to the instrumentation studies - so everything is transparent.
Remember this approach is just my suggestion from the field. There may be other approaches other folk have found useful.
Good luck.
Anne
Anne Cusick
Professor of Interprofessional Health Sciences
School of Biomedical and Health Sciences
University of Western Sydney
Australia
________________________________
From: Evidence based health (EBH) on behalf of Amit Raval
Sent: Fri 1/22/2010 7:24 PM
To: [log in to unmask]
Subject: Validity of qualitative data in regional language
Dear all,
I have a problem encountered frequently during my study, and I would be
happy to receive your invaluable suggestions.
At the study site, patients are interviewed by a Clinical Research
Co-ordinator, a student (for a thesis), or an investigator to fill in the CRF.
We encounter various scales for depression, quality of life, measures of
adherence, etc. Sometimes a scale is only validated in English. At the
site, patients who speak the local language and do not understand English
are interviewed by converting the questions into that (local) language, but
responses are recorded in English. Most of the questions are similar, but
there are some differences in scale cut points or validity. How should we
quantify this difference?
My questions are:
1) Is this classified as a bias? A language bias?
2) Do we have to establish internal validity against gold standards?
3) What if we do not have any gold standard due to the language barrier? In
such a case, do we have to interview a panel of experts and record their
agreement or disagreement over the converted version of the particular scale
in the above scenario?
4) To establish a new scale, what should be done to achieve internal and
external validity?
Thank You
Sincerely,
Amit Raval
M.Pharm (Pharmacy Practice)