Hi Samia,

Hope all is well. I'll try to be brief since a lot of this has been covered by others. You have raised several points that we come across a lot when teaching systematic reviews.

The first part relates to how many studies you need in order to conduct a meta-analysis. In principle, pooling (meta-analysis) can be conducted with as few as two sets of analyzable data. If you have data from 2 RCTs, then you can pool them statistically. Whether you should is another story. First, we always tell our students that an assumption of reasonable clinical homogeneity is needed. If the studies are deemed to be from completely different populations, to the point that we would not expect the effect to be similar, then you should not be pooling. If you are confident in the clinical homogeneity of the studies providing the data, then you can look at evidence of statistical heterogeneity. The most common tool today is the I-squared (I²) statistic, and I won't go into detail because there are numerous scientific articles and book chapters describing the strengths, weaknesses, and controversies of each approach.
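That said, the basic calculation is simple enough to sketch. Here is a minimal illustration in Python of Cochran's Q and I-squared for two pooled studies; the log odds ratios and standard errors are made up for illustration, not taken from any real data:

    # Minimal sketch: Cochran's Q and I-squared for two hypothetical RCTs.
    # The log odds ratios and standard errors below are invented.
    effects = [0.35, 0.52]   # hypothetical log odds ratios from the 2 RCTs
    ses = [0.15, 0.20]       # hypothetical standard errors

    weights = [1 / se ** 2 for se in ses]   # inverse-variance weights
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))  # Cochran's Q
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0   # I-squared, in percent

    print(f"Pooled log OR {pooled:.3f}, Q {q:.3f}, I2 {i2:.1f}%")

Note that with only two studies Q has a single degree of freedom, so I-squared is very imprecise; one more reason the decision to pool should rest on clinical judgement first.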

The second part relates to the source of your data. You are contemplating pooling data from prospective randomized trials with prospective and retrospective cohorts. Without going into too many details of why this is inappropriate, I will note that it is very prevalent in the literature. There are many reasons why you shouldn't pool data from different study designs (experimental and observational, as broad categories), as biases inherent to each study design make the pooled results unpredictable and potentially unrepresentative of the actual true effect. The reason most people pool data from different study designs is out of necessity because of publication bias. Journals don't like to publish meta-analyses with two or three included trials. Throw in an additional 10-15 cohort studies and all of a sudden the meta-analysis looks much more robust. In fact, in many cases, you have just done the opposite.

A simple analogy would be to say that you started a hypothetical RCT with the expectation of recruiting 1000 patients in each arm. After a few months, you realize that you will not recruit this number of patients in a realistic time frame. Therefore you decide to go to your hospital database and search for patient records that fulfill the inclusion criteria for your study (retrospective study). Even so, you still don't have the required number of patients' data. Therefore you start asking patients who fulfilled the inclusion criteria but refused randomization to still be monitored on the treatment they chose with their doctor, to see the effect of treatment and side effects (prospective cohort). Here's where it all goes wrong... you then decide to combine the data from the randomized patients, the retrospective cohort, and the prospective cohort, and report on the 'total patient population'. I don't know of any clinical researchers who would say this is appropriate, but we see this regularly in published systematic reviews. Like the old saying goes... garbage in... garbage out (GIGO).

Furthermore, what would the strength of the evidence be in this situation? How could you GRADE this evidence and predict effects in a specific patient population? The only reason to do this is to increase statistical power, and the cost is everything else. Simply put... don't do it.

Hope this helps.

Ahmed

P.S. Sorry this was longer than originally anticipated.




Date: Fri, 10 May 2013 02:55:39 +0300
From: [log in to unmask]
To: [log in to unmask]
Subject:

Thank you all for your feedback. We got only 2 RCTs, 4 prospective cohorts, and one retrospective study. I guess one cannot combine 2 RCTs; do we need at least 3 studies to combine them?
The studies are similar in terms of outcomes, stage of disease, and measurements, but differ of course in some aspects such as race, gender... So, I understand from your comments that a meta-analysis is inappropriate and a narrative review is the option?
Best Regards,
Samia

Dr. Samia Alhabib, MD, PhD

Sent from Samsung Mobile



-------- Original message --------
From: "Dr. Carlos Cuello" <[log in to unmask]>
Date:
To: [log in to unmask]
Subject:


Hi Samia

I agree with Tom that both designs should not be combined. This great question drives us to another one:

If you have a body of evidence from RCTs, why would you need the observational studies?
 
I would say that depends on the outcome, and you need to decide a priori which outcomes you will be evaluating.
For example, if you need to evaluate whether your intervention provokes a rare adverse event (let's say, pancytopenia), then perhaps the cohort studies (or other observational designs) will be more useful than the RCTs.

RCTs by default are considered high-quality evidence that you can downgrade depending on certain criteria (see GRADE methods). Conversely, observational studies are considered low-quality evidence that can be upgraded depending on the GRADE criteria (very similar to the Bradford Hill criteria, actually). My suggestion is that you could add GRADE SoF (summary of findings) tables to your work.

best


On Thu, May 9, 2013 at 12:55 AM, Tom Jefferson <[log in to unmask]> wrote:
Samia, I strongly advise you to keep RCTs and cohorts separate; you cannot combine their results, as they are two different designs. Are the cohorts all prospective? If not, you should further subdivide them (if they are comparative, that is).

Good luck with interpreting the observational studies, especially if they tell you a different story from the RCTs.

Best wishes,

Tom.




On 9 May 2013 02:41, Paul Elias <[log in to unmask]> wrote:
I would not combine them in a meta-analysis, if this is the first question: you should not combine controlled studies with uncontrolled studies. I would pool the RCTs and the observational designs separately and discuss them as such. If you did combine them (though I argue no), I would then do a sensitivity-type analysis and separate the observational studies from the full set (7????) to show the impact on the summary estimate, etc.
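As a rough sketch of that kind of sensitivity check (hypothetical study labels and numbers; inverse-variance fixed-effect pooling chosen just for illustration), it could look like this:

    # Minimal sketch of a design-based sensitivity analysis (invented data).
    # Pools everything, then the RCTs alone, to show how the estimate moves.
    studies = [
        # (label, design, effect estimate, standard error) -- all hypothetical
        ("RCT 1", "rct", 0.40, 0.18),
        ("RCT 2", "rct", 0.55, 0.22),
        ("Cohort 1", "cohort", 0.15, 0.12),
        ("Cohort 2", "cohort", 0.20, 0.14),
    ]

    def pool(subset):
        """Inverse-variance fixed-effect pooled estimate for a list of studies."""
        weights = [1 / se ** 2 for *_, se in subset]
        return sum(w * s[2] for w, s in zip(weights, subset)) / sum(weights)

    print(f"All designs pooled: {pool(studies):.3f}")
    print(f"RCTs only: {pool([s for s in studies if s[1] == 'rct']):.3f}")

A large gap between the two numbers is exactly the kind of impact on the summary estimate worth reporting.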

Best,
Paul E. Alexander
 




--- On Wed, 5/8/13, Mohamed El Shayeb <[log in to unmask]> wrote:

From: Mohamed El Shayeb <[log in to unmask]>
Subject:
To: [log in to unmask]
Received: Wednesday, May 8, 2013, 11:05 PM


Hi Samia,

In my humble opinion, there are two notions in this. First, and most important, the conceptual notion. This should be determined by the researcher's discretion. I think the questions that really count, rather than just the design, are: Do the studies measure the same clinical endpoint? Are the results overall consistent in terms of the effect's magnitude and direction? Are patient characteristics broadly similar across the studies?
The second notion is the statistical notion of heterogeneity. There is no arbitrary line or threshold for heterogeneity that you may consider for such a decision. Further, the presence or absence of heterogeneity will then dictate which model to use (fixed-effect versus random-effects model). The first assumes a single common effect underlying all the studies. The latter assumes that each individual study's effect is drawn from a broader distribution of true effects, and does the calculation accordingly.
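To make the distinction concrete, here is a minimal sketch with invented effect sizes, using inverse-variance weighting for the fixed-effect model and the DerSimonian-Laird estimator (one common choice among several) for the random-effects model:

    # Minimal sketch: fixed-effect vs. DerSimonian-Laird random-effects pooling.
    # Effect sizes and standard errors are invented for illustration only.
    effects = [0.30, 0.45, 0.10, 0.60]
    ses = [0.12, 0.15, 0.20, 0.25]

    w_fixed = [1 / se ** 2 for se in ses]
    pooled_fixed = sum(w * e for w, e in zip(w_fixed, effects)) / sum(w_fixed)

    # Between-study variance (tau^2), DerSimonian-Laird estimator.
    q = sum(w * (e - pooled_fixed) ** 2 for w, e in zip(w_fixed, effects))
    df = len(effects) - 1
    c = sum(w_fixed) - sum(w ** 2 for w in w_fixed) / sum(w_fixed)
    tau2 = max(0.0, (q - df) / c)

    # Random-effects weights add tau^2 to each study's within-study variance.
    w_rand = [1 / (se ** 2 + tau2) for se in ses]
    pooled_rand = sum(w * e for w, e in zip(w_rand, effects)) / sum(w_rand)

    print(f"Fixed-effect: {pooled_fixed:.3f}")
    print(f"Random-effects: {pooled_rand:.3f} (tau^2 = {tau2:.4f})")

When tau^2 is large, the random-effects weights become more similar across studies, so small studies pull the pooled estimate more than they would under the fixed-effect model.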

In an article that I think someone on the list distributed a few weeks ago (unfortunately I can't remember who or when), the issue of dealing with heterogeneity in reviews was discussed. No criticism was directed at the presence or absence of heterogeneity itself. In fact, the article criticized the use of an inappropriate model, and emphasized the importance of addressing and quantifying heterogeneity (Q and I² statistics), interpreting it if you can, and using the proper model. I will try to find the article and send it to you.

I hope this is helpful for you and your team.

Cheers,

Mohamed El Shayeb

Health Technology and Policy Unit
University of Alberta
3025 Research Transition Facility
8308 114 street
Edmonton, Alberta, Canada
T6G2V2

Tel: +1 (780) 248 1524


On Wed, May 8, 2013 at 10:05 AM, samia Alhabib <[log in to unmask]> wrote:
Dear list,
We are doing a systematic review of the effectiveness of DC beads and conventional chemotherapy in treating hepatocellular carcinoma, and we found only 2 RCTs and 5 cohort studies that fulfill our inclusion criteria. My question: is it appropriate to combine these studies despite their different designs? Bear in mind that it is an uncommon disease...
Another question that we debated with the research team: what level of heterogeneity would be acceptable for combining the effect sizes?
Appreciate your feedback

Samia

Dr. Samia Alhabib, MD, PhD


Sent from Samsung Mobile




--
Dr Tom Jefferson
www.attentiallebufale.it



--
Carlos A. Cuello-García, MD
Centre for Evidence Based Practice & Knowledge Translation 
Tecnológico de Monterrey School of Medicine & Health Sciences
Editorial Board, The Journal of Pediatrics
CITES piso 3. Morones Prieto 3000 pte. Col. Doctores 64710 
Monterrey, NL. Mexico. 
Tel: +52.81.8888.2223 & 2154. Skype: dr.carlos.cuello
