First, look at this fascinating research:
http://en.wikipedia.org/wiki/John_P._A._Ioannidis
Yours
Dr. Cawthorpe
On Jan 13, 2011, at 9:13 AM, Stephen Senn wrote:
> Dear Dominic,
> 1. If you fit trial as a fixed effect ( as in a conventional meta-
> analysis), then the main effect difference between populations is
> automatically adjusted for.
> 2. The further issue is one of population by treatment interaction.
> This is a difficult point and it is hard to see how judgement can be
> avoided. For example, from one point of view every meta-analysis is
> carried out in population X but the results are used in population
> Y. This is because we studied patients yesterday (X) in order to
> decide how to treat a different set of patients tomorrow (Y).
>
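[Editorial illustration of point 1, not part of the original thread: a minimal simulation, with entirely hypothetical trial names and numbers, showing why stratifying by trial — the analogue of fitting trial as a fixed effect — adjusts for a main-effect difference between trial populations, while a naive pooled contrast is confounded when allocation proportions differ across trials.]

```python
# Sketch: two hypothetical trials share a true treatment effect of 2.0,
# but trial2 samples a population with a higher baseline outcome AND
# allocates treatment unevenly, so the naive pooled contrast is biased.
import random

random.seed(42)

TRUE_EFFECT = 2.0
# (baseline outcome, probability of treatment) per trial -- assumed values
trials = {"trial1": (0.0, 0.5), "trial2": (5.0, 0.7)}

data = []  # (trial, treated, outcome)
for name, (baseline, p_treat) in trials.items():
    for _ in range(400):
        treated = 1 if random.random() < p_treat else 0
        y = baseline + TRUE_EFFECT * treated + random.gauss(0.0, 1.0)
        data.append((name, treated, y))

def mean(xs):
    return sum(xs) / len(xs)

# Naive pooled contrast: ignores which trial a patient came from,
# so trial2's higher baseline leaks into the treatment estimate.
naive = mean([y for _, t, y in data if t == 1]) - \
        mean([y for _, t, y in data if t == 0])

# "Trial as fixed effect" contrast: average the within-trial treatment
# differences, so the between-trial baseline shift cancels out.
within = []
for name in trials:
    tr = [y for n, t, y in data if n == name and t == 1]
    ct = [y for n, t, y in data if n == name and t == 0]
    within.append(mean(tr) - mean(ct))
adjusted = mean(within)

print(f"naive pooled estimate:   {naive:.2f}")    # biased away from 2.0
print(f"trial-adjusted estimate: {adjusted:.2f}") # close to 2.0
```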
> See
>
> 1. Senn, SJ. The many modes of meta, Drug Information Journal 2000;
> 34: 535-549.
> 2. Senn, SJ. Added Values: Controversies concerning randomization
> and additivity in clinical trials, Statistics in Medicine 2004; 23:
> 3729-3753.
> 3. Senn, S. Hans van Houwelingen and the Art of Summing up,
> Biometrical Journal 2010; 52: 1-10.
>
> and also
> 4. Yates, F, Cochran, WG. The analysis of groups of experiments,
> Journal of Agricultural Science 1938; 28: 556-580.
> for an early discussion!
>
> Regards
> Stephen
>
> -----Original Message-----
> From: Evidence based health (EBH) [mailto:[log in to unmask]] On Behalf Of Dominic Hurst
> Sent: 13 January 2011 15:32
> To: [log in to unmask]
> Subject: Combining results for meta-analysis
>
> Hi, I'd appreciate some help on the following:
>
> A systematic review looks to compare the effectiveness of two
> interventions, A and B, in a particular population, X.
>
> The interventions, though, are commonly used in a discrete
> population Y also.
>
> Some of the studies retrieved compare A and B just in the desired
> population X, but others compare the interventions in a mix of
> populations X and Y.
>
> In the latter there may not have been block randomisation, so the
> proportions of X and Y receiving A or B may be unbalanced.
>
> In doing a meta-analysis of these studies, should one be cautious in
> looking to combine the results from the X-only studies with those
> extracted from the X-Y mixed studies? Does it matter that in
> removing the subgroup X from the mixed study the original
> randomisation has been disrupted, and does it matter that the A and B
> intervention groups may then be unbalanced?
>
> Would it be reasonable to test the impact of this with a
> sensitivity analysis, removing the results from the mixed studies
> and repeating the meta-analysis?
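[Editorial illustration, not part of the original thread: a minimal sketch of the proposed sensitivity analysis using standard inverse-variance fixed-effect pooling. All study names, effect estimates, and standard errors below are hypothetical; the point is only the mechanics of re-pooling after dropping the mixed-population studies and comparing the two pooled estimates.]

```python
# Sketch: pool all studies with inverse-variance weights, then re-pool
# using only the X-only studies, to see how much the mixed-population
# studies move the summary estimate.
import math

# (study, effect estimate, standard error, mixed population?) -- hypothetical
studies = [
    ("S1", 0.30, 0.10, False),
    ("S2", 0.25, 0.12, False),
    ("S3", 0.55, 0.15, True),
    ("S4", 0.50, 0.20, True),
]

def fixed_effect_pool(rows):
    """Inverse-variance fixed-effect pooled estimate and its SE."""
    weights = [1.0 / se ** 2 for _, _, se, _ in rows]
    est = sum(w * e for w, (_, e, _, _) in zip(weights, rows)) / sum(weights)
    return est, math.sqrt(1.0 / sum(weights))

all_est, all_se = fixed_effect_pool(studies)
x_only = [row for row in studies if not row[3]]
x_est, x_se = fixed_effect_pool(x_only)

print(f"all studies:    {all_est:.3f} (SE {all_se:.3f})")
print(f"X-only studies: {x_est:.3f} (SE {x_se:.3f})")
```

A shift between the two pooled estimates that is large relative to their standard errors would suggest the mixed-population studies are not estimating the same thing as the X-only studies.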
>
> Thanks,
>
> Dominic