
Paaveen, I completely agree with what John says. Taking the results at face value, the main conclusion should be that the evidence from the two studies is highly conflicting. Study 2 appears to be much smaller than study 1, but this does not in any way undermine that conclusion.

 

This sort of thing CAN genuinely happen. Nevertheless, if there is any possibility of opening the audit trails for the two studies, one explanation worth checking is that in one of them the two groups have somehow been interchanged, either exposed/unexposed or event/no event. The unthinkable CAN happen! It is partly to prevent such disasters that the framework for carrying out clinical trials and similar studies now incorporates many more safeguards than ever before. I believe that such things HAVE happened, occasionally, in the history of medical research, without ever being noticed. And, with the explosion in research volume, this will still happen occasionally, even with more safeguards.

 

You may or may not be familiar with the expression 'Murphy's law': anything that can go wrong will go wrong, sooner or later. Humans get things mixed up. In a study of the reasons why several hundred women failed to attend breast cancer screening after being sent an invitation plus a reminder letter (unpublished, except that it is referred to in chapter 1 of my book), one-sixth of non-attendances were attributed to confusion; whether on the woman's part or that of the breast screening service we shall never know. This is a stark illustration of how endemic mix-ups are.

 

Robert G. Newcombe PhD CStat FFPH HonMRCR

Professor of Biostatistics

Institute of Primary Care and Public Health

School of Medicine

Cardiff University

Neuadd Meirionnydd

Heath Park, Cardiff CF14 4YS

 

My book Confidence Intervals for Proportions and Related Measures of Effect Size is available at http://www.crcpress.com/product/isbn/9781439812785

 

See http://www.facebook.com/confidenceintervals

 

Home page https://sites.google.com/site/robertgnewcombe/

 

 

 

From: A UK-based worldwide e-mail broadcast system mailing list [mailto:[log in to unmask]] On Behalf Of John Sorkin
Sent: 15 November 2016 16:58
To: [log in to unmask]

Paaveen,

 

I think you need to speak to a statistician, as it appears that you don't fully understand odds ratios. Study 1 shows that exposure increases the risk of the outcome (OR > 1), while in study 2 exposure decreases the risk of the outcome (OR < 1). When the two ORs are combined, one showing increased risk with exposure and the other decreased risk (i.e. protection) with exposure, it is not surprising that the overall OR is non-significant.
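
For anyone who wants to see the arithmetic behind this, here is a minimal Python sketch. It assumes the pooled figure came from a DerSimonian-Laird random-effects model — an assumption on my part, since the poster did not say which method was used, but that model reproduces the reported 0.81 quite closely from the two CIs. The helper name `log_or_and_se` is my own.

```python
import math

Z95 = 1.959964  # two-sided 95% normal quantile

def log_or_and_se(or_, lo, hi):
    """Recover the log-OR and its standard error from a reported 95% CI."""
    return math.log(or_), (math.log(hi) - math.log(lo)) / (2 * Z95)

y1, se1 = log_or_and_se(1.18, 1.10, 1.27)   # study 1
y2, se2 = log_or_and_se(0.52, 0.32, 0.84)   # study 2

# Fixed-effect (inverse-variance) pooling.
w = [1 / se1**2, 1 / se2**2]
mu_fe = (w[0] * y1 + w[1] * y2) / sum(w)

# DerSimonian-Laird estimate of the between-study variance tau^2.
q = w[0] * (y1 - mu_fe)**2 + w[1] * (y2 - mu_fe)**2
c = sum(w) - (w[0]**2 + w[1]**2) / sum(w)
tau2 = max(0.0, (q - 1) / c)                # df = k - 1 = 1 here

# Random-effects pooling: tau^2 is added to each within-study variance.
w_re = [1 / (se1**2 + tau2), 1 / (se2**2 + tau2)]
mu_re = (w_re[0] * y1 + w_re[1] * y2) / sum(w_re)
se_re = math.sqrt(1 / sum(w_re))
lo_re = math.exp(mu_re - Z95 * se_re)
hi_re = math.exp(mu_re + Z95 * se_re)

print(f"fixed-effect OR   {math.exp(mu_fe):.2f}")
print(f"random-effects OR {math.exp(mu_re):.2f} ({lo_re:.2f}, {hi_re:.2f})")
```

Run as written, the random-effects pooled OR comes out at roughly 0.81 with a CI close to (0.37, 1.81) — the exact limits depend on rounding of the inputs. Note the contrast: fixed-effect pooling gives an OR above 1, dominated by the much larger study 1, whereas random effects inflates the weight of the small study and the width of the interval because of the enormous heterogeneity between the two estimates.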

 

John David Sorkin M.D., Ph.D.
Professor of Medicine
Chief, Biostatistics and Informatics
University of Maryland School of Medicine Division of Gerontology and Geriatric Medicine
Baltimore VA Medical Center
10 North Greene Street
GRECC (BT/18/GR)
Baltimore, MD 21201-1524
(Phone) 410-605-7119
(Fax) 410-605-7913 (Please call phone number above prior to faxing)

 


>>> paaveen jeyaganth <[log in to unmask]> 11/15/16 11:49 AM >>>

Dear Allstat members,

I have a question regarding combining odds ratios. When I combine two significant studies, the combined overall effect becomes non-significant. What is the reason for that, and how can I explain it to a non-statistician?

 

                                     OR (95% CI)
Study 1 (septal wall thickness)      1.18 (1.10, 1.27)   significant
Study 2 (septal wall thickness)      0.52 (0.32, 0.84)   significant

Combined overall                     0.81 (0.37, 1.81)   not significant

 

Thanks 

Paaveen

You may leave the list at any time by sending the command

SIGNOFF allstat

to [log in to unmask], leaving the subject line blank.