Allow me to add a pinch of salt to Prof Senn's comments. To my
understanding, relative risk (RR) and odds ratio (OR) are different
measures. The latter is only an approximation to the former when the
event rate is low (Altman DG. BMJ 1998;317:1318). We use the OR in
case-control studies because such studies cannot produce event rates.
In addition, case-control studies are normally undertaken in
conditions with a low event rate, to save budget and to avoid long
follow-up (if the event is rare, say 1/10,000, it is difficult to
observe it prospectively within a short time period). However, many
meta-analyses based on prospective studies, such as RCTs with common
event outcomes (for example, trials of drug efficacy), have used the
OR instead of the RR.
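To make the point concrete, here is a minimal sketch with hypothetical 2x2 counts (the numbers are invented for illustration): the OR tracks the RR closely when events are rare, but overstates it when events are common.

```python
def rr_and_or(a, b, c, d):
    """a/b = events/non-events in the exposed group;
    c/d = events/non-events in the unexposed group."""
    rr = (a / (a + b)) / (c / (c + d))       # ratio of risks
    odds_ratio = (a / b) / (c / d)           # ratio of odds
    return rr, odds_ratio

# Rare events (about 1% risk): OR and RR nearly coincide.
print(rr_and_or(20, 1980, 10, 1990))    # RR = 2.0, OR ~ 2.01

# Common events (50-75% risk): OR greatly overstates the RR.
print(rr_and_or(750, 250, 500, 500))    # RR = 1.5, OR = 3.0
```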
The OR has some advantages, but again it should not be overused.
1. It can always take values between zero and infinity, which is not
the case for the RR. For example, if the baseline risk is greater than
50%, it is impossible to double the risk (RR), but it is always
possible to double the odds (OR). This gives the OR a mathematical
advantage in a variety of conditions;
2. In addition, existing multiple regression methods, such as logistic
regression models for analysing the association between an event rate
and risk factors, actually work in terms of odds and report effects as
odds ratios.
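Point 1 above can be sketched numerically (the baseline risk of 0.6 is an invented example): applying RR = 2 to a baseline risk above 50% gives an impossible probability, whereas applying OR = 2 always yields a valid one.

```python
def risk_from_or(p0, odds_ratio):
    """Risk in the exposed group implied by applying an odds ratio
    to a baseline risk p0 (invert odds back to a probability)."""
    odds = odds_ratio * p0 / (1 - p0)
    return odds / (1 + odds)

p0 = 0.6
print(p0 * 2)               # RR = 2 implies risk 1.2: not a valid probability
print(risk_from_or(p0, 2))  # OR = 2 implies risk 0.75: valid
```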
These cases aside, the RR should be the better choice, particularly
for RCTs with common event outcomes.
Weiya
> My view is quite clear: NNTs are quite unsuitable for reporting the
> results of trials and (especially) meta-analyses. Relative risks are
> acceptable if the background risk is low. However the measure of
> choice is the odds ratio and analysis should proceed on the log-
> odds ratio scale unless evidence is produced that this scale is not
> additive. (This is the default form in generalised linear models for
> binary data but others can be considered.)
>
> There is a general confusion between the two phases of modelling
> an effect and applying the results. Modelling should follow the
> science. If we are going to pool results between different trials and
> apply results studied in one population to another then we need
> results that are as nearly constant from one to the other as is
> possible. (Additive to use the statistician's jargon.) When we finally
> come to make a clinical decision based on these results, then and
> only then has the time arrived to translate the finding into clinically
> relevant measures using as further inputs whatever is known about
> the clinical state of the patient.
>
> John Nelder some years ago pointed out a similar confusion in
> the field of quality control where workers were using the measure of
> final interest to dictate the form of analysis rather than using
> additive models and translating the results at the point of
> application.
>
> In short our motto should be "additive at the point of analysis,
> relevant at the point of application".
>
> Regards
>
> Stephen Senn
> --------------------------------------------------
> Professor Stephen Senn
> Department of Statistical Science &
> Department of Epidemiology and Public Health
>
> University College London
> Room 316, 1-19 Torrington Place
> LONDON WC1E 6BT
>
> Tel: +44 (0) 171 391 1698
> Fax: +44 (0) 171 813 0280
> Email: [log in to unmask]
> webpage: http://www.ucl.ac.uk/~ucaksjs/
> -------------------------------------------------
**************************************************
Dr. W Y Zhang
Centre for Evidence-Based Pharmacotherapy
Department of Pharmaceutical Sciences
Aston University
Aston Triangle
Birmingham B4 7ET
UK
Tel: +44 (0)121 359 3611 x5535
Fax: +44 (0)121 359 0733
Email: [log in to unmask]
http://www.aston.ac.uk/pharmacy/cebp/
**************************************************