> >Allow me to add a little more salt to Prof Senn's comments. To my
> >understanding, relative risk (RR) and odds ratio (OR) are different.
> >The latter is only an approximation to the former when the event
> >rate is low (Altman DG. BMJ 1998;317:1318).
>
> True, but you misunderstand the implication of this. I could just as
> well say that the median is only an approximation to the mean if the
> data are not too skewed. But this is not a reason for saying that
> you should only ever use the median if the data are not skewed. You
> could (and often would) argue the reverse: you should not use the
> mean if the data are very skewed. The correct way to describe the
> relationship is to say that the relative risk is only an acceptable
> approximation to the odds ratio if the event rate is small.
Chicken or egg? Risk or odds? Clinically, we talk about risk and
relative risk because they tell us the exact event probability and
the relative probability. Odds, however, cannot represent such a
probability directly, and only approximate it when the event is rare.
Theoretically, we may argue that one is an approximation to the other
or vice versa, but the argument makes no sense if we do not know what
we are trying to present.
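To make the risk/odds distinction concrete, here is a minimal sketch
(the numbers are illustrative, not taken from any study) showing that
odds and risk nearly coincide only when the event is rare, which is
exactly the regime in which OR and RR approximate each other:

```python
# Convert an event probability (risk) to odds, and compare the
# RR and OR computed from the same pair of risks.
# All numbers below are illustrative, not from any study.

def odds(risk):
    """Odds corresponding to an event probability."""
    return risk / (1 - risk)

# Rare event: risks of 1% vs 2% -> RR and OR almost identical.
rr_rare = 0.02 / 0.01              # 2.0
or_rare = odds(0.02) / odds(0.01)  # ~2.02

# Common event: risks of 60% vs 84% -> OR far from RR.
rr_common = 0.84 / 0.60              # ~1.4
or_common = odds(0.84) / odds(0.60)  # ~3.5

print(round(rr_rare, 2), round(or_rare, 2))
print(round(rr_common, 2), round(or_common, 2))
```

Whether one calls this "RR approximates OR" or "OR approximates RR"
is, numerically, the same statement about rare events.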
>
> >We use the OR in case-control
> >studies because they cannot produce an event rate. In addition,
> >case-control studies are normally undertaken for conditions with
> >low event rates, to save money and to avoid long follow-up (if
> >the event is rare, say 1/10,000, it is difficult to observe it
> >prospectively within a short period). However, many meta-analyses
> >based on prospective studies such as RCTs with common event
> >outcomes, for example of the efficacy of drug therapy, have used
> >the OR instead of the RR.
>
> True. And with good reason. This is because NO random sampling is
> involved in clinical trials. (Randomisation is involved, but that is
> about making sure that patients are comparable between groups, not
> representative of the population.) As such the base rate in the
> population is NOT estimable. From this point of view exactly the
> same problem arises as with case-control studies. It is only by
> falsely treating the trial as representative that an "estimate" is
> produced. Clinical trials, as is the case with all experiments, are
> about comparisons: what are needed are reliable comparative
> measures. The odds ratio fits the bill.
I reckon that random sampling and choosing an appropriate outcome
measure are two different matters. We cannot say that, because the OR
(or lnOR) is more likely to be symmetrical, we should always use the
OR regardless of study design and of what we are trying to measure.
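For what it is worth, there is one concrete sense in which the OR is
symmetrical and the RR is not: redefining the event as its complement
(e.g. "no relief" instead of "relief") exactly inverts the OR, but
not the RR. A minimal sketch, with illustrative counts of my own:

```python
# Illustrative 2x2 counts: 30/100 events on treatment, 50/100 on control.
treated_event, treated_none = 30, 70
control_event, control_none = 50, 50

# OR for the event vs. OR for its complement: exact inverses.
or_event = (treated_event / treated_none) / (control_event / control_none)
or_none = (treated_none / treated_event) / (control_none / control_event)
print(or_event * or_none)  # ~1.0: the ORs invert exactly

# RR for the event vs. RR for its complement: NOT inverses.
rr_event = (treated_event / 100) / (control_event / 100)  # 0.6
rr_none = (treated_none / 100) / (control_none / 100)     # 1.4, not 1/0.6
print(rr_event, rr_none)
```

This symmetry is often cited in the OR's favour, but, as argued above,
it does not by itself settle which measure to report.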
> >OR has some advantages but again they should not be overused.
> >
> >1. It can always take values between zero and infinity, which is
> >not the case for the RR. For example, if the baseline risk is
> >greater than 50%, it is impossible to double it with the RR but
> >possible with the OR. This gives the OR a mathematical advantage
> >in a variety of conditions;
> Exactly
This mathematical advantage may explain why the OR is so popular. It
can be used in case-control studies, cohort studies and even clinical
trials to estimate the relative risk. However, that does not mean the
OR is the best choice for everyone, particularly for cohort studies
and clinical trials, where the RR can be calculated directly. Nor is
it a reason to refuse the RR in cohort studies or clinical trials
with a common event rate. For example, suppose the treatment group
had 84% (84/100) pain relief compared with 60% (60/100) in the
placebo group. Given these risks, RR = 1.4 while OR = 3.5.
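The arithmetic in this example can be checked directly from the 2x2
counts; a minimal sketch (the variable names are my own):

```python
# Worked example from the text: 84/100 pain relief on treatment,
# 60/100 on placebo.
treated_events, treated_total = 84, 100
placebo_events, placebo_total = 60, 100

risk_treated = treated_events / treated_total  # 0.84
risk_placebo = placebo_events / placebo_total  # 0.60

# Relative risk: ratio of the two event probabilities.
rr = risk_treated / risk_placebo  # 1.4

# Odds ratio: ratio of the two odds (events / non-events).
odds_treated = treated_events / (treated_total - treated_events)  # 84/16
odds_placebo = placebo_events / (placebo_total - placebo_events)  # 60/40
or_ = odds_treated / odds_placebo  # 3.5

print(round(rr, 2), round(or_, 2))  # 1.4 3.5
```

With an event rate this common, the OR (3.5) is two and a half times
the RR (1.4), which is exactly the divergence at issue here.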
>
> >
> >
> >2. In addition, existing multiple regression methods such as
> >logistic regression models, used to analyse the association
> >between event rate and risk factors, actually work in terms of
> >odds and report effects as odds ratios.
> >
> >Except for these cases, the RR should be the better choice,
> >particularly for RCTs with common-event outcomes.
> This is just crazy. Your arguments should have led you to the
> opposite conclusion. The only circumstance under which the RR is
> acceptable is when the background risk is small, and then as an
> approximation to the OR.
I disagree. What we are saying here is that the OR is suitable for
case-control studies, their systematic reviews, and studies analysed
by logistic regression. For cohort studies, clinical trials and their
systematic reviews, I would suggest using the RR unless there is a
convincing argument otherwise.
I may agree with you that "the RR is acceptable when the background
risk is small". However, as the example above shows, that is no
reason to avoid the RR in clinical trials with a common event rate.
On the contrary, when the background risk is high or common, it is
the OR that is more likely to mislead if it is used to present
relative risk in cohort studies or clinical trials.
>
> Stephen
> --
Weiya Zhang
**************************************************
Dr. W Y Zhang
Centre for Evidence-Based Pharmacotherapy
Department of Pharmaceutical Sciences
Aston University
Aston Triangle
Birmingham B4 7ET
UK
Tel: +44 (0)121 359 3611 x5535
Fax: +44 (0)121 359 0733
Email: [log in to unmask]
http://www.aston.ac.uk/pharmacy/cebp/
**************************************************