As I understand it, RR can only be calculated when the baseline risk
(population prevalence) of the measured outcome is known. In many cases it
is not — in case-control studies, for instance — so the OR is properly used
to approximate the RR when the outcome is rare. As events become more
common, the OR no longer accurately approximates the RR; in these cases,
the OR will overestimate the benefits and harms of treatment relative to
the RR.
See:
Sackett DL. Down with odds ratios! Evidence-Based Medicine. 1996
Sep/Oct;1 (obtained from the ACP website, so I don't have page numbers).
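A quick numerical sketch makes the point. The figures below are
illustrative only (not from any study): with a rare outcome the OR sits
close to the RR, but with a common outcome the same RR of 2.0 shows up as
an OR of 3.5.

```python
def risk_ratio(p_treat, p_ctrl):
    """RR: probability of the outcome under treatment vs control."""
    return p_treat / p_ctrl

def odds_ratio(p_treat, p_ctrl):
    """OR: odds of the outcome under treatment vs control."""
    return (p_treat / (1 - p_treat)) / (p_ctrl / (1 - p_ctrl))

# Rare outcome (2% vs 1%): OR is a close approximation to RR.
print(risk_ratio(0.02, 0.01))   # RR = 2.0
print(odds_ratio(0.02, 0.01))   # OR ~ 2.02

# Common outcome (60% vs 30%): OR overstates the RR.
print(risk_ratio(0.6, 0.3))     # RR = 2.0
print(odds_ratio(0.6, 0.3))     # OR = 3.5
```

Both pairs of risks double, yet the OR drifts upward as the outcome
becomes common — which is exactly Sackett's complaint.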
-----Original Message-----
From: Andrew Jull <[log in to unmask]>
To: [log in to unmask]
<[log in to unmask]>
Date: Wednesday, February 24, 1999 11:41 AM
Subject: Odds Ratios vs Relative Risk
>Dear All
>
>I was recently conversing with a colleague and the question of why use ORs
>instead of RRs came up. My naive response was that the choice seemed to be
>based on the individual's preference and that I had not read anything that
>suggested the use of one was more informative than another (and indeed have
>read some material that suggests ORs are misleading when the OR is high -
>but I don't want to get onto that issue).
>
>Can anyone help me with why odds ratios might be used in preference to
>relative risk, or vice versa?
>
>regards
>Andrew Jull
>Clinical Nurse Consultant
>Auckland Hospital
>NEW ZEALAND
>