Responses from Atle Klovning, Dave Sachin, Diana Kornbrot, Victor Montori,
and Steve Simon to Andrew Jull's question have emphasized the clinician's
perspective (quite reasonably, given the nature of this discussion list).
However, consider that lab directors, epidemiologists, public health program
directors or others may need to select a diagnostic test for surveillance
programs. That is a different perspective, and one directly served by
sensitivity, specificity, and Bayes' Theorem.
Test "A" might be best for a surveillance program, but test "C" for a
screening program. The consequence of missed cases represents the worst
error possible on initial screening; conversely, diminishing a program's
credibility by sounding too many false alarms represents the worst error
consequence in surveillance programs.
For the initial tests in a screening program, the ability to rule out with
confidence is relatively more important than the ability to rule in with
confidence: specificity is less important than sensitivity. In a surveillance program,
just the opposite is true. My students hear that there are two views
through these 2x2 tables: clinicians tend to look across the table because
the question they face is whether a patient has a condition or not.
Clinicians need to know the probability with which a positive or negative
test indicates presence or absence of a condition in a given patient.
Epidemiologists, among others, tend to look down the table because the
question we face is test accuracy. We want to know the extent to which tests
will accurately label cases vs. noncases. Perhaps LR+ and LR- best serve
clinicians, while sensitivity and specificity (and Bayes' Theorem) best serve
epidemiologists?
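Those two views through the 2x2 table can be made concrete with a few lines
of arithmetic. The sketch below uses a hypothetical table (the counts are
invented purely for illustration, not taken from any real test): sensitivity
and specificity come from looking down the columns, LR+ and LR- follow from
those two numbers, and Bayes' Theorem converts them into the across-the-table
probability a clinician wants.

```python
# Hypothetical 2x2 table (counts invented for illustration):
#                 Disease+   Disease-
#   Test+            90         30      (a, b)
#   Test-            10        870      (c, d)
a, b, c, d = 90, 30, 10, 870

# "Looking down" the table (the epidemiologist's question):
# how accurately does the test label cases vs. noncases?
sensitivity = a / (a + c)   # P(Test+ | Disease+)
specificity = d / (b + d)   # P(Test- | Disease-)

# Likelihood ratios, derived from the same two quantities.
lr_pos = sensitivity / (1 - specificity)
lr_neg = (1 - sensitivity) / specificity

# "Looking across" the table via Bayes' Theorem (the clinician's
# question): probability of disease given a positive test, at a
# given prevalence (here, the prevalence implied by the table).
prevalence = (a + c) / (a + b + c + d)
post_pos = (sensitivity * prevalence) / (
    sensitivity * prevalence + (1 - specificity) * (1 - prevalence))

print(f"Se={sensitivity:.2f}  Sp={specificity:.3f}  "
      f"LR+={lr_pos:.1f}  LR-={lr_neg:.2f}  P(D|T+)={post_pos:.2f}")
```

Note that at the table's own prevalence, the Bayes result equals the
positive predictive value read straight across the top row (90/120 = 0.75);
the payoff of the LR/Bayes route is that the same sensitivity and
specificity can be carried to a population with a different prevalence.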
David Birnbaum, PhD, MPH
Clinical Assistant Professor
Dept. of Health Care & Epidemiology
University of British Columbia, Canada