> My rudimentary stats tells me that PPV is closely related to specificity,
> and NPV is closely related to sensitivity, so is it really correct for
> you to say sens and spec are useless at interpreting results of a test?
> In other words, a highly sensitive test (with few false negatives) will
> also produce a high NPV, while a highly specific test (with few false
> positives) will also produce a high PPV, won't it? Am I missing something
> here?
While I am (very definitely) not a statistician, hopefully I can put this
sensibly.
Sensitivity and specificity are technical parameters of a test, in
isolation from any particular population. If you feed it a definite
positive (sens) or a definite negative (spec), they indicate the
probability of it giving you the correct answer.
PPV/NPV relate to the application of the test to a particular population,
and depend on the prevalence of the condition as well as the test
parameters. Consider a test with a very high sens/spec (but still less than
100%), and use it on a large population entirely free from the disease in
question. There will be a few false positives, no true positives, and the
PPV will be 0%. As the condition becomes more prevalent in the population,
the number of true positives increases while the number of false positives
decreases, and the PPV rises - despite the test being unchanged. A similar
(reverse) trend
happens for NPV - the less prevalent the condition, the higher the NPV for
a given test.
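The prevalence effect above can be sketched numerically. This is a minimal illustration only; the sensitivity/specificity of 0.99 are invented figures, and the formulas are the standard Bayes-style rearrangements of the definitions below:

```python
# PPV/NPV as a function of prevalence, for a fixed test.
# Illustrative (made-up) figures: sensitivity = specificity = 0.99.

def ppv(sens, spec, prev):
    """Probability that a positive result is a true positive."""
    true_pos = sens * prev
    false_pos = (1 - spec) * (1 - prev)
    total = true_pos + false_pos
    return true_pos / total if total else 0.0

def npv(sens, spec, prev):
    """Probability that a negative result is a true negative."""
    true_neg = spec * (1 - prev)
    false_neg = (1 - sens) * prev
    total = true_neg + false_neg
    return true_neg / total if total else 0.0

for prev in (0.0, 0.001, 0.01, 0.5):
    print(f"prevalence={prev:6.3f}  "
          f"PPV={ppv(0.99, 0.99, prev):.3f}  "
          f"NPV={npv(0.99, 0.99, prev):.3f}")
```

Note that at zero prevalence the PPV is 0% despite the excellent test, and at 1% prevalence it is still only 50% - the test itself never changed.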
I have no idea how this will survive e-mail formatting, but worth a try:
                 Population
TEST          With disease    Without disease
  Positive         a                 b
  Negative         c                 d
Sens = a/(a+c)
Spec = d/(b+d)
PPV = a/(a+b)
NPV = d/(c+d)
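The four formulas can be checked directly from the 2x2 cell counts; the counts used here (90/20/10/80) are invented purely for illustration:

```python
# 2x2 table cell counts (made-up numbers):
#   a = true positives,  b = false positives,
#   c = false negatives, d = true negatives.
a, b, c, d = 90, 20, 10, 80

sens = a / (a + c)   # fraction correct among those WITH the disease
spec = d / (b + d)   # fraction correct among those WITHOUT the disease
ppv  = a / (a + b)   # fraction of positive results that are true
npv  = d / (c + d)   # fraction of negative results that are true

print(f"sens={sens:.2f} spec={spec:.2f} PPV={ppv:.2f} NPV={npv:.2f}")
```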
Regards
Michael
--
Medical Student