Hi Michael
Great - thanks for taking this on. The tradeoff between clarity and
accuracy is a tricky one.
The spectrum of statements might range along the lines of:
A. Don't use useless tests
B. Don't use a test if the post-test probabilities are about the same as
the pre-test probabilities
(Or: Don't use a test if the Positive Predictive Value is equal or close
to 1 - Negative Predictive Value.)
C. Don't use a test if sensitivity + specificity is not much more than 1
(ie, the same as a coin toss)
D. Before doing a test, ask what you would do if the test were positive?
Negative? If the answer is the same, don't do the test.
E. Don't use a test if none of the potential post-test probabilities
can shift you from current management plans.*
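To make B and C concrete, here is a minimal sketch (in Python, with entirely hypothetical numbers) of how the post-test probabilities follow from prevalence, sensitivity, and specificity via Bayes' rule. A test with sensitivity + specificity = 1 leaves every post-test probability equal to the pre-test probability, which is why statements B and C coincide:

```python
def post_test_probs(prevalence, sensitivity, specificity):
    """Return (P(disease | positive test), P(disease | negative test))."""
    # Bayes' rule: P(D | +) = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    p_pos = (sensitivity * prevalence) / (
        sensitivity * prevalence + (1 - specificity) * (1 - prevalence))
    # P(D | -) = (1 - sens) * prev / ((1 - sens) * prev + spec * (1 - prev))
    p_neg = ((1 - sensitivity) * prevalence) / (
        (1 - sensitivity) * prevalence + specificity * (1 - prevalence))
    return p_pos, p_neg

# A "coin toss" test (sensitivity + specificity = 1, hypothetical numbers):
# both post-test probabilities stay at the 20% pre-test probability,
# so the test is useless (statements B and C).
print(post_test_probs(0.20, 0.5, 0.5))

# An informative test shifts the probability in both directions:
# a positive raises it well above 20%, a negative pushes it well below.
print(post_test_probs(0.20, 0.95, 0.90))
```

The same function makes statement D checkable: if neither returned probability crosses an action threshold, the test cannot change management.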
I am sure there are others in between these, but where to draw the line
depends on the audience.
Good luck!
Paul
* Note: this last one incorporates the earlier ones, but also considers
the decisional context and allows for multi-outcome tests.
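The low-prevalence screening point raised in the thread below can be checked with the same arithmetic (again, purely hypothetical numbers): even a highly sensitive and specific test has a PPV well under 50% when prevalence is low, so false positives outnumber true positives, while the likelihood ratios remain properties of the test alone:

```python
def ppv(prevalence, sensitivity, specificity):
    """Positive predictive value: P(disease | positive test)."""
    tp = sensitivity * prevalence                # true positives per person tested
    fp = (1 - specificity) * (1 - prevalence)    # false positives per person tested
    return tp / (tp + fp)

def likelihood_ratios(sensitivity, specificity):
    """(LR+, LR-): LR+ = sens / (1 - spec), LR- = (1 - sens) / spec."""
    return sensitivity / (1 - specificity), (1 - sensitivity) / specificity

# Hypothetical screening test at 0.1% prevalence, sensitivity 0.99, specificity 0.95:
# PPV is only about 2% (roughly 50 false positives per true positive),
# yet LR+ of about 20 means a positive result still raises the odds twenty-fold.
print(ppv(0.001, 0.99, 0.95))
print(likelihood_ratios(0.99, 0.95))
```

A likelihood ratio of 1 (positive or negative) is the coin-toss case: the post-test odds equal the pre-test odds.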
On 1/12/2012 7:20 PM, Michael Power wrote:
> Hi Jenny
>
> Thanks for your very helpful comments.
>
> I have taken your and Paul's comments on board and will be revising my
> version of the "10 commandments for testing" in their light.
>
> The point that I wanted (and probably failed) to make in my response
> to Paul, is that researchers, systematic reviewers, guideline
> developers, and EBP specialists must be rigorous with the statistics.
> But, they should also help frontline healthcare professionals
> understand and apply diagnostic accuracy statistics.
>
> The "10 commandments for testing" is an attempt to meet this
> challenge. Their appropriate use I think is first as a starting point
> for teaching, then as a reminder checklist.
>
> I might use footnotes (or supplement it with a teacher's guide) to add
> detail left out in the compromise between brevity and attention
> grabbing on the one hand, and comprehensiveness and accuracy on the
> other.
>
> Best wishes
>
> Michael
>
> On 1/12/12, Jenny Doust <[log in to unmask]> wrote:
>> Hi Mike
>>
>> These are great and there are some good general principles here to help
>> people to make more sensible decisions about diagnostic testing. But it is
>> important that we get them right before we send them out further.
>>
>> I agree with Paul about the positive predictive value. To state the
>> principle in more everyday terms: the positive predictive value of a test
>> is the probability that a patient has the disease given that the test
>> result is positive. There are lots of tests that we use in clinical
>> practice, especially in a low prevalence setting such as general practice,
>> where the positive predictive value is much less than 50% but the test is
>> still clinically useful. Some examples would be:
>>
>> - The presence of neck stiffness or a petechial rash in a febrile
>> child. The positive predictive value may be only 10% (or whatever), but you
>> would still want to do the test because even at that probability it would
>> help you decide if you are going to give the penicillin injection or send
>> them home.
>>
>>      - All screening tests would have a PPV of < 50%, but the screening
>> test raises the probability of disease and would be the first of a sequence
>> of tests, each of which would increase the probability of disease until
>> there is sufficient confidence in the diagnosis to make a treatment
>> decision.
>>
>> - Tests which help you to decide if you are going to refer a
>> patient on to see a specialist (such as Paul's eg of d-dimer for PE). If
>> the test is positive, it doesn't raise the probability that the patient has the
>> disease by much, but if the test is negative, you can exclude the disease
>> and not refer.
>>
>> So the positive and negative predictive value will help you determine if a
>> test is worth doing depending on whether it moves you over the threshold for
>> the next action.
>>
>> I think what you mean about the test being equivalent to a coin toss can
>> be expressed either by the Youden index, as Paul has suggested, or
>> alternatively by the likelihood ratio. If the likelihood ratio is 1, then
>> the test is as good as a coin toss.
>>
>> Also, it is not important to know the positive and negative predictive
>> values quantitatively as you have suggested because the pre-test probability
>> may vary with all sorts of indefinable elements. It is good enough to know
>> that a test is good at ruling in or ruling out the disease. This is more
>> closely related to the likelihood ratio of the test.
>>
>> The rule about the low prevalence test should say that in a low prevalence
>> population, even a very sensitive test has a poor positive predictive value,
>> so there will always be many more false positives than true positives in a
>> low prevalence setting. However, it is still useful if it allows you to
>> determine the next action eg refer on or send home.
>>
>> There should also be a commandment that if there is a very high clinical
>> suspicion that a patient has a disease and a test comes back as negative,
>> don't always believe the test result.
>>
>> Hope this helps (and Happy New Year to you and Richard)
>> Jenny
--
Paul Glasziou
Bond University
Qld, Australia 4229