I love this discussion!
There's a resident in our program who routinely challenges me on exactly
this point: that it's funny we try to be so statistic-and-number-oriented
with likelihood ratios when, at base, we "guess" at a pre-test
probability. Yes, our estimates of pre-test probability can take any form
from a shot in the dark to a prevalence study in our own population, but
the argument that we can develop a reasonable pre-test probability from
"clinical experience" rests on the same logical fallacy physicians commit
when they treat patients with a particular drug because they remember it
working: we cannot reliably recall how often a drug worked in our
population. Likewise, though we may have a perception of a condition's
prevalence, how many people went undiagnosed by us in the past, and
therefore would be missing from our estimate of prevalence?
The nearest thing to a semi-scientific answer I've seen is something
written up once in the Family Medicine literature called the Family
Practice Incidence Rate (though I'm sure other specialties could use
it...(grin)). It is simply a literature-derived incidence rate scaled to
the size of a physician's patient panel [if condition x has a prevalence
of 1% in the general population, then without any other knowledge of my
panel of 3000 patients, I would expect to see 30 cases of condition x].
Obviously the technique is fraught with overgeneralization, but the more
you know about your population, the more accurate it can be.
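The panel arithmetic above is simple proportional scaling; a minimal
sketch (the function name is my own illustration, not something defined
in the Family Medicine literature):

```python
def expected_cases(prevalence, panel_size):
    """Scale a literature-derived prevalence to a physician's patient panel."""
    return prevalence * panel_size

# Condition x with 1% prevalence, in a panel of 3000 patients:
print(expected_cases(0.01, 3000))  # → 30.0
```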
I think a non-epidemiologic approach would be to decide on a set of
pre-test probabilities that define really unlikely (say 1%), unlikely
(10%), maybe-maybe not (50%), probably (90%), and definitely (99%), then
apply the likelihood ratios - rounded off to the nearest whole number to
satisfy the statistical purists - and see what you get!
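The anchor-then-apply-LR recipe above can be sketched in a few lines; the
anchor labels and values are the ones I listed, while the helper function
itself is just an illustration using the standard odds conversion
(post-test odds = pre-test odds x LR):

```python
# Rough pre-test probability "anchors", as suggested above.
PRE_TEST_ANCHORS = {
    "really unlikely": 0.01,
    "unlikely": 0.10,
    "maybe-maybe not": 0.50,
    "probably": 0.90,
    "definitely": 0.99,
}

def apply_lr(label, likelihood_ratio):
    """Apply a likelihood ratio to a rough pre-test anchor (odds form)."""
    p = PRE_TEST_ANCHORS[label]
    post_odds = p / (1 - p) * likelihood_ratio
    return post_odds / (1 + post_odds)

# An "unlikely" condition with a positive test whose LR is 10:
print(round(apply_lr("unlikely", 10) * 100))  # → 53 (more likely, hardly proven)
```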
When I teach appraisal of diagnostic tests to medical students in our
6-week FP clerkship, we go over LRs, but we fall back on this basic
approach: decide on a rough pre-test probability and understand that an
LR can only change the probability of disease; it cannot (usually)
definitively diagnose a condition. That seems to be enough for them to
chew on...
For what it's worth,
John
John Epling, MD
Naval Hospital Jacksonville
Family Practice Residency Program
> -----Original Message-----
> From: [log in to unmask]
> [mailto:[log in to unmask]]On Behalf Of Simon,
> Steve, PhD
> Sent: Thursday, April 22, 1999 7:43 PM
> To: 'Klazien Matter-Walstra'; EBH Discussion list
> Subject: RE: Pre-test probabilities
>
>
> Klazien Matter-Walstra writes:
>
> >The foundation Paracelsus today organises many courses
> >on EBM and general practice medicine.
> >
> >One of the subjects discussed in the courses is how
> >pre-test and positive or negative post-test
> >probabilities (positive predictive value, negative
> >predictive value) are related. Although these
> >relations themselves are easily understood, we find
> >that physicians have trouble appraising pre-test
> >probabilities. Because they realise that appraising
> >pre-test probabilities is mostly intuitive and
> >influenced by experience, they argue that it is still
> >better to do a test than to refrain from it, since
> >they can't know the pre-test probability for certain.
> >
> >My question is whether there is literature on
> >(estimates of) pre-test probabilities for certain diseases.
>
> I'm very interested in what others say about this. My understanding is
> that you look at the prevalence of the disease and adjust it upwards or
> downwards based on special characteristics of the patients you see and
> perhaps on information the patient tells you. I'm not a doctor, but I
> suspect that a good doctor has to have some idea of prevalence in order
> to make even non-quantitative assessments of their patients. They also
> have to know whether the chances of a disease change when a patient is a
> two-pack-a-day smoker or had a heart attack five years ago.
>
> I hope you stress that the doctors don't have to specify pre-test
> probabilities to three significant figures.
>
> I understand people's reluctance to attach numbers to these things, but
> surely they can provide an upper and lower bound. This can be as wide as
> they like. If the pre-test probability is anywhere from 3% to 30% and
> the likelihood ratio for a positive result is 2.0, then the post-test
> probability is between 6% and 46%. If the likelihood ratio is 10.0, then
> the post-test probability is between 24% and 81%. If the likelihood
> ratio is 50.0, then the post-test probability is between 61% and 96%.
>
> In each one of these cases, knowing the range of post-test
> probabilities is still better than not doing the calculation at all. If
> you decide to treat when the post-test probability is greater than 50%,
> then the three decisions would be "don't treat", "order additional
> tests", and "treat".
>
> If you order a test, and you don't have a good idea about the
> probability of disease after ordering the test, you haven't made good
> use of the test.
>
> If your doctors are still reluctant to use pre-test probabilities,
> perhaps it would help to present them with a test that has three or
> four possible results. If they don't know immediately what to do on the
> basis of an intermediate result, you can show them how assessing
> pre-test probabilities can answer that question for them.
>
> Another situation that is ambiguous is a positive test result during the
> "off-season". Can we still rely on a positive test when we know that the
> condition is very rare in the summer? How rare does the condition have to
> get in order to make the test a waste of time?
>
> Another important situation is how to assess these tests when you are a
> specialist who only sees cases that are referred to you by others. The
> patients that a specialist sees are far more likely to have a serious
> condition. How does this change how you view test results?
>
> My background is in Statistics rather than in Medicine, so my comments may
> be naive. I'd appreciate any clarifications from others on this list.
>
> Steve Simon, [log in to unmask], Standard Disclaimer.
> STATS - Steve's Attempt to Teach Statistics: http://www.cmh.edu/stats
>
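Steve's worked ranges in the message above can be reproduced with the
odds form of Bayes' theorem (post-test odds = pre-test odds x likelihood
ratio); a quick sketch, assuming nothing beyond that identity:

```python
def post_test_probability(pre_test_prob, likelihood_ratio):
    """Convert a pre-test probability to a post-test probability for a given LR."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Pre-test probability anywhere from 3% to 30%, as in the example above:
for lr in (2.0, 10.0, 50.0):
    low = post_test_probability(0.03, lr)
    high = post_test_probability(0.30, lr)
    print(f"LR {lr:>4}: post-test probability {low:.0%} to {high:.0%}")
# LR  2.0: post-test probability 6% to 46%
# LR 10.0: post-test probability 24% to 81%
# LR 50.0: post-test probability 61% to 96%
```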