Hi Teresa
I happen to be going through the literature on treatments for chronic
prostatitis. A number of studies make the basic error of comparing the
outcomes after treatment, instead of comparing changes in outcomes.
This is quite a common error; tabulating results at baseline and at followup
leads the eye to make the visual comparison between groups at followup (and
to compare p-values for changes within each group - another temptation that
authors sometimes cannot resist).
Tabulation ideally should include the difference between changes in outcomes
(with p value and confidence interval).
Table 1 in the appended reference might be useful for a teaching exercise in
what comparisons are most informative, and why others can be misleading.
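As a small illustration of the point (with made-up numbers, not data from the
appended study or any other), comparing follow-up scores alone can even point
the wrong way when the groups differ at baseline, whereas the difference
between changes, with its confidence interval, shows the treatment effect:

```python
# Sketch only: hypothetical symptom scores (lower = better), invented for
# illustration - NOT taken from any published trial.
from statistics import mean, stdev
from math import sqrt

treatment_baseline = [20, 22, 19, 24, 21, 23]
treatment_followup = [13, 13, 11, 17, 12, 15]
placebo_baseline = [15, 16, 14, 17, 15, 16]
placebo_followup = [13, 12, 11, 15, 12, 12]

def changes(baseline, followup):
    # Within-patient change from baseline to followup.
    return [f - b for b, f in zip(baseline, followup)]

t_change = changes(treatment_baseline, treatment_followup)
p_change = changes(placebo_baseline, placebo_followup)

# Misleading comparison: followup means alone make the treatment group look
# worse (higher score), because it started from a worse baseline.
flawed_diff = mean(treatment_followup) - mean(placebo_followup)

# Informative comparison: difference between mean changes, with a rough
# 95% CI from the pooled standard error (normal approximation).
diff = mean(t_change) - mean(p_change)
se = sqrt(stdev(t_change) ** 2 / len(t_change)
          + stdev(p_change) ** 2 / len(p_change))
ci = (diff - 1.96 * se, diff + 1.96 * se)

print(f"Difference in followup means: {flawed_diff:+.1f}")
print(f"Difference between changes: {diff:+.1f}, "
      f"95% CI {ci[0]:.1f} to {ci[1]:.1f}")
```

In these invented numbers the treatment group improves by 8 points and the
placebo group by 3, yet a glance at the followup column alone would suggest
the treatment group did worse.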
I would summarize such reports as providing unclear evidence of effectiveness.
Michael
Clinical Knowledge Author, Guideline Developer and Informatician
Clinical Knowledge Summaries Service www.cks.library.nhs.uk
----- reference------
Evliyaoğlu Y, Burgut R.
Lower urinary tract symptoms, pain and quality of life assessment in chronic
non-bacterial prostatitis patients treated with alpha-blocking agent doxazosin;
versus placebo.
Int Urol Nephrol. 2002;34(3):351-6
On Tue, 14 Oct 2008 14:28:39 -0700, Benson, Teresa
<[log in to unmask]> wrote:
>My apologies, I did not intend to imply that all studies have
>statistical errors-- in fact, I assume the vast majority do not. I
>should also have clarified that this is but one segment of a larger EBM
>training that has been going on for almost three years-- and for these
>three years, we have completely avoided statistical topics because of
>the nurses' discomfort. Instead, we've told them, "When you get to the
>article's statistical methodology, just trust that they've done it
>correctly." However, I think it's time to get past this avoidance.
>After three years of trainings from external EBM experts, refresher
>trainings, and periodically reviewing articles in teams, the nurses are
>pretty good at comparing a study's stated a priori objectives to the
>results to look for post-hoc "fishing," as well as spotting threats to
>internal validity-- randomization, blinding, intervention/performance bias,
>confounders, choice of outcome measure, attrition (including
>questionable assumptions for ITT analysis), etc. I definitely wouldn't
>start on statistical topics without first getting them comfortable with
>issues of external validity and clinical relevance, internal validity,
>and some basics about stated results: p-values, confidence intervals,
>power, and measures of association & outcome (ARR, relative risk, RRR,
>NNT, odds ratios, sensitivity/specificity, likelihood ratios, predictive
>value, etc.) Now that our staff has demonstrated comfort with these
>concepts for a couple of years, appraising a study's statistical
>methodology is our next logical step.
>If you have any further thoughts, I'm all ears. Thanks again,
>Teresa Benson
>