I enjoyed Andrew Booth's summary. I'd like to expand a bit on one topic.

>What if it is a tricky statistical point? 
>There is no need to necessarily handle this any different 
>from any other learning need. Again if it severely 
>compromises the understanding of the paper then the 
>answer will have to be found before proceeding with the 
>article. However if it is just a question of an 
>unfamiliar method or technique then ask the group to take 
>that on trust for the moment and to check this later 
>after the session - point out that the statistics is one 
>of the easiest sections of the paper to validate, 
>(compare inadequate description of randomisation, for 
>example), and that if an inappropriate method has been 
>used it will often be picked up by statistical reviewers. 
>Two useful resources to carry with you are Last's 
>Dictionary of Epidemiology and Greenhalgh's How to read a 
>paper (especially p.73).

I have a fictional story that I tell people. It's about someone who comes to
my office and says he has trouble understanding a recently published paper.
I look at the title, "In vitro and in vivo assessment of Endothelin as a
biomarker of iatrogenically induced alveolar hypoxia in neonates," and say,
"I understand why you would have trouble with a paper like this." "Yeah," he
says in return, "I don't understand what this boxplot is."

Dr. Booth makes an excellent point that the statistical methods are usually
easy to validate. What I stress is that you should focus not on how the data
were analyzed, but on how the data were collected. The four big issues in
data collection are randomization, blinding, exclusions/dropouts, and
protocol deviations. You don't need a Ph.D. in Statistics to assess these
issues.

Also keep in mind that some of the statistical details are there only for
the benefit of those who want to reproduce the research. Most of the medical
professionals I know are smart enough to skim over phrases like "reverse
phase ion chromatography," so they should likewise skim over phrases like
"bootstrap confidence intervals using bias-corrected percentiles (Efron
1982)."

When a statistical method is followed by a reference, as in the example
above, you can take some solace in the fact that the authors do not expect
you to be familiar with the method.

Also, focus on whether you understand how to interpret the results. A
bootstrap is a computer-intensive method for creating confidence intervals
that do not depend on assumptions like normality. Once you appreciate this
fact, you don't have to worry as much. You already know how to interpret
confidence intervals; the bootstrap is just another tool that produces
them.
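If you want to see how little machinery is involved, here is a minimal
sketch of a percentile bootstrap in Python. The function name, the made-up
data, and the choice of the plain percentile method (rather than the
bias-corrected variant Efron describes) are all mine, not anything from a
particular paper:

    import random
    import statistics

    def bootstrap_ci(data, stat=statistics.mean, n_boot=10000, alpha=0.05):
        # Resample the data with replacement many times, recompute the
        # statistic each time, and take percentiles of the results.
        boots = sorted(
            stat([random.choice(data) for _ in data]) for _ in range(n_boot)
        )
        lo = boots[int(n_boot * alpha / 2)]
        hi = boots[int(n_boot * (1 - alpha / 2)) - 1]
        return lo, hi

    # Made-up skewed sample; a normal-theory interval would be suspect here.
    data = [1.2, 1.5, 1.7, 2.1, 2.4, 3.0, 3.3, 4.8, 7.9, 12.4]
    print(bootstrap_ci(data))  # prints something like (2.2, 6.4)

Notice that nothing in the output needs a new interpretation: it is just a
confidence interval, read the same way as any other.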

You do have to know some statistical terminology, of course. Anyone reading
research papers should be familiar with Type I and II errors, odds ratios,
survival curves, etc. A basic appreciation of simple statistical methods is
enough for nine out of ten papers.
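As a small illustration of what "basic appreciation" means, here is an odds
ratio computed from a 2x2 table. All of the numbers are hypothetical:

    # Hypothetical 2x2 table: rows are exposed/unexposed,
    # columns are outcome/no outcome.
    a, b = 20, 80   # exposed: 20 with the outcome, 80 without
    c, d = 10, 90   # unexposed: 10 with the outcome, 90 without

    odds_ratio = (a / b) / (c / d)
    print(odds_ratio)  # (20/80) / (10/90) = 2.25

If you can read that and say "the odds of the outcome are about twice as
large in the exposed group," you are ready for most papers.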

I don't want to discourage people from learning more about Statistics, of
course, but neither do I want people to be intimidated by statistical
jargon. I have thought that a fun prank would be to go to a poster session
at a medical conference, look over each poster carefully and then say
something like "Very interesting, but aren't you worried that your results
would be invalidated by the presence of heteroscedasticity?" And then I would
slowly walk away.

Steve Simon, [log in to unmask], Standard Disclaimer.
STATS - Steve's Attempt to Teach Statistics: http://www.cmh.edu/stats

