This thread fissioned into three subthreads and then fizzled out without
any of the ideas raised being resolved. As I am partly responsible for
the fissioning, I would like to tidy up so that we can either close the
thread(s) or restart them with appropriate titles.
Thread 1. Mistrust in questionable statistics - use the REI (raised
eyebrow index) to indicate level of scepticism.
Brian Alper started this thread by raising the issue of how to grade
evidence when you have a gut feeling that the statistics used are dodgy,
but you can't prove this because the stats are too complex to check in
the time available.
Steve Simon made the point that we have to trust the experts sometimes.
I made the point that you could bolster traditional critical appraisal
methods by checking that the results are plausible, and suggested
eyeballing as one way of checking face validity.
To solve Brian's problem, I suggest that we summarize our critical
appraisal of research evidence with a level plus a degree of scepticism.
This is analogous to reporting a measure of central tendency plus
confidence interval for research outcome measures.
The degree of scepticism could be graphically indicated with ! marks.
The more ! marks, the higher the eyebrows are raised when you see the
evidence level.
So,
! might be appropriate for a study using multiple imputation;
!! might be appropriate for a study using hierarchical Bayesian
meta-analysis;
!!! might be appropriate for a study using enriched enrolment.
Thread 2. Eyeballing as a way to check plausibility of results
I claimed that you can fairly accurately estimate the direction of
effect, the confidence interval and heterogeneity of the aggregate data
by eyeballing a Forest plot. This led to suggestions that we could check
this with some quick research, and the realization that to do this
properly would need someone to spend some time planning and managing it.
I have a few suggestions and comments.
(i) This would be an excellent research project for a student. It would
give them experience of doing research; introduce them to Forest plots;
bring home the importance of checking the plausibility of results; and
some journal (I am sure) would find it publishable as an interesting
contribution to the small body of empirical research on EBM and critical
appraisal.
(ii) I had forgotten that some interesting work has already been done on
eyeballing graphical results. See Cumming, G. (2007). Inference by eye:
Pictures of confidence intervals and thinking about levels of
confidence. Teaching Statistics, 29, 89-93. Paper available at
http://www.latrobe.edu.au/psy/cumming/docs/IBI%20Teaching%20Stats.pdf
Geoff Cumming has an Excel spreadsheet to go with the paper, which
allows you to explore interactively the effect of changing the
variables:
http://www.latrobe.edu.au/psy/esci/
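Incidentally, the quantities I claim can be eyeballed from a Forest plot
can also be computed directly, which would give any eyeballing-accuracy
study a gold standard to compare against. Here is a minimal sketch of a
fixed-effect inverse-variance pooled estimate with Cochran's Q and I^2
as the heterogeneity measure (the function name and the example numbers
are my own illustrative choices, not taken from any of the papers
mentioned above):

```python
import math

def pool_fixed_effect(effects, std_errors):
    """Fixed-effect (inverse-variance) meta-analysis summary.

    effects     -- per-study effect estimates (e.g. log odds ratios)
    std_errors  -- their standard errors
    Returns (pooled estimate, 95% CI, Cochran's Q, I^2 as a percentage).
    """
    # Each study is weighted by the inverse of its variance.
    weights = [1.0 / se ** 2 for se in std_errors]
    total_w = sum(weights)
    pooled = sum(w * e for w, e in zip(weights, effects)) / total_w

    # Standard error of the pooled estimate, and a 95% Wald CI.
    se_pooled = math.sqrt(1.0 / total_w)
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

    # Cochran's Q: weighted squared deviations from the pooled estimate.
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    # I^2: proportion of variation beyond chance, floored at zero.
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return pooled, ci, q, i2

# Three hypothetical studies with identical precision:
pooled, ci, q, i2 = pool_fixed_effect([0.2, 0.3, 0.25], [0.1, 0.1, 0.1])
```

With those made-up inputs the pooled log effect is simply the mean
(0.25), and Q is below its degrees of freedom, so I^2 comes out at 0% -
exactly the "tight, consistent cloud" one would hope to recognize by
eye on the plot.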
Thread 3. Value of research information
How would you value the information provided by a proposed research
project?
I suggested that research into the accuracy of eyeballing would be
worthwhile and gave some subjective opinions to support this. Are there
better ways of valuing the information that proposed research could
provide? I have found some papers on this issue, but their titles
generally promise more than the papers themselves deliver.
My 3pence for the day!
Michael Power
Clinical Knowledge Summaries Service
www.cks.library.nhs.uk
PS If you want to respond to any of these threads, please put the
relevant thread title in the subject of your email message. Just to keep
the EBH table of contents tidy!!! Thanks.