Badri wrote:
It is interesting that only 3 references could be classified as level I
evidence (a meta-analysis, an evidence-based synthesis, and a diagnostic
study), while 40% (6/15) were reviews. Though this attempt to classify the
references cited in the authors' commentary by level of evidence was crude,
it is apparent that narrative reviews are still the main source of
evidence in this example. One possible explanation is the specific (narrow)
nature of this clinical problem; it remains to be seen whether a more common
clinical problem would yield better quality evidence.
-------------
I think Badri raises a pertinent and interesting issue for all clinicians
interested in the search for best evidence. The question remains: how do we
transfer the results of a particular study to our own specific clinical
scenarios? Hence there is an understandable bias towards basing our opinions
on reviews and summaries. A similar problem was reported in a recent study
investigating the quality of care provided by HMOs in the United States.
There too, the authors did not find a single cited study that met their
criterion of being an RCT (though 3 of all the references cited in the
original articles qualified). This raises a further question for Badri:
did he also check the references cited in the studies that the authors
cited, the references from those, and so on?
One thing about the letter I did not understand concerned the year of
publication. Badri, what did you mean by the "mean" and "standard
deviation" of the year of publication?
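(If what was meant is simply treating the publication years of the cited references as a set of numbers and summarizing them, a minimal sketch would be the following; the years here are made up purely for illustration, not taken from the letter.)

```python
# Hypothetical example: summarizing publication years of cited references.
# The years below are invented for illustration only.
import statistics

years = [1992, 1995, 1997, 1998, 1999]
mean_year = statistics.mean(years)
sd_year = statistics.stdev(years)  # sample standard deviation
print(f"mean = {mean_year:.1f}, SD = {sd_year:.2f}")
# prints: mean = 1996.2, SD = 2.77
```

A recent mean year with a small SD would suggest the cited evidence is current; a large SD would suggest reliance on older literature as well.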
:)
Arin