In message <[log in to unmask]>, Guthrie, Dr Bruce
<[log in to unmask]> writes

>It depends what you mean by an 'adverse event prevented'.
>
>There doesn't appear to be any good evidence that teaching critical appraisal or
>EBM affects patient outcomes, so you couldn't construct an NNT. You might be
>able to construct one for an increase in knowledge or skills, or change in
>attitude to EBM, but are these really what matter? See the systematic review on
>the effect of teaching critical appraisal at http://www.bham.ac.uk/arif/SysRevs/
>TeachCritApp.PDF
>
>I've been a bit surprised that this SR hasn't generated any discussion on the
>list. If 'teaching EBM' was a new drug, what would our EBM-trained assessment
>say about spending lots of time and money promoting it? And if the evidence
>only supports a B (or a C) grade recommendation to teach critical appraisal, how
>come we promote it with all the vigour of an A+? Do we need RCT/strong
>experimental evidence of benefit on patient outcomes? I speak as someone who
>runs an EBM course, and often wonders what difference it makes to patients.

The review exposes the limitations of the methods that have been used to
evaluate EBM teaching. In my view, EBM practice and teaching is a "complex
intervention", and it is therefore difficult, if not impossible, even to
standardise it as an intervention, much less measure valid and reliable
quantitative outcomes.

The review does give encouraging, if necessarily limited, evidence that
teaching EBM produces measurable effects on knowledge and skills. However, as
I'm sure most people involved in EBM teaching would acknowledge, some of the
effects are not so easily quantifiable. Workshops appear to profoundly affect
some participants' attitudes, their self-confidence, and their willingness to
challenge preconceptions and to acknowledge their own ignorance.
Others react negatively and prefer to return to a safer world, where certainty
based upon authority is more comfortable than accepting the probabilistic and
unstable world implied by EBM. Now how do you measure that?

>The systematic review's conclusion is a challenge:
>
>"This review provides reassurance to those who have invested in critical
>appraisal teaching activities that they are likely to have a positive impact.
>The evidence is not, however, sufficient to encourage further expansion of
>critical appraisal activities, due to limitations on its validity and
>significance in practice, and the total absence of results for important
>outcomes.
>
>Further studies should be undertaken in partnership between adult
>educationalists and healthcare researchers to ensure studies are properly
>designed and valid outcomes are used. It is of importance to assess the size of
>benefit of critical appraisal training to postgraduates/CPD, as this is where
>greatest investment is made. Such an evaluation should be large, randomised and
>assess outcomes and changes which are of significance in practice."

There is some truth in this conclusion (and I'm particularly attracted to the
idea of research in collaboration with educationalists), but I'm not sure that
randomised controlled trials could achieve high enough internal validity,
simply because in real life you can't exclude contamination. That is not to
say that you couldn't set up a trial with cluster randomisation of NHS Trusts,
for example, where some had a sustained EBM teaching programme and others
didn't. But I think it would be extremely difficult to define valid and
measurable outcomes - and even if you did, the need for experimental rigour
would necessarily restrict their scope.

I favour sociological and anthropological research on EBM. We are, after all,
talking about cultural change here. How does this style of learning affect the
relationships within clinical teams?
How do EBM-aware clinicians make decisions about the management of their
patients? How do patients respond to this approach? And, of course, how often
does appraised evidence get applied in practice?

A variety of methods could be used - I'm using semi-structured interviews to
explore how research-aware GPs make decisions about anticoagulation in
patients with atrial fibrillation, for example. Focus groups, videoing
consultations, interviews with patients, analysis of educational activities
(what do self-directed groups study? How do they study? How do they relate
their studying to patient care?), and participant observation ("what is the
experience of participants in EBM workshops?" is a project I'd dearly love to
obtain funding for...) are the kinds of methods that I think have huge
potential to demonstrate what the practice of EBM would mean to clinicians and
patients.

Toby
--
Toby Lipman
General practitioner, Newcastle upon Tyne
Northern and Yorkshire research training fellow
Tel 0191-2811060 (home), 0191-2437000 (surgery)
Northern and Yorkshire Evidence-Based Practice Workshops
http://www.eb-practice.fsnet.co.uk/