Ahmed,
I am in no way advocating cutting corners. I understand why complexity is unavoidable, and I celebrate this level of detail and transparency in systematic review research. This also goes for RCTs. My question is how we can make information both transparent and manageable for those who are resource poor in terms of funding, expertise, and labor force. Is there a way we can bring the mechanics of being an active researcher within the economic reach of every interested and concerned practitioner? For me it is about managing the knowledge entrusted to us so it is used and replicated. I also agree that guidelines are best if evidence based rather than consensus based.

If the voices of everyday practitioners and users of the technology are not heard, then is that not even more important evidence that is buried?

Amy


Amy Price
Empower 2 Go 
Building Brain Potential
Http://empower2go.org
Sent from my iPad

On 26 Oct 2012, at 01:38 PM, "Ahmed Abou-Setta, M.D." <[log in to unmask]> wrote:

> In my mind, the beautiful thing about evidence-based anything is that it
> should be both evidence-based (with facts that can be checked by others) and
> transparent (the way the facts were gathered can be reproduced by others in
> the same way the original authors produced them). There are many reasons why
> the 'true' systematic review process has become complicated, and highly
> unpopular with the masses. Mainly, the process is built on the evidence that
> bias and heterogeneity in interventions, comparators, populations, etc.
> regularly occur between experiments, and that even clear cases of fraud and
> data manipulation have been seen in the literature. At the end of the day, you
> have to use the best evidence you can find, whether you are preparing a
> systematic review, a clinical guideline, or even giving advice to a patient.
> If we start to believe that cutting corners is the same as doing the best
> job that we can, then we have a really big problem. We should cut corners
> out of necessity, not because we believe it will produce the same results.
> And even then we should explicitly state which corners we cut, not just
> sweep them under the rug.
> 
> Let's compare this with a clinical trial. Dr. X decided to do a randomized
> trial to see which pain management strategy works best for patients with
> rheumatoid arthritis. After getting a team of experts together, he realizes
> that there is no way he can do a proper, adequately powered RCT with the
> budget at hand or recruit the number of patients needed by the power
> calculation. So he decides to do a retrospective chart review... Do we
> honestly believe that the results of the chart review should be considered
> the same as a well-performed RCT? Dr. X had to cut corners, and so will a
> lot of 'systematic reviewers' so that they can finish their review on time
> and on budget. But where is the evidence to support those decisions? Here
> lies the biggest problem of all... there is a large gap of evidence on what
> we should be doing and what we can omit during a systematic review. For
> example, most people believe that study identification and inclusion should
> be done by two independent reviewers, but what is the evidence behind that?
> What if I told you that a randomized trial showed no difference in
> effect whether two individuals worked independently or one checked the
> work of the other? What about blinding reviewers to journal and author
> names? And on and on. Even some of the most respected guidelines on how to
> undertake a systematic review are really just consensus agreements among the
> top clinical epidemiologists. So before we start to sing the praises of
> cutting corners and turning every systematic review into a narrative review
> with the word systematic in the title, we need to truly understand the
> impact of these decisions.
> 
> Ahmed
> 
> 
> -----Original Message-----
> From: Evidence based health (EBH)
> [mailto:[log in to unmask]] On Behalf Of Amy Price
> Sent: Friday, October 26, 2012 9:07 AM
> To: [log in to unmask]
> Subject: Re: Medline based systematic review
> 
> Jacob and Kev,
> 
> I must confess that for me this top-down approach remains quite overwhelming
> and intimidating from a personal labor perspective, especially across the
> multiple topics that would be needed by a general physician. I have stayed
> silent on this until now because I felt perhaps it was my personal concern
> and that I just needed more expertise. This is likely still true, but
> nevertheless Jacob raises important issues. If we want those who practice
> to be active as well as passive contributors, how are we making that a
> reality?
> 
> There is the aspect where we want the best quality, and I understand the need
> for quality. That being said, all the search quality in the world will not
> fix some of the bias problems we face, because they are not initiated by the
> doctor trying to find the best intervention for the patient but are
> triggered by industry data games. This is indirectly evident in multiple
> Cochrane reviews as well, where researchers unknowingly cite research in
> which data reporting has not been transparent. In my field I know this
> because I can spot the players and know the latest work, but in areas where
> I most need clinical expertise quickly this is not the case.
> 
> There is the drawback that if all this is beyond the reach of those with a
> day job, due to time and expertise constraints, there will be few
> contributions from them; the learning will be passive, and their voice, the
> voice of those in the trenches, will be lost. Are there ways we could
> simplify the process and be more inclusive without compromising quality?
> Are those in daily practice and our junior doctors just meant to be passive
> consumers of UpToDate and other commercially prepared solutions? If this is
> the case, how does it differ from the old textbook days, and how will active
> interest in EBM and consumption be sustained and integrity maintained over
> time if the user's role is reduced to that of only a passive consumer without
> input?
> 
> Best,
> Amy
> 
> On 10/26/12 1:09 AM, "Jacob Puliyel" <[log in to unmask]> wrote:
> 
>> Dear Kev
>> You write
>> "The conclusion seems to be that at least most of the time, a less than 
>> comprehensive search will give us the same outcome as a comprehensive 
>> search. Unfortunately, there may be occasions that this is not so.
>> The trouble is, we cannot tell when those occasions will occur."
>> 
>> Please let me suggest that by letting the perfect become the enemy of the
>> good, we may be doing more harm than good to the cause of EBM.
>> 
>> EBM began as a "bottom-up" paradigm that taught residents to ask
>> answerable and focused questions, search the literature in a 
>> transparent and reproducible way to find the best evidence and to 
>> critically appraise it in an explicit and structured manner, often 
>> using mathematical analyses to give a clear idea of the strength, 
>> statistical significance and possible clinical significance of the 
>> results.
>> 
>> Unfortunately it is no longer an amateurs' enterprise.
>> Journals now insist on numerous boxes being ticked before they will 
>> even consider a review article for publication.
>> Multiple databases have to be explored, the references in the papers
>> need to be hand-searched for new references, clinical trials registers
>> and conference proceedings must be scrutinised, and pharmaceutical companies
>> and individual researchers must be contacted for unpublished data
>> and ongoing trials.
>> 
>> Only organisations with very deep pockets can afford this anymore.
>> Vested interests have jumped into the void.
>> Now even Cochrane meta-analyses are written up by persons with
>> direct conflicts of interest declared.
>> 
>> Als-Nielsen and colleagues have shown that association with for-profit
>> organisations had little impact on treatment effect, but the conclusions
>> were more positive due to biased interpretation of trial results
>> (Als-Nielsen et al 2003). Lundh and colleagues have shown that 
>> publication of industry-supported trials was associated with an 
>> increase in journal impact factors and revenue (Lundh et al 2010).
>> Smith (2010), the former editor of the BMJ, has suggested that
>> publishing a single RCT sponsored by a drug company could yield a million
>> dollars in sales of reprints alone.
>> 
>> We can counteract this form of biased interpretation only if we allow less
>> comprehensive searches done by independent researchers to challenge
>> these 'comprehensive' reports.
>> 
>> I must declare that most of this material I researched for a paper on
>> EBM published elsewhere (Evidence Based Medicine: Making It Better).
>> 
>> Jacob Puliyel MD MRCP M Phil
>> Head of Pediatrics
>> St Stephens Hospital
>> Delhi
>>