Sorry, I know that these days most people don’t have time to read full articles and to go beyond Twitter exchanges, but the key point we made in our Epistemology of EBM version 1.0 paper is that EBM enthusiastically draws on all major traditions of philosophical theories of inference and scientific evidence. (Hopefully, people can at least have a look at the Tables/Boxes in the paper to see how different theoretical approaches define different solutions to the scientific problems at hand.)

To me, this is a key question that Version 2 of the EBM epistemology paper should tackle.

ben

 

 

From: Benjamin Djulbegovic MD
Sent: Thursday, June 11, 2020 8:07 AM
To: 'Ashley Kennedy' <[log in to unmask]>; Mohammed T. Ansari <[log in to unmask]>; 'Jeremy Howick' <[log in to unmask]>
Cc: Anoop B <[log in to unmask]>; [log in to unmask]
Subject: RE: Good Science Is Good Science For the sake of both science and action in the COVID-19 pandemic, we need collaboration among specialists, not sects.

 

Just when I was to compose my answer to  Ashley, Jeremy’s e-mail arrived.

So, in the interest of efficiency, I copied Jeremy’s e-mail below.

 

Absolutely, both Ashley’s and Jeremy’s points are well taken and very important. Again, it is important to remember that we concluded very clearly that EBM should not be construed as a new epistemological theory of (medical) knowledge, but rather seen as a set of practical tools for improving the practice of medicine.

 

And, while we stated the importance of theories:

“EBM endorses scientific inquiry based on a solid rationale for undertaking an investigation and, in this sense, acknowledges the importance of an a priori hypothesis underlying the conduct of clinical research.”

 

In the end, we endorsed the central role of evidence:
“Even if we accept that our observations may be theory-laden, it is incontestable that evidence sometimes overwhelms prior theory and “speaks for itself”….

As a result, scientific evidence (at least in some circumstances) has been able to secure objectivity, ie, intersubjective agreement among inquirers who may have held the opposite views”

 

One of the key epistemological principles I believe in is that science is incomplete and subject to continuous change. Every idea, finding, concept, etc., should be revisited every five years or so.

 

So, I cannot agree more that it is time for Version 2 of the EBM epistemology paper.

 

Can you two take the lead?

 

Thanks for the important discussion and insights

 

ben

 

---

Hi Ben,

 

I think this is an important point. As you know, I like and cite your paper often. However, it is unclear to me whether it describes what you think the epistemology of EBM should be or what it actually is. Also, it is important to distinguish between a theory of knowledge (EBM certainly is that, uncontroversially) and the role of theories in confirming hypotheses (which EBM explicitly rejects).

 

The fact is that EBM ‘made its name’ by disregarding theory when confirming hypotheses. Here are the celebrated cases we all grew up with:

So, if EBM disregards theory, then it must look at the RCTs for homeopathy without appealing to the (admittedly difficult-to-swallow) mechanistic theories. Now, I have actually done research on homeopathy, and in my view the evidence is not strong enough to support an effect even without appeal to theory. However, if we applied the appraisal standards I used to reject the RCT evidence for homeopathy to non-homeopathic treatments, much of conventional medicine would be considered ineffective.

 

Time for Version 2 of your epistemology paper to flesh these issues out?

 

Jeremy

 

Director, Oxford Empathy Programme

Senior Researcher and Impact Fellow, Oxford Faculty of Philosophy

Personal website: www.jeremyhowick.com

 

 

From: Ashley Kennedy [mailto:[log in to unmask]]
Sent: Thursday, June 11, 2020 7:33 AM
To: Mohammed T. Ansari <[log in to unmask]>; Benjamin Djulbegovic MD <[log in to unmask]>
Cc: Anoop B <[log in to unmask]>; [log in to unmask]
Subject: Re: Good Science Is Good Science For the sake of both science and action in the COVID-19 pandemic, we need collaboration among specialists, not sects.

 

Hi Ben,

 

I don't think that theories are irrelevant; however, I had thought (perhaps I am mistaken) that according to EBM, theories are mostly irrelevant when it comes to determining whether or not a given treatment or intervention is effective.

 

My assumption was that the reason for this was to prevent what Mohammed points out (bias).

 

I don't think that there is any such thing as "pure" or theory-free observation. Another way of saying that is that the results of all experiments/observations (not just those in medicine) must be given an interpretation.

 

But it seems problematic if we can throw out RCTs just because they don't fit with our expectations. Otherwise, how could we ever learn anything new?

 

Ashley

 

 

-----------------------------

Ashley Graham Kennedy, PhD

Associate Professor of Philosophy

Honors College of Florida Atlantic University

ashleygrahamkennedy.weebly.com


From: Mohammed T. Ansari <[log in to unmask]>
Sent: Thursday, June 11, 2020 10:01 AM
To: Benjamin Djulbegovic MD <[log in to unmask]>
Cc: Ashley Kennedy <[log in to unmask]>; Anoop B <[log in to unmask]>; [log in to unmask] <[log in to unmask]>
Subject: Re: Good Science Is Good Science For the sake of both science and action in the COVID-19 pandemic, we need collaboration among specialists, not sects.

 

 

 


"It seems that both you and Anoop, whom I want to thank for starting this important thread, believe that theories are irrelevant. I wonder what other people think." 

 

Interesting question, Ben, and I am not sure I have found my final answer as yet. My current thinking is that it depends on where in the development of health technologies theories are being used and for what assessment.

 

They have an important role in discovery, hypothesis generation, and testing novel ideas for new technologies.

They also have an important hypothesis-generating role in testing untested outcomes of existing technologies, or in surveilling those outcomes.

But they can lead to expectation bias and to reading too much into empiric data.

Basic bench scientists tend to be much more certain of authors' conclusions than skeptical/critical clinical reviewers of empiric findings; the Intro of the paper has already primed them to accept the emerging empiric-data-based conclusions.

Should we view empiric data behind a veil of ignorance, or with basic science priors? The answer depends, I guess.

 

m

 

 

On Thu, Jun 11, 2020 at 9:39 AM Benjamin Djulbegovic MD <[log in to unmask]> wrote:

Hi Ashley,

At the time when we embarked on writing our epistemology paper, there was a lot of discussion in the field, whether EBM represents a new theory of knowledge. We concluded (see the link below):

 

“Our findings indicate that EBM should not be construed as a new scientific or philosophical theory that changes the nature of medicine or our understanding thereof. Rather, we should consider EBM as a continuously evolving heuristic structure for optimizing clinical practice.”

 

In that sense, as the article says, we need to consider both theory and empirical evidence to tell us whether what we do works or not. And, to answer Mohammed’s question: like evidence, not all theories are created equal. Credible theories are supported by the “web of existing knowledge”, although admittedly what we know today may change in the future.

 

It seems that both you and Anoop, whom I want to thank for starting this important thread, believe that theories are irrelevant. I wonder what other people think.

 

I hope professional philosophers like you can further chime in on the question of the importance of theories vs. observations; this remains one of the most important, yet neglected, areas in EBM. It would be great to bring some more clarity to the issue.

Ben 

Sent from my iPad - excuse typos and brevity

 

On Jun 10, 2020, at 11:27 PM, Ashley Kennedy <[log in to unmask]> wrote:



 

"For example, well done, double blind, placebo-controlled homeopathy RCTs were dismissed because of “extreme skepticism of homeopathic theory”

 

I don't understand this. It seems to me an instance of EBM trying to "have its cake and eat it too." RCTs are meant to tell us that a treatment works (or doesn't), not how a treatment works. So if a well-designed study of a treatment shows that it is effective, then, according to EBM, it is. Theories of how are mostly irrelevant.

 

What am I missing here?

 

Ashley

 

 

 

 

-----------------------------

Ashley Graham Kennedy, PhD

Associate Professor of Philosophy

Honors College of Florida Atlantic University

ashleygrahamkennedy.weebly.com

 


From: Evidence based health (EBH) <[log in to unmask]> on behalf of Mohammed T. Ansari <[log in to unmask]>
Sent: Wednesday, June 10, 2020 9:52 PM
To: [log in to unmask] <[log in to unmask]>
Subject: Re: Good Science Is Good Science For the sake of both science and action in the COVID-19 pandemic, we need collaboration among specialists, not sects.

 

 

 


Anoop, although I am as skeptical of homeopathy as you are, couldn't the quantum entanglement theory of quantum physics (spooky action at a distance) be invoked so that, in theory, the treatment does not violate the laws of physics?

 

m

 

On Wed, Jun 10, 2020 at 9:42 PM Benjamin Djulbegovic MD <[log in to unmask]> wrote:

No, Anoop

In terms of research practice, I would say that theories typically come before observations…but I agree with you that observations have higher veridical value…(of course, in the absolute sense, everything has to start with the “first” observation)

Hypotheses and theories are extraordinarily important; they serve as a framework to organize our ideas and thoughts and to design experiments… in fact, it has been argued that all our observations are theory-laden, which, in turn, validates the accuracy of theoretical predictions, however tentatively…

ben

 

 

From: Anoop B [mailto:[log in to unmask]]
Sent: Wednesday, June 10, 2020 4:54 PM
To: Benjamin Djulbegovic MD <[log in to unmask]>
Cc: [log in to unmask]
Subject: Re: Good Science Is Good Science For the sake of both science and action in the COVID-19 pandemic, we need collaboration among specialists, not sects.

 

Hello Ben,

 

I think what is unique to homeopathy, compared to other alternative treatments, is that it goes against the fundamental laws of physics/chemistry. For a lot of other CAM treatments out there, it could be argued that there is some plausibility that is simply limited by our current knowledge. So examples of treatments like homeopathy are pretty rare. Maybe we should have a grading for basic science evidence too?

 

And what you said is so right: theory comes after observation. In how many studies have we reversed our explanation based on the results!

 

So it is always better to have rigorous empirical evidence than anecdotal observation. This also shows why we need to focus more on assessing the quality of evidence. We tend to put more focus on these things when we disagree with or doubt the evidence. I think in these times it is amazing how critical people have been about research studies. Dr. Sackett would have been so proud :)

 

On Mon, Jun 8, 2020 at 8:51 AM Benjamin Djulbegovic MD <[log in to unmask]> wrote:

Anoop,

While some people indeed tried to dismiss the homeopathy trials based on the quality of their conduct, if the theory were strong, homeopathy would flourish based on the existing data (NB: a number of homeopathy trials are actually well-done, placebo-controlled trials)… it has not penetrated mainstream medical practice largely because of poor theory, a statement with which you appear to agree as well… but certainly, if you have bad theory plus a badly designed/conducted trial, then it’s a no-brainer what we should do. Now, having said all this, I am aware that homeopathy is still being used by some folks. Why? That seems an interesting question to explore.

bd

Sent from my iPad - excuse typos and brevity

 

On Jun 8, 2020, at 12:03 AM, Anoop B <[log in to unmask]> wrote:



Thank you Ben. 

I looked at the reference in your paper about the meta-analysis. The final line of the conclusion was: "When account was taken of these biases in the analysis, there was weak evidence for a specific effect of homoeopathic remedies, but strong evidence for specific effects of conventional interventions. This finding is compatible with the notion that the clinical effects of homoeopathy are placebo effects." So I am not sure the homeopathy trials were high-quality. And I would say I have a bias against homeopathy based on the theory, hence I checked the paper :)

 

I think acupuncture trials are in the same category. But they are still published in JAMA and other high-impact journals!

 

Great point about "Theories do not attempt to accurately describe unobservable reality but rather to predict empirical findings"!
 

 

 

On Sun, Jun 7, 2020 at 9:35 AM Benjamin Djulbegovic MD <[log in to unmask]> wrote:

Very important issue, Anoop

In our now 10+ year old paper on the epistemology of EBM,

 

 

where we also discussed EBM’s relationship between theory and “getting our observations correct”, we pointed out that “Although EBM stresses the importance of reliable observations over theory, this stance is not rigid”. For example, well done, double-blind, placebo-controlled homeopathy RCTs were dismissed because of “extreme skepticism of homeopathic theory”.

It is these exceptions that have challenged further development of the hierarchy of evidence along the vertical integration of basic science and clinical observations. And, as we also stressed, because “Theories do not attempt to accurately describe unobservable reality but rather to predict empirical findings”, achieving vertical coherence may only be possible by updating/revising our basic science knowledge based on “passing severe test” in terms of obtaining reliable clinical observations.

Ben

Sent from my iPad - excuse typos and brevity

 

On Jun 7, 2020, at 8:35 AM, Anoop B <[log in to unmask]> wrote:






Trisha and Lipsitch have talked about looking at the totality of evidence, that is, basic science, RCTs, and observational studies, on the assumption that this will be better than the EBM hierarchical approach. This is a very important question that seriously challenges the foundation of EBM, but we haven't discussed it at all.

 

My opinion is that it sounds like a very "sensible" approach, but it lacks concrete examples that can be applied systematically. My greatest concern is that you are basically mixing low- and high-quality evidence and somehow assuming they will lead to a better decision. One example I can give is observational studies and basic science pointing in one direction for antioxidants and cancer, but RCTs pointing in the other. Or maybe there is something to it.

 

On Sun, Jun 7, 2020 at 5:19 AM Michael Power <[log in to unmask]> wrote:

Thanks Rod for pointing us to these thought provoking articles.

 

Marc Lipsitch has it exactly right when he says “good science is good science”. But his article is not explicit about why Jonathan Fuller’s thinking is wrong.

 

Lipsitch says that “good science is good science” and leaves us to infer that the science (hypotheses, study design, data, models, and decisions/conclusions) should be critically appraised.

 

In contrast, Fuller creates three boxes to shoehorn people into: clinical epidemiology, public health epidemiology, and evidence-based medicine. These boxes are perfectly designed to act as targets for his straw man arguments.

 

This is a rhetorical technique also employed by Trisha Greenhalgh, who is cited for her attacks on the kind of EBM she objects to, but without recognizing that her definition of EBM is a mischaracterization of reality.

 

Fuller cites the “Hill criteria” in his first article. 

 

Someone must have told him that Sir Austin Bradford Hill did not call them “criteria”, because in the second article he calls them “viewpoints”.  

 

(Exercise: apply the Hill “criteria” to my inference of causality.)

 

What Fuller does not seem to comprehend is that Bradford Hill was providing guidance on how to critically appraise claims of causality: a checklist of things to consider, not a set of criteria to be applied unthinkingly by an algorithm. The checklist might be showing its age now, but the attitude of mind remains essential to good science.

 

Critical appraisal is the very heart of good science (and evidence based medicine). Lipsitch gives his own set of rules of thumb for doing good science but he does not label it as a critical appraisal tool.

 

Fuller discusses the first step in the critical appraisal of a model: sensitivity analysis to quantify the effect that changes in the model’s parameters have on its results.
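To make that first step concrete, here is my own minimal sketch (not taken from any of the cited articles) of a one-at-a-time parameter sensitivity analysis for a toy SIR model; every parameter value and range below is hypothetical and chosen only for illustration:

```python
# Minimal sketch of one-at-a-time parameter sensitivity analysis for a toy SIR model.
# All parameter values and ranges are hypothetical, for illustration only.

def sir_peak_infected(beta, gamma, n=1_000_000, i0=10, days=365, dt=0.1):
    """Integrate a basic SIR model with forward-Euler steps; return the peak number infected."""
    s, i, r = n - i0, i0, 0
    peak = i
    for _ in range(int(days / dt)):
        new_inf = beta * s * i / n * dt   # new infections in this time step
        new_rec = gamma * i * dt          # new recoveries in this time step
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return peak

baseline = {"beta": 0.30, "gamma": 0.10}  # hypothetical transmission and recovery rates
print(f"baseline peak infected: {sir_peak_infected(**baseline):,.0f}")

# Vary each parameter +/-20% while holding the other at baseline, and watch how the output moves.
for name in baseline:
    for factor in (0.8, 1.2):
        params = dict(baseline, **{name: baseline[name] * factor})
        print(f"{name} x{factor}: peak infected = {sir_peak_infected(**params):,.0f}")
```

The point of the exercise is simply that parameters whose small perturbations swing the output widely deserve the closest scrutiny during appraisal.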

 

He does not discuss the second, more important and more challenging step in the critical appraisal of a model: the effect its structure has on its outputs and consequent decision-making. 

 

John Ioannidis, without explicitly saying that he was critically appraising model structures, did just this. 

 

For example, he points out that a model of an epidemic that treats the population as homogeneous will not identify vulnerable groups (such as those living and working in care homes) as needing special protection. 
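As a toy illustration of that structural point (my own, not Ioannidis’s, and with invented figures), compare what a homogeneous model and a stratified model report for the same population:

```python
# Toy illustration of how model structure changes the picture: a homogeneous model
# reports one blended average risk, while a stratified model exposes the high-risk subgroup.
# All figures below are invented for illustration.

population = 1_000_000
attack_rate = 0.20                      # assumed fraction of each group infected

groups = {                              # hypothetical strata and infection-fatality rates
    "care-home residents": {"share": 0.01, "ifr": 0.15},
    "general population":  {"share": 0.99, "ifr": 0.003},
}

# Homogeneous model: one blended infection-fatality rate for everyone.
blended_ifr = sum(g["share"] * g["ifr"] for g in groups.values())
print(f"homogeneous model: average IFR = {blended_ifr:.4f}, "
      f"expected deaths = {population * attack_rate * blended_ifr:,.0f}")

# Stratified model: the same overall total, but the burden on the small group is now visible.
for name, g in groups.items():
    deaths = population * g["share"] * attack_rate * g["ifr"]
    print(f"  {name}: expected deaths = {deaths:,.0f}")
```

Both versions produce the same total, but only the stratified structure makes the concentration of risk in the small group visible to decision-makers.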

 

Ioannidis also makes the point that, if the output of a model is incorporated into decision-making that fails to consider collateral effects of interventions, the outcome is likely to be suboptimal.

 

For me, the take home messages from these articles are:

 

(1) the challenge we face is to ensure that science is good science, and we should do this by incorporating critical appraisal at all stages from hypothesis generation through evidence collection to decision making.

 

(2) strawman arguments are uncritical appraisal.

 

Michael 

 

Sent from my iPad

 

On 7 Jun 2020, at 01:28, Rod Jackson <[log in to unmask]> wrote:

Hi all. If you have the time, read the fantastic recent essay written by Marc Lipsitch, an infectious disease epidemiologist, who was responding to an essay about the differences between a public health epidemiological and a clinical epidemiological approach to Covid-19, written by the philosopher of medicine Jonathan Fuller. There is also a response by John Ioannidis, which demonstrates his more limited and, I believe, overly sceptical views on integrating evidence. These essays are free to view in the Boston Review. The Lipsitch essay ( https://bostonreview.net/science-nature/marc-lipsitch-good-science-good-science) stands alone if you don't have time to read Fuller’s essay. There are actually 4 essays in the series: Fuller, Lipsitch, Ioannidis, and Fuller again. Well worth reading all 4.

Cheers Rod

 

* * * * * * * *

sent from my phone

 

 

