Thanks, yes, agree entirely.
Shared decision making is definitely a young science, and we are all exploring what the optimal frameworks are (note the plural) and then how to blend those optimally with the high-level consultation skills required, fine-tuned of course to the individual in THAT consultation.
If you're finding this difficult then you're in very good company. My view is that if anyone regards any approach in this area as the solution, they are deluding themselves. This is inherently complex, and whilst more research is ongoing - members of this group are, I know, involved in some of that - even when we have better data on what works better (note: not best), approaches in clinical practice will still require the art of individualisation.
As for the levels of evidence, the EBM definition of the "best available evidence" is the best we can do. If that subsequently changes, we change the depiction of risks and benefits in shared decision making. That's called learning, and as Marshall Marinker memorably said, "That's what we do for a living". Clearly one can't just pick a bar chart or a Cates plot off the shelf and have no idea about the quality of the evidence underpinning the visual one uses.
However, let's not get sucked too far into concerns about the accuracy or validity of the point estimate - important though they are. We all have a cognitive bias that pushes us towards reducing uncertainty to zero, and that's what makes us think, for example, "I'm not so sure about these shared decision aids; they aren't based on good enough evidence to be able to show this to patients". That's in no way a criticism of your important question.
But actually, from the patient's perspective, showing them the stochastic uncertainty - "I've got no idea whether you'll be one of the nine people who'll have an event even with this extra treatment, whether you'll be one of the 81 who would be fine anyway without the extra treatment, or whether you'll be one of the 2 people out of a hundred who get benefit from the extra treatment, what do you think?" - that's the biggest "step on" in terms of consultation skills in a generation. The fact that in the future those numbers respectively might be shown to be 8 or 6, or 79 or 83 (or even 0 or 4) with more research doesn't mean that in that consultation TODAY you're not using the "best possible evidence".
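The arithmetic behind that kind of natural-frequency conversation can be sketched in a few lines. This is only an illustration: the event rates below are hypothetical (not taken from any trial or from the figures above), and `natural_frequencies` is an invented helper, not a standard library function.

```python
# Hedged sketch: converting two hypothetical trial event rates into the
# "out of 100 people" groups used in a Cates-style conversation.

def natural_frequencies(control_event_rate, treatment_event_rate, n=100):
    """Split n patients into three groups: event despite treatment,
    fine anyway without treatment, and helped by treatment."""
    events_despite_treatment = round(treatment_event_rate * n)
    helped_by_treatment = round((control_event_rate - treatment_event_rate) * n)
    fine_without_treatment = n - events_despite_treatment - helped_by_treatment
    return events_despite_treatment, fine_without_treatment, helped_by_treatment

# Illustrative rates only: 11% events without the extra treatment,
# 9% with it.
bad, fine, helped = natural_frequencies(0.11, 0.09)
print(f"Out of 100 people: {bad} have an event despite treatment, "
      f"{fine} would be fine anyway, and {helped} are helped by treatment.")
```

The point of the sketch is simply that the three groups a patient hears about are a direct rearrangement of the two event rates - nothing more sophisticated than that.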
There are people like David Spiegelhalter working on ways of representing the uncertainty around the point estimate - whether that can be done without complicating the maths to the point where the patient's understanding of the risk:benefit balance falls back to the zero it would be without any decision aid remains to be seen!
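As one concrete illustration of what "uncertainty around the point estimate" can look like, here is a minimal sketch using a standard Wilson score interval for a binomial proportion. The counts are illustrative only (a hypothetical "2 helped in 100"), and this is one textbook method among several, not a recommendation of any particular decision-aid design.

```python
# Hedged sketch: a 95% Wilson score interval around a proportion,
# e.g. a point estimate of 2 people in 100 benefiting.
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

lo, hi = wilson_interval(2, 100)
print(f"Point estimate 2/100; 95% interval roughly {lo:.1%} to {hi:.1%}")
```

For 2 in 100 this gives an interval of very roughly 0.6% to 7% - which makes the communication problem vivid: the honest interval is wide, and showing it risks exactly the extra complexity described above.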
UK research shows at least two thirds of patients want some involvement in decision making. It's wrong not to try. And we can improve as we get more data from research. For me the perfect remains the enemy of the good.
What's fantastic is that we have a growing interest and some really innovative approaches to reinvigorating the practical application of evidence in real-world clinical settings.
Best wishes
Neal
Neal Maskrey
NPC
-----Original Message-----
From: Evidence based health (EBH) [mailto:[log in to unmask]] On Behalf Of Amit Singh
Sent: 23 March 2011 08:00
To: [log in to unmask]
Subject: Re: Clinical Decision Making and Diagnostic Error
Dear Neal and Ash,
Thanks for providing links on this extremely important issue. As a GP registrar I am often struck by the forces that influence a patient's concerns and expectations. This leads to the inevitable introspection about the reasons for my own preferences and recommendations. Thinking about thinking, or meta-cognition, is a fascinating science, and Dr Groopman's book "How Doctors Think" was probably my first introduction to this topic. Prof David Spiegelhalter's "Understanding Uncertainty" (http://understandinguncertainty.org/) is another rich source on the science of communicating risks and uncertainty.
One concern I have is that a lot of advice is based on inadequate trials and sometimes outright deception. Tamiflu for swine flu, presented in one of the videos, is an example. Discussions with smiley/sad faces do not reflect the uncertainty about the quality of the data on which they are based. For many of the conditions that I commonly see in primary care, the level of evidence is generally low, and I do not know how to factor this into my own decision making.
Richard Feynman in his Cargo Cult Science (http://calteches.library.caltech.edu/3043/1/CargoCult.pdf) called for "a specific, extra type of integrity that is not lying, but bending over backwards to show how you're maybe wrong...this is our responsibility as scientists, certainly to other scientists, and I think to laymen."
It would make me feel irresponsible if I did not somehow factor this type of uncertainty into my decision making.
Am I stuck in some hopeless circular argument? What are your thoughts?
Dr Amit Singh
GP Registrar
Bangor Scheme
North Wales
Twitter: @amitns