Can anyone help explain why certain information important to successful
drug therapy is not incorporated into medical practice? I am thinking in
particular of the apparent non-acceptance of some forms of
pharmacogenetic-based prescribing.
For example, it has been known for many years that individual variations
in genes (polymorphisms) that encode and determine the activity of Phase
I (e.g. CYP2D6) and Phase II (e.g. N-acetyltransferase) drug-metabolising
enzymes are a major source of inter-individual variability in systemic
exposure to a given drug dose. However, in contrast to other areas of clinical practice
(such as testing to select patients for Herceptin therapy in breast
cancer treatment), pharmacogenetic tests to identify gene variants in
patients that influence the activity of cytochrome P-450 drug
metabolising enzymes have not met with widespread acceptance, despite
the demonstrated benefits and cost-effectiveness of such a strategy.
This inconsistency in prescribing behaviour, despite evidence for the
rational basis of such testing, has recently been highlighted by the FDA
in connection with the predicted upsurge in pharmacogenetics-based
medicines (see e.g. LJ Lesko and J Woodcock, The Pharmacogenomics J.
(2002) 2, 20-24). There may well be several issues here that serve to
influence the acceptance or otherwise of such tests, and I would be
extremely grateful for any ideas from list members as to why validated
pharmacogenetics-based tests are routinely used in some areas of medical
practice (e.g. oncology) whereas they are not in other areas (e.g.
general practice).
Graham Lewis
--
Dr Graham Lewis
Science and Technology Studies Unit (SATSU)
University of York, York YO10 5DD UK
Tel: +44 (0)1904 433055 Fax: +44 (0)1904 433043
email: [log in to unmask] www.york.ac.uk/org/satsu/