
Would this also be a complication in comparative effectiveness research?

Best
Amy

On 2/1/15, 7:09 AM, "k.hopayian" <[log in to unmask]> wrote:

>Agreed: the notion that Sn and Sp are not contingent on the
>population rests on the premise that they were calculated in
>samples that included all populations proportionately. This ideal is
>pretty hard to achieve.
>
>They are still useful, depending on why you are doing the tests. For
>example, they would be useful when comparing two screening programmes.
>However, if deciding whether to investigate a symptom, such as whether to
>perform an abdominal ultrasound for the isolated symptom of bloating in
>primary care, you would want to know the PPV and NPV in primary care.
>
>Kev Hopayian
>
>> On 1 Feb 2015, at 11:22, Stephen Senn <[log in to unmask]> wrote:
>> 
>> There is a minority but extremely interesting view that holds that
>>sensitivity and specificity are misleading parameters. Much of the
>>conventional discussion of this in terms of prior probability assumes
>>that they are permanent features of the test, unaffected by prevalence,
>>so that all one has to do is combine them with
>>prevalence to calculate PPV and NPV. Suppose that this were not the case:
>>given PPV and NPV, you could then calculate sensitivity and specificity
>>as a function of prevalence.
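Senn's inversion can be sketched numerically. The snippet below is a minimal illustration of my own (the function names are mine, not from the cited papers): it applies Bayes' theorem in both directions, deriving PPV/NPV from sensitivity, specificity and prevalence, and then recovering sensitivity and specificity from PPV, NPV and prevalence, showing the two parameterisations are interchangeable once prevalence is fixed.

```python
def predictive_values(sens, spec, prev):
    """Forward direction: (Sn, Sp, prevalence) -> (PPV, NPV)."""
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

def sens_spec(ppv, npv, prev):
    """Reverse direction: (PPV, NPV, prevalence) -> (Sn, Sp).
    q is the fraction of the population testing positive, solved from
    prev = q*PPV + (1 - q)*(1 - NPV)."""
    q = (prev + npv - 1) / (ppv + npv - 1)
    sens = q * ppv / prev
    spec = (1 - q) * npv / (1 - prev)
    return sens, spec

# Round trip: start from Sn = 0.90, Sp = 0.80 at 10% prevalence...
ppv, npv = predictive_values(0.90, 0.80, 0.10)
# ...and recover the original Sn and Sp from the predictive values.
sens, spec = sens_spec(ppv, npv, 0.10)
print(round(sens, 6), round(spec, 6))  # 0.9 0.8
```

The round trip makes Senn's point concrete: neither pair is more "fundamental" than the other, since each determines the other once the prevalence is known.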
>> 
>> See the following papers
>> 
>> 1. Dawid AP. Properties of diagnostic data distributions. Biometrics
>>1976;32:647-658.
>> 2. Guggenmoos-Holzmann I, van Houwelingen HC. The (in)validity of
>>sensitivity and specificity. Statistics in Medicine 2000;19:1783-1792.
>> 3. Miettinen OS, Caro JJ. Foundations of medical diagnosis: what
>>actually are the parameters involved in Bayes' theorem? Statistics in
>>Medicine 1994;13:201-209.
>> 
>> 
>> 
>> Stephen Senn
>> Competence Center for Methodology and Statistics
>> Luxembourg Institute of Health
>> 1a Rue Thomas Edison
>> L-1445 Luxembourg
>> Tel:(+352) 26970 894
>> Email [log in to unmask]
>> Follow me on twitter @stephensenn
>> 
>> 
>> 
>> 
>> -----Original Message-----
>> From: Evidence based health (EBH)
>>[mailto:[log in to unmask]] On Behalf Of
>>EVIDENCE-BASED-HEALTH automatic digest system
>> Sent: 01 February 2015 01:00
>> To: [log in to unmask]
>> Subject: EVIDENCE-BASED-HEALTH Digest - 30 Jan 2015 to 31 Jan 2015
>>(#2015-25)
>> 
>> There are 7 messages totaling 2931 lines in this issue.
>> 
>> Topics of the day:
>> 
>>  1. Genetic tests and Predictive validity (7)
>> 
>> ----------------------------------------------------------------------
>> 
>> Date:    Sat, 31 Jan 2015 02:08:24 +0000
>> From:    "Mayer, Dan" <[log in to unmask]>
>> Subject: Re: Genetic tests and Predictive validity
>> 
>> Hi Teresa,
>> 
>> 
>> 
>> You are absolutely correct.  This is why we should demand that
>>diagnostic studies ONLY present the results of Sensitivity, Specificity
>>and Likelihood ratios.
>> 
>> 
>> 
>> This issue has been a serious problem for many years and it is about
>>time that more people spoke up about it.  Also, journal editors and peer
>>reviewers should be up in arms against the practice of reporting PPV and
>>NPV.
>> 
>> 
>> 
>> Best wishes
>> 
>> Dan
>> 
>> ________________________________
>> From: Evidence based health (EBH)
>>[[log in to unmask]] on behalf of Benson, Teresa
>>[[log in to unmask]]
>> Sent: Friday, January 30, 2015 11:59 AM
>> To: [log in to unmask]
>> Subject: Genetic tests and Predictive validity
>> 
>> I've just started reading the literature on genetic tests, and noticing
>>how many of them tend to focus on predictive value; that is, if a certain
>>test accurately predicts whether a patient will or won't get a
>>particular phenotype (condition), the authors suggest the test should be
>>used.  But if we're deciding whether to order the test in the first
>>place, shouldn't we be focused on sensitivity and specificity instead,
>>not PPV and NPV?  Predictive value is so heavily dependent on disease
>>prevalence.  For example, if I want to get tested for a disease with a
>>2% prevalence in people like me, I could just flip a coin and regardless
>>of the outcome, my "Coin Flip Test" would show an NPV of 98%!  So what
>>does NPV alone really tell me, if I'm not also factoring out
>>prevalence, which would be easier done by simply looking at sensitivity
>>and specificity?  Someone please tell me where my thinking has gone awry!
>> For a concrete example, look at MammaPrint, a test which reports binary
>>results.  In addition to hazard ratios, study authors often tout
>>statistically significant differences between the probabilities of
>>recurrence-free survival in the MammaPrint-High Risk vs. MammaPrint-Low
>>Risk groups (essentially the test's predictive values).  In the RASTER
>>study (N = 427), 97% of the patients with a "Low Risk" test result did
>>not experience metastasis in the next 5 years.  Sounds great, right?
>>But when you look at sensitivity, you see that of the 33 patients in the
>>study who did experience metastasis, only 23 were classified as
>>"High Risk" by MammaPrint, for a 70% sensitivity.  If patients and
>>clinicians are looking for a test to inform their decision about
>>adjuvant chemotherapy for early stage breast cancer, wouldn't the fact
>>that the test missed 10 out of 33 cases be more important than the 97%
>>NPV, an artifact of the low 5-year prevalence of metastasis in
>>this cohort (only 33 out of 427, or 7.7%)?
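Both of Teresa's calculations can be checked directly from a 2x2 table. The sketch below is my own illustration (the coin-flip counts assume a population of 10,000 purely for round numbers): it reproduces the 98% NPV of an uninformative coin flip at 2% prevalence, and the 70% sensitivity implied by the RASTER counts.

```python
def from_counts(tp, fn, fp, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Coin-flip "test" at 2% prevalence in 10,000 people:
# 200 diseased and 9,800 healthy, each group split 50/50 by the coin.
coin = from_counts(tp=100, fn=100, fp=4900, tn=4900)
print(coin["npv"])  # 0.98, despite the test carrying no information

# RASTER: 33 metastases, of which 23 were flagged High Risk (10 missed).
print(round(23 / 33, 2))  # 0.7 sensitivity
```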
>> Drukker et al. A prospective evaluation of a breast cancer prognosis
>>signature in the observational RASTER study. Int J Cancer 2013.
>>133(4):929-36. http://www.ncbi.nlm.nih.gov/pubmed/23371464
>> Retel et al. Prospective cost-effectiveness analysis of genomic
>>profiling in breast cancer. Eur J Cancer 2013. 49:3773-9.
>>http://www.ncbi.nlm.nih.gov/pubmed/23992641  (Provides actual true/false
>>positive/negative results)
>> 
>> Thanks so much!
>> 
>> Teresa Benson, MA, LP
>> Clinical Lead, Evidence-Based Medicine
>> McKesson Health Solutions
>> 18211 Yorkshire Ave
>> Prior Lake, MN  55372
>> [log in to unmask]<mailto:[log in to unmask]>
>> Phone: 1-952-226-4033
>> 
>> Confidentiality Notice: This e-mail message, including any attachments,
>>is for the sole use of the intended recipient(s) and may contain
>>confidential and privileged information. Any unauthorized review, use,
>>disclosure or distribution is prohibited. If you are not the intended
>>recipient, please contact the sender by reply e-mail and destroy all
>>copies of the original message.
>> 
>> 
>> 
>> 
>> -----------------------------------------
>> CONFIDENTIALITY NOTICE: This email and any attachments may contain
>>confidential information that is protected by law and is for the sole
>>use of the individuals or entities to which it is addressed. If you are
>>not the intended recipient, please notify the sender by replying to this
>>email and destroying all copies of the communication and attachments.
>>Further use, disclosure, copying, distribution of, or reliance upon the
>>contents of this email and attachments is strictly prohibited. To
>>contact Albany Medical Center, or for a copy of our privacy practices,
>>please visit us on the Internet at www.amc.edu.
>> 
>> 
>> ------------------------------
>> 
>> Date:    Sat, 31 Jan 2015 06:28:59 -0000
>> From:    Michael Power <[log in to unmask]>
>> Subject: Re: Genetic tests and Predictive validity
>> 
>> Hi Dan and Teresa
>> 
>> 
>> 
>> Clinicians and patients need PPVs and NPVs.
>> 
>> 
>> 
>> So, rather than banning them, why not show graphs of PPV/NPV against
>>the range of clinically relevant prevalences?
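Michael's suggestion is easy to prototype. The sketch below is a hypothetical illustration of mine, not anything from the thread: it tabulates PPV and NPV across a range of prevalences for fixed Sn/Sp (an assumption made purely for display; later posts rightly question whether Sn/Sp are truly constant). The Sn of 0.70 echoes the MammaPrint example; the Sp of 0.90 is an arbitrary placeholder.

```python
def ppv_npv(sens, spec, prev):
    # Bayes' theorem for both predictive values.
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

# Tabulate across clinically relevant prevalences.
print("prev    PPV     NPV")
for prev in (0.01, 0.02, 0.05, 0.10, 0.20, 0.50):
    ppv, npv = ppv_npv(0.70, 0.90, prev)
    print(f"{prev:4.0%}  {ppv:6.1%}  {npv:6.1%}")
```

A reader can then locate their own setting's prevalence in the table (or on the equivalent graph) rather than trusting a single PPV/NPV pair from a study population.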
>> 
>> 
>> 
>> Michael
>> 
>> 
>> 
>> 
>> ------------------------------
>> 
>> Date:    Sat, 31 Jan 2015 09:35:31 -0200
>> From:    Moacyr Roberto Cuce Nobre <[log in to unmask]>
>> Subject: Re: Genetic tests and Predictive validity
>> 
>> 
>> I disagree, Dan: if the NPV is 100%, the sensitivity is also 100%;
>>mutatis mutandis, if the PPV is 100%, the specificity is 100%.
>> 
>> 
>> I think the problem is presenting the results as overall accuracy, since
>>that measure interests those involved with the test procedure. What
>>matters to the clinician is a positive or negative test result.
>> 
>> 
>> Cheers 
>> 
>> --
>> Moacyr 
>> 
>> _______________________________________
>> Moacyr Roberto Cuce Nobre, MD, MS, PhD.
>> Diretor da Unidade de Epidemiologia Clínica Instituto do Coração
>>(InCor) Hospital das Clínicas Faculdade de Medicina da Universidade de
>>São Paulo
>> 55 11 2661 5941 (fone/fax)
>> 55 11 9133 1009 (celular)
>> 
>> 
>> 
>> 
>> ------------------------------
>> 
>> Date:    Sat, 31 Jan 2015 11:39:12 +0000
>> From:    "Huw Llewelyn [hul2]" <[log in to unmask]>
>> Subject: Re: Genetic tests and Predictive validity
>> 
>> Hi Theresa, Dan, Michael, Moacyr and everyone. It is not only a medical
>>finding's PPV (positive predictive value) that may change in another
>>population with a different prevalence of target disease.  The
>>specificity will also change with the prevalence.  For example, the
>>specificity may be lower if another population contains diseases that
>>can also 'cause' the finding, and the specificity may be higher (and much
>>higher) if the population contains more people without the finding or
>>the target disease.  The sensitivity may also change if the severity of
>>the target disease is different in a different population.  It is also
>>possible for a test result to have a likelihood ratio of 'one' and still
>>be very powerful when used in combination with another finding to make a
>>prediction.
>> So it is a fallacy to assume that sensitivity and specificity are
>>constant in different populations; they vary as much as PPVs.  In any
>>case, provided one knows the prevalence, sensitivity and PPV for finding
>>and diagnosis in a population, it is a simple matter to calculate the
>>specificity, the NPV, etc.  However, these should not be applied to
>>another population without checking that they are the same.  In
>>addition, experienced doctors usually interpret intuitively the actual
>>value of an individual patient's observation, e.g. a BP of 196/132, and do
>>not divide it into 'negative' and 'positive' ranges.  Under these
>>circumstances, there is no such thing as 'specificity'.  I explain this
>>in detail in the final chapter of the 3rd edition of the Oxford Handbook
>>of Clinical Diagnosis (see also abstract #13 in
>>http://preventingoverdiagnosis.net/documents/POD-Abstracts.docx).
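Huw's point that specificity shifts with case mix can be shown with a toy calculation (the numbers below are entirely hypothetical, my own). In population B the non-target-disease pool contains more people with an alternative condition that also produces the finding, so the same finding shows a lower specificity there.

```python
def specificity(groups):
    """groups: list of (n_people, p_finding) tuples making up the pool
    WITHOUT the target disease.
    Specificity = P(finding absent | no target disease)."""
    total = sum(n for n, _ in groups)
    with_finding = sum(n * p for n, p in groups)
    return 1 - with_finding / total

# Population A: 1,000 healthy (finding rare) plus 50 with an alternative
# condition that produces the finding in 60% of cases.
pop_a = [(1000, 0.02), (50, 0.60)]
# Population B: same groups, but the alternative condition is 8x as common.
pop_b = [(1000, 0.02), (400, 0.60)]

print(round(specificity(pop_a), 3))  # 0.952
print(round(specificity(pop_b), 3))  # 0.814
```

Nothing about the finding itself changed between A and B; only the composition of the non-diseased pool did, which is exactly the spectrum effect Huw describes.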
>> Huw
>> 
>> 
>> 
>> 
>> ------------------------------
>> 
>> Date:    Sat, 31 Jan 2015 12:31:29 +0000
>> From:    Brian Alper MD <[log in to unmask]>
>> Subject: Re: Genetic tests and Predictive validity
>> 
>> Reporting diagnostic and prognostic parameters to help clinicians and
>>policy makers understand the use of tests is challenging.  There are
>>many possible parameters to report, and understanding varies, so no
>>one measure is uniformly understood by all.
>> 
>> The most sophisticated may want likelihood ratios and understand how to
>>apply them with respect to pretest likelihood.  But many want simpler
>>yes/no information.  For the simpler approach, sensitivity and
>>specificity are too often confused with rule-in and rule-out, and too
>>often mistaken for the likelihood of an answer in practice
>>(which does not account for the pretest likelihood at all).
>> 
>> For these reasons, positive and negative predictive values appear to be
>>the simplest yet clinically meaningful results to report.  However, they
>>are only accurate if the pretest likelihood in practice is similar to
>>that in the source from which they are reported.  It is possible to
>>report PPV and NPV for different pretest likelihoods or different
>>representative populations.
>> 
>> In determining whether a test is useful, the PPV and NPV in isolation
>>are insufficient.  Only if the PPV or NPV is different enough from the
>>pretest likelihood to cross a decision threshold (treatment threshold or
>>testing threshold) does the test change action.  In Teresa's coin-flip
>>example, with a pretest likelihood of 2%, a PPV of 2% and an NPV of 98%,
>>the post-test results are no different from the pretest
>>likelihood.
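Brian's threshold logic can be sketched as a small decision helper (my own hedged illustration; the threshold value is an arbitrary placeholder). It updates the pretest probability through the likelihood ratio, via odds, and reports whether the result crosses a treatment threshold; an uninformative test (LR = 1, like the coin flip) never does.

```python
def post_test_prob(pretest, lr):
    """Update a pretest probability through a likelihood ratio via odds."""
    odds = pretest / (1 - pretest)
    post_odds = odds * lr
    return post_odds / (1 + post_odds)

def changes_action(pretest, lr, treat_threshold):
    """True if the post-test probability crosses the treatment threshold."""
    post = post_test_prob(pretest, lr)
    return (pretest < treat_threshold) != (post < treat_threshold)

# Coin flip: Sn = Sp = 0.5, so LR+ = 0.5 / (1 - 0.5) = 1.0.
print(round(post_test_prob(0.02, 1.0), 6))   # 0.02: unchanged
print(changes_action(0.02, 1.0, 0.10))       # False: never alters management
# An informative test (LR+ = 10) at the same 2% pretest:
print(round(post_test_prob(0.02, 10.0), 3))  # 0.169
```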
>> 
>> Other parameters (accuracy, diagnostic odds ratio) that combine
>>sensitivity and specificity into a single measure are easier to pool
>>and report statistically, but I generally avoid them because
>>they are less informative for clinical use: in most clinical situations
>>sensitivity and specificity are not equally important, and a useful
>>one-sided test (e.g. useful if positive, ignored if negative) becomes
>>less clear when viewed as overall accuracy.
>> 
>> 
>> But another consideration is that sometimes tests are used for
>>"diagnostic" purposes (does the patient have a certain diagnosis or
>>not?), and in these cases sensitivity, specificity, PPV*, NPV*, positive
>>likelihood ratio, and negative likelihood ratio (* with prevalence to put
>>them into perspective) are clear.
>> 
>> Sometimes tests are used for prognostic purposes, and may be used as a
>>continuous measure with different predictions at different test results.
>>  In these cases presenting the expected prevalence (or probability or
>>likelihood) at different test results may be easier to interpret than
>>selecting a single cutoff and converting it to
>>sensitivity/specificity/etc.
>> 
>> 
>> In general all of the comments above reflect diagnostic and prognostic
>>testing, whether genetic or not.  If the test is genetic the results can
>>usually be handled similarly if the clinical question is specific to the
>>person being tested.   There may be some scenarios where familial
>>implications (cross-generation interpretation) or combinatorial
>>implications (multigene testing) make the overall reporting more
>>challenging.
>> 
>> Brian S. Alper, MD, MSPH, FAAFP
>> Founder of DynaMed
>> Vice President of EBM Research and Development, Quality & Standards
>>dynamed.ebscohost.com
>> 
>> 
>> 
>> ------------------------------
>> 
>> Date:    Sat, 31 Jan 2015 14:59:57 +0000
>> From:    "Huw Llewelyn [hul2]" <[log in to unmask]>
>> Subject: Re: Genetic tests and Predictive validity
>> 
>> Hi Moacyr, Theresa, Dan, Michael, Brian and everyone. I agree that prior 
>>probabilities are important.  However, these should not be 
>>non-evidence-based subjective guesses but probabilities (i.e. positive 
>>predictive values) of diagnoses in a traditional differential diagnostic 
>>list.  This list and the probabilities are suggested by a presenting 
>>complaint or another finding (e.g. a chest x ray appearance) with a 
>>shorter list of possibilities.  One of these diagnoses is chosen, and 
>>other findings are looked for that occur commonly in that diagnosis (i.e. 
>>its sensitivity is high) and rarely or never in at least one other 
>>diagnosis in the list (i.e. its sensitivity there is low or zero).  If the 
>>finding were present, this would make the probability of the chosen 
>>diagnosis higher and the probability of the other diagnosis lower or 
>>zero.  Other findings are sought to try to make all but one of the 
>>differential diagnoses improbable.
>> In this reasoning process, which experienced doctors use when 
>>explaining their reasoning, we use ratios of sensitivities, NOT the 
>>ratio of sensitivity to false positive rate (i.e. one minus the 
>>specificity).  Differential diagnostic reasoning uses a statistical 
>>dependence assumption which safely underestimates diagnostic 
>>probabilities whereas using likelihood ratios uses a statistical 
>>independence assumption that overestimates probabilities (and thus may 
>>over-diagnose).  I explain this reasoning process in the Oxford Handbook 
>>of Clinical Diagnosis (see page 6 and 622 with the mathematical proof on 
>>page 638) in 'look inside' on the Amazon web-site: 
>>http://www.amazon.co.uk/Handbook-Clinical-Diagnosis-Medical-Handbooks/dp/
>>019967986X#reader_019967986X.
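A minimal sketch of this kind of update over an exhaustive, mutually exclusive differential list, where only the priors and the per-diagnosis sensitivities of the finding matter (the numbers are invented for illustration and are not taken from the Handbook):

```python
# Bayes over an exhaustive, mutually exclusive differential list:
# only priors and per-diagnosis sensitivities of the finding enter,
# so the update turns on *ratios of sensitivities*, not 1 - specificity.
priors = {"D1": 0.6, "D2": 0.3, "D3": 0.1}    # P(diagnosis), invented
sens   = {"D1": 0.9, "D2": 0.05, "D3": 0.2}   # P(finding | diagnosis), invented

joint = {d: priors[d] * sens[d] for d in priors}
total = sum(joint.values())
posterior = {d: joint[d] / total for d in joint}

# D1, in which the finding is common, dominates after the update:
print({d: round(p, 3) for d, p in posterior.items()})
# -> {'D1': 0.939, 'D2': 0.026, 'D3': 0.035}
```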
>> Best wishes
>> Huw
>> 
>> 
>> 
>> ________________________________
>> From: Moacyr Roberto Cuce Nobre [[log in to unmask]]
>> Sent: 31 January 2015 12:15
>> To: Huw Llewelyn [hul2]
>> Subject: Re: Genetic tests and Predictive validity
>> 
>> Hi Huw, don't you think that what should be reported to the clinician 
>>is the patient's pretest probability, which better reflects individual 
>>uncertainty, rather than prevalence as a population statistic?
>> 
>> --
>> Moacyr
>> 
>> _______________________________________
>> Moacyr Roberto Cuce Nobre, MD, MS, PhD.
>> Director of the Clinical Epidemiology Unit, Instituto do Coração 
>>(InCor), Hospital das Clínicas, Faculdade de Medicina da Universidade de 
>>São Paulo
>> 55 11 2661 5941 (phone/fax)
>> 55 11 9133 1009 (mobile)
>> 
>> ________________________________
>> From: "Huw Llewelyn [hul2]" <[log in to unmask]>
>> To: [log in to unmask]
>> Sent: Saturday, 31 January 2015 9:39:12
>> Subject: Re: Genetic tests and Predictive validity
>> 
>> Hi Theresa, Dan, Michael, Moacyr and everyone. It is not only a medical 
>>finding's PPV (positive predictive value) that may change in another 
>>population with a different prevalence of target disease.  The 
>>specificity will also change with the prevalence.  For example, the 
>>specificity may be lower if another population contains diseases that 
>>can also 'cause' the finding, and the specificity may be higher (and much 
>>higher) if the population contains more people without the finding or 
>>the target disease.  The sensitivity may also change if the severity of 
>>the target disease is different in a different population.  It is also 
>>possible for a test result to have a likelihood ratio of 'one' and still 
>>be very powerful when used in combination with another finding to make a 
>>prediction.
>> So it is a fallacy to assume that sensitivity and specificity are 
>>constant in different populations; they vary as much as PPVs.  In any 
>>case, provided one knows the prevalence, sensitivity and PPV for finding 
>>and diagnosis in a population, it is a simple matter to calculate the 
>>specificity, the NPV, etc.  However, these should not be applied to 
>>another population without checking that they are the same.  In 
>>addition, experienced doctors usually interpret intuitively the actual 
>>value of an individual patient's observation, e.g. a BP of 196/132, and do 
>>not divide it into 'negative' and 'positive' ranges.  Under these 
>>circumstances, there is no such thing as 'specificity'.  I explain this 
>>in detail in the final chapter of the 3rd edition of Oxford Handbook of 
>>Clinical Diagnosis (see also abstract #13 in 
>>http://preventingoverdiagnosis.net/documents/POD-Abstracts.docx).
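The point about specificity shifting with case mix can be illustrated with a toy calculation (the false-positive rates below are invented for illustration):

```python
# Specificity is an average over everyone *without* the target disease,
# so it shifts with the composition of that group.  Invented numbers:
# healthy people yield 2% false positives; people with another disease
# that also 'causes' the finding yield 40%.
def specificity(frac_other_disease, fp_healthy=0.02, fp_other=0.40):
    """Pooled specificity given the fraction of the non-diseased group
    who have the other, finding-causing disease."""
    return ((1 - frac_other_disease) * (1 - fp_healthy)
            + frac_other_disease * (1 - fp_other))

print(f"primary care (5% other disease):     {specificity(0.05):.3f}")  # 0.961
print(f"referral clinic (40% other disease): {specificity(0.40):.3f}")  # 0.828
```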
>> Huw
>> 
>> 
>> 
>> ________________________________
>> From: Evidence based health (EBH) 
>>[[log in to unmask]] on behalf of Michael Power 
>>[[log in to unmask]]
>> Sent: 31 January 2015 06:28
>> To: [log in to unmask]
>> Subject: Re: Genetic tests and Predictive validity
>> 
>> Hi Dan and Teresa
>> 
>> Clinicians and patients need PPVs and NPVs.
>> 
>> So, rather than banning them, why not show graphs of PPV/NPV against 
>>the range of clinically relevant prevalences?
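Such a graph or table is easy to generate; a sketch for an illustrative test (the sensitivity and specificity of 0.90 are invented):

```python
# PPV and NPV across a range of prevalences, for an illustrative test
# with sensitivity 0.90 and specificity 0.90 (invented numbers).
sens, spec = 0.90, 0.90

for prev in (0.01, 0.05, 0.10, 0.25, 0.50):
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    print(f"prevalence {prev:5.0%}: PPV = {ppv:5.1%}, NPV = {npv:5.1%}")
```

At 1% prevalence the PPV of this quite good test is only about 8%, which is exactly the kind of dependence a prevalence-axis graph would make visible.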
>> 
>> Michael
>> 
>> From: Evidence based health (EBH) 
>>[mailto:[log in to unmask]] On Behalf Of Mayer, Dan
>> Sent: 31 January 2015 02:08
>> To: [log in to unmask]
>> Subject: Re: Genetic tests and Predictive validity
>> 
>> 
>> Hi Teresa,
>> 
>> 
>> 
>> You are absolutely correct.  This is why we should demand that 
>>diagnostic studies ONLY present the results of Sensitivity, Specificity 
>>and Likelihood ratios.
>> 
>> 
>> 
>> This issue has been a serious problem for many years and it is about 
>>time that more people spoke up about it.  Also, journal editors and peer 
>>reviewers should be up in arms against the practice of reporting PPV and 
>>NPV.
>> 
>> 
>> 
>> Best wishes
>> 
>> Dan
>> 
>> ________________________________
>> From: Evidence based health (EBH) 
>>[[log in to unmask]] on behalf of Benson, Teresa 
>>[[log in to unmask]]
>> Sent: Friday, January 30, 2015 11:59 AM
>> To: [log in to unmask]<mailto:EVIDENCE-BASED-HEALTH@JISCMAIL.AC.UK>
>> Subject: Genetic tests and Predictive validity
>> 
>> I've just started reading the literature on genetic tests, and noticing 
>>how many of them tend to focus on predictive value; that is, if a certain 
>>test accurately predicts whether a patient will or won't get a particular 
>>phenotype (condition), the authors suggest the test should be used.  But 
>>if we're deciding whether to order the test in the first place, shouldn't 
>>we be focused on sensitivity and specificity instead, not PPV and NPV?  
>>Predictive value is so heavily dependent on disease prevalence.  For 
>>example, if I want to get tested for a disease with a 2% prevalence in 
>>people like me, I could just flip a coin and regardless of the outcome, 
>>my "Coin Flip Test" would show an NPV of 98%!  So what does NPV alone 
>>really tell me, if I'm not also factoring out prevalence, which would be 
>>more easily done by simply looking at sensitivity and specificity?  
>>Someone please tell me where my thinking has gone awry!
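The coin-flip example checks out: for a completely uninformative test (sensitivity = specificity = 50%), NPV is simply one minus prevalence. A sketch:

```python
# An uninformative "coin flip" test: sensitivity = specificity = 0.5.
# Its NPV is just 1 - prevalence, so at 2% prevalence the NPV is 98%.
def predictive_values(sens, spec, prev):
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

ppv, npv = predictive_values(0.5, 0.5, 0.02)
print(f"coin flip at 2% prevalence: PPV = {ppv:.0%}, NPV = {npv:.0%}")
# -> PPV = 2%, NPV = 98%: identical to the pretest probabilities.
```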
>> For a concrete example, look at MammaPrint, a test which reports binary 
>>results.  In addition to hazard ratios, study authors often tout 
>>statistically significant differences between the probabilities of 
>>recurrence-free survival in the MammaPrint-High Risk vs. MammaPrint-Low 
>>Risk groups (essentially the test's predictive values).  In the RASTER 
>>study (N = 427), 97% of the patients with a "Low Risk" test result did 
>>not experience metastasis in the next 5 years.  Sounds great, right?  
>>But when you look at sensitivity, you see that of the 33 patients in the 
>>study who did experience metastasis, only 23 were classified as 
>>"High Risk" by MammaPrint, for a 70% sensitivity.  If patients and 
>>clinicians are looking for a test to inform their decision about 
>>adjuvant chemotherapy for early-stage breast cancer, wouldn't the fact 
>>that the test missed 10 out of 33 cases be more important than the 97% 
>>NPV, an artifact of the low 5-year prevalence of metastasis in 
>>this cohort (33 out of 427, or 7.7%)?
>> Drukker et al. A prospective evaluation of a breast cancer prognosis 
>>signature in the observational RASTER study. Int J Cancer 2013. 
>>133(4):929-36. http://www.ncbi.nlm.nih.gov/pubmed/23371464
>> Retel et al. Prospective cost-effectiveness analysis of genomic 
>>profiling in breast cancer. Eur J Cancer 2013. 49:3773-9. 
>>http://www.ncbi.nlm.nih.gov/pubmed/23992641  (Provides actual true/false 
>>positive/negative results)
>> 
>> Thanks so much!
>> 
>> Teresa Benson, MA, LP
>> Clinical Lead, Evidence-Based Medicine
>> McKesson Health Solutions
>> 18211 Yorkshire Ave
>> Prior Lake, MN  55372
>> [log in to unmask]<mailto:[log in to unmask]>
>> Phone: 1-952-226-4033
>> 
>> 
>> 
>> ________________________________
>> ----------------------------------------- CONFIDENTIALITY NOTICE: This 
>>email and any attachments may contain confidential information that is 
>>protected by law and is for the sole use of the individuals or entities 
>>to which it is addressed. If you are not the intended recipient, please 
>>notify the sender by replying to this email and destroying all copies of 
>>the communication and attachments. Further use, disclosure, copying, 
>>distribution of, or reliance upon the contents of this email and 
>>attachments is strictly prohibited. To contact Albany Medical Center, or 
>>for a copy of our privacy practices, please visit us on the Internet at 
>>www.amc.edu<http://www.amc.edu>.
>> 
>> ------------------------------
>> 
>> Date:    Sat, 31 Jan 2015 16:45:44 +0000
>> From:    "McCormack, James" <[log in to unmask]>
>> Subject: Re: Genetic tests and Predictive validity
>> 
>> Hi Everyone - here is a useful rough rule of thumb for likelihood 
>>ratios that a clinician can use.  It puts numbers to disease 
>>probability, so you can see when an LR can basically rule in or rule out 
>>a disease.
>> 
>> The only caveat is that this is ONLY accurate if the pretest probability 
>>lies between 10% and 90%, so it doesn't apply to low-prevalence (<10%) 
>>conditions and may not be all that applicable to this discussion.
>> 
>> An LR of 2 increases the absolute probability of disease FROM your 
>>pre-test probability by 15%; for every additional one-point increase in 
>>LR you add another 5% absolute change to your probability.  So an LR of 4 
>>increases your probability of disease by 15% plus 2 x 5%, a total 
>>increase of 25% over your pre-test probability.
>> 
>> Going in the other direction
>> 
>> An LR of 0.5 decreases the absolute probability of disease FROM your 
>>pre-test probability by 15%; for every additional 0.1 lowering in LR 
>>you add another 5% absolute change to your probability.  So an LR of 0.3 
>>decreases probability by 15% plus 2 x 5%, a total decrease of 25% from 
>>your pre-test probability.
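The rule of thumb can be compared against the exact odds form of Bayes' theorem; a sketch (the pretest probability of 40% is chosen arbitrarily within the 10-90% window the rule assumes):

```python
# Exact post-test probability via odds, compared with the rule of thumb.
def posttest(pretest, lr):
    odds = pretest / (1 - pretest)   # probability -> odds
    post_odds = odds * lr            # Bayes' theorem in odds form
    return post_odds / (1 + post_odds)

pre = 0.40  # arbitrary pretest probability inside the 10-90% window
for lr, rule_change in [(2, 0.15), (4, 0.25), (0.5, -0.15), (0.3, -0.25)]:
    exact = posttest(pre, lr)
    print(f"LR {lr}: exact change {exact - pre:+.0%}, "
          f"rule of thumb {rule_change:+.0%}")
```

The exact and rule-of-thumb changes differ by several percentage points at most, which is the "rough" in "rough rule of thumb".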
>> 
>> The original idea for this came from this article 
>>http://onlinelibrary.wiley.com/doi/10.1046/j.1525-1497.2002.10750.x/full
>> 
>> and we published a discussion on probabilistic reasoning in primary 
>>care here http://goo.gl/Rnl2q2
>> 
>> Thanks
>> 
>> James
>> 
>> 
>> 
>> On Jan 31, 2015, at 4:31 AM, Brian Alper MD 
>><[log in to unmask]<mailto:[log in to unmask]>> wrote:
>> 
>> Reporting diagnostic and prognostic parameters to help clinicians and 
>>policy makers understand the use of tests is challenging.  There are 
>>many possible parameters to report, and understanding varies, so no 
>>single measure is uniformly understood by all.
>> 
>> The most sophisticated may want likelihood ratios and understand how to 
>>apply them with respect to pretest likelihood.  But many want simpler 
>>yes/no information.  With the simpler approach, sensitivity and 
>>specificity are too often mistaken for rule-in and rule-out measures, and 
>>too often taken to represent the likelihood of an answer in practice 
>>(which does not account for the pretest likelihood at all).
>> 
>> For these reasons positive and negative predictive values appear to be 
>>the simplest yet clinically meaningful results to report.  However they 
>>are only accurate if the pretest likelihood is similar in practice to 
>>the source where they are being reported from.  It is possible to report 
>>PPV and NPV for different pretest likelihoods or different 
>>representative populations.
>> 
>> In determining whether a test is useful, the PPV and NPV in isolation 
>>are insufficient.  Only if PPV or NPV is different enough from the 
>>pretest likelihood to cross a decision threshold (treatment threshold or 
>>testing threshold) does the test change action.  In the example below 
>>with a pretest likelihood of 2% and a coin flip resulting in PPV 2% and 
>>NPV 98% the post-test results are no different than the pretest 
>>likelihood.
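That threshold logic can be sketched directly: a test changes action only if the post-test probability lands in a different decision zone than the pretest probability (the threshold values below are invented for illustration):

```python
# Does a test result change management?  Only if the post-test probability
# crosses a decision threshold the pretest probability had not crossed.
# Threshold values are invented for illustration.
def changes_action(pretest, posttest, test_threshold=0.05, treat_threshold=0.60):
    def zone(p):
        if p < test_threshold:
            return "no action"
        if p > treat_threshold:
            return "treat"
        return "test further"
    return zone(pretest) != zone(posttest)

# Coin flip at 2% prevalence: PPV 2%, NPV 98%, so the probability of
# disease stays at 2% either way and no threshold is crossed.
print(changes_action(0.02, 0.02))  # False
print(changes_action(0.02, 0.98))  # a genuinely informative positive: True
```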
>> 
>> Other parameters (accuracy, diagnostic odds ratio) that combine 
>>sensitivity and specificity into a single measure are easier to combine 
>>and report statistically, but I generally avoid these parameters because 
>>they are less informative for clinical use; in most clinical situations 
>>sensitivity and specificity are not equally important, and a useful 
>>one-sided test (e.g. useful if positive, ignored if negative) becomes 
>>less clear when viewed in terms of overall accuracy.
>> 
>> 
>> But another consideration is that sometimes tests are used for 
>>"diagnostic" purposes (does the patient have or not have a certain 
>>diagnosis?), and in these cases sensitivity, specificity, PPV*, NPV*, 
>>positive likelihood ratio, and negative likelihood ratio (* with 
>>prevalence to put them into perspective) are clear.
>> 
>> Sometimes tests are used for prognostic purposes, and may be used as a 
>>continuous measure with different predictions at different test results. 
>>  In these cases presenting the expected prevalence (or probability or 
>>likelihood) at different test results may be easier to interpret than 
>>selecting a single cutoff and converting it to 
>>sensitivity/specificity/etc.
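Presenting observed event rates at different levels of a continuous test result, rather than collapsing everything to a single cutoff, might look like this (the data are invented):

```python
# Event probability at different levels of a continuous test result,
# instead of a single sensitivity/specificity cutoff.  Invented data.
from collections import defaultdict

results = [(0.3, 0), (0.6, 0), (1.2, 0), (1.8, 1), (2.4, 1),
           (0.4, 0), (1.1, 1), (2.1, 1), (0.9, 0), (1.6, 0)]  # (value, event)

bins = defaultdict(lambda: [0, 0])  # bin label -> [events, total]
for value, event in results:
    b = "low (<1)" if value < 1 else "mid (1-2)" if value < 2 else "high (>=2)"
    bins[b][0] += event
    bins[b][1] += 1

for label, (events, total) in bins.items():
    print(f"{label}: {events}/{total} events ({events / total:.0%})")
```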
>> 
>> 
>> In general all of the comments above reflect diagnostic and prognostic 
>>testing, whether genetic or not.  If the test is genetic the results can 
>>usually be handled similarly if the clinical question is specific to the 
>>person being tested.   There may be some scenarios where familial 
>>implications (cross-generation interpretation) or combinatorial 
>>implications (multigene testing) make the overall reporting more 
>>challenging.
>> 
>> Brian S. Alper, MD, MSPH, FAAFP
>> Founder of DynaMed
>> Vice President of EBM Research and Development, Quality & Standards 
>>dynamed.ebscohost.com<http://dynamed.ebscohost.com/>
>> 
>> From: Evidence based health (EBH) 
>>[mailto:[log in to unmask]] On Behalf Of Benson, Teresa
>> Sent: Friday, January 30, 2015 12:00 PM
>> To: [log in to unmask]<mailto:EVIDENCE-BASED-HEALTH@JISCMAIL.AC.UK>
>> Subject: Genetic tests and Predictive validity
>> 
>> 
>> ------------------------------
>> 
>> End of EVIDENCE-BASED-HEALTH Digest - 30 Jan 2015 to 31 Jan 2015 
>>(#2015-25)
>> 
>>***************************************************************************