JiscMail – Email discussion lists for the UK Education and Research communities

EVIDENCE-BASED-HEALTH Archives

EVIDENCE-BASED-HEALTH@JISCMAIL.AC.UK

EVIDENCE-BASED-HEALTH, February 2015

Subject:

Genetic tests and predictive values

From:

Stephen Senn <[log in to unmask]>

Reply-To:

Stephen Senn <[log in to unmask]>

Date:

Sun, 1 Feb 2015 12:22:15 +0100

Content-Type:

text/plain

Parts/Attachments:

text/plain (559 lines)

There is a minority but extremely interesting view that sensitivity and specificity are misleading parameters. Much of the conventional discussion of this in terms of prior probability assumes that they are permanent features of the test, unaffected by prevalence, so that all one has to do is combine them with prevalence to calculate PPV and NPV. Suppose that this were not the case. Given PPV and NPV, you could then calculate sensitivity and specificity as a function of prevalence.

See the following papers:

1. Dawid AP. Properties of diagnostic data distributions. Biometrics 1976; 32: 647-658.
2. Guggenmoos-Holzmann I, van Houwelingen HC. The (in)validity of sensitivity and specificity. Statistics in Medicine 2000; 19: 1783-1792.
3. Miettinen OS, Caro JJ. Foundations of medical diagnosis: what actually are the parameters involved in Bayes' theorem? Statistics in Medicine 1994; 13: 201-209.
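[The inversion Senn describes is straightforward to sketch numerically. The helper functions and the numbers below are illustrative, not from any of the cited papers: the forward direction is Bayes' theorem, and the reverse direction recovers sensitivity and specificity from PPV, NPV and prevalence via P(T+).]

```python
def ppv_npv(sens, spec, prev):
    """Bayes' theorem: predictive values from sensitivity, specificity, prevalence."""
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

def sens_spec(ppv, npv, prev):
    """The reverse direction: if PPV and NPV were the stable quantities,
    sensitivity and specificity would follow from prevalence.
    Uses prev = ppv*P(T+) + (1-npv)*(1-P(T+)) to solve for P(T+)."""
    p_pos = (prev - (1 - npv)) / (ppv - (1 - npv))  # P(test positive)
    sens = ppv * p_pos / prev
    spec = npv * (1 - p_pos) / (1 - prev)
    return sens, spec

# Round trip: start from sens = 0.9, spec = 0.8 at 10% prevalence...
ppv, npv = ppv_npv(0.9, 0.8, 0.10)
# ...and recover them from the predictive values.
s, c = sens_spec(ppv, npv, 0.10)
```

Run at a different prevalence, `sens_spec` returns different values — which is exactly the point: whichever pair you hold fixed, the other pair becomes a function of prevalence.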



Stephen Senn
Competence Center for Methodology and Statistics
Luxembourg Institute of Health
1a Rue Thomas Edison
 L-1445 Luxembourg
Tel:(+352) 26970 894
Email [log in to unmask]
Follow me on twitter @stephensenn




-----Original Message-----
From: Evidence based health (EBH) [mailto:[log in to unmask]] On Behalf Of EVIDENCE-BASED-HEALTH automatic digest system
Sent: 01 February 2015 01:00
To: [log in to unmask]
Subject: EVIDENCE-BASED-HEALTH Digest - 30 Jan 2015 to 31 Jan 2015 (#2015-25)

There are 7 messages totaling 2931 lines in this issue.

Topics of the day:

  1. Genetic tests and Predictive validity (7)

----------------------------------------------------------------------

Date:    Sat, 31 Jan 2015 02:08:24 +0000
From:    "Mayer, Dan" <[log in to unmask]>
Subject: Re: Genetic tests and Predictive validity

Hi Teresa,



You are absolutely correct.  This is why we should demand that diagnostic studies ONLY present the results of Sensitivity, Specificity and Likelihood ratios.



This issue has been a serious problem for many years and it is about time that more people spoke up about it.  Also, journal editors and peer reviewers should be up in arms against the practice of reporting PPV and NPV.



Best wishes

Dan

________________________________
From: Evidence based health (EBH) [[log in to unmask]] on behalf of Benson, Teresa [[log in to unmask]]
Sent: Friday, January 30, 2015 11:59 AM
To: [log in to unmask]
Subject: Genetic tests and Predictive validity

I’ve just started reading the literature on genetic tests, and noticing how many of them tend to focus on predictive value—that is, if a certain test accurately predicts whether a patient will or won’t get a particular phenotype (condition), the authors suggest the test should be used.  But if we’re deciding whether to order the test in the first place, shouldn’t we be focused on sensitivity and specificity instead, not PPV and NPV?  Predictive value is so heavily dependent on disease prevalence.  For example, if I want to get tested for a disease with a 2% prevalence in people like me, I could just flip a coin and regardless of the outcome, my “Coin Flip Test” would show an NPV of 98%!  So what does NPV alone really tell me, if I’m not also factoring out prevalence—which would be easier done by simply looking at sensitivity and specificity?  Someone please tell me where my thinking has gone awry!
For a concrete example, look at MammaPrint, a test which reports binary results.  In addition to hazard ratios, study authors often tout statistically significant differences between the probabilities of recurrence-free survival in the MammaPrint-High Risk vs. MammaPrint-Low Risk groups (essentially the test’s predictive values).  In the RASTER study (N = 427), 97% of the patients with a “Low Risk” test result did not experience metastasis in the next 5 years.  Sounds great, right?  But when you look at Sensitivity, you see that of the 33 patients in the study who did experience metastasis, only 23 of them were classified as “High Risk” by MammaPrint, for a 70% sensitivity.  If patients and clinicians are looking for a test to inform their decision about adjuvant chemotherapy for early stage breast cancer, wouldn’t the fact that the test missed 10 out of 33 cases be more important than the 97% NPV, an artifact of the low 5-year prevalence of metastasis in this cohort (only 33 out of 427, or 7.7%)?
Drukker et al. A prospective evaluation of a breast cancer prognosis signature in the observational RASTER study. Int J Cancer 2013. 133(4):929-36. http://www.ncbi.nlm.nih.gov/pubmed/23371464
Retel et al. Prospective cost-effectiveness analysis of genomic profiling in breast cancer. Eur J Cancer 2013. 49:3773-9. http://www.ncbi.nlm.nih.gov/pubmed/23992641  (Provides actual true/false positive/negative results)
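[Teresa's two calculations can be checked directly. The counts are the ones quoted in her message; the final comparison — what a coin flip's NPV would be at the RASTER event rate — is an added illustration of the same point, and the helper name is ours:]

```python
def npv(sens, spec, prev):
    """Negative predictive value from sensitivity, specificity, prevalence (Bayes)."""
    return spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)

# Coin-flip "test": the result is independent of disease, so sens = spec = 0.5.
# At 2% prevalence the NPV is simply the prevalence complement, 98%.
coin_npv = npv(0.5, 0.5, 0.02)

# RASTER counts quoted above: 23 of 33 metastases flagged "High Risk".
sens_raster = 23 / 33        # ≈ 0.70 (the test missed 10 of 33 cases)
prev_raster = 33 / 427       # ≈ 0.077, i.e. a 7.7% 5-year event rate

# Even a coin flip would post a high NPV at this prevalence:
coin_npv_raster = npv(0.5, 0.5, prev_raster)   # ≈ 0.92
```

For an uninformative test the NPV always equals 1 − prevalence, which is why a headline NPV says little without the prevalence alongside it.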

Thanks so much!

Teresa Benson, MA, LP
Clinical Lead, Evidence-Based Medicine
McKesson Health Solutions
18211 Yorkshire Ave
Prior Lake, MN  55372
[log in to unmask]<mailto:[log in to unmask]>
Phone: 1-952-226-4033

Confidentiality Notice: This e-mail message, including any attachments, is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply e-mail and destroy all copies of the original message.




-----------------------------------------
CONFIDENTIALITY NOTICE: This email and any attachments may contain confidential information that is protected by law and is for the sole use of the individuals or entities to which it is addressed. If you are not the intended recipient, please notify the sender by replying to this email and destroying all copies of the communication and attachments. Further use, disclosure, copying, distribution of, or reliance upon the contents of this email and attachments is strictly prohibited. To contact Albany Medical Center, or for a copy of our privacy practices, please visit us on the Internet at www.amc.edu.


------------------------------

Date:    Sat, 31 Jan 2015 06:28:59 -0000
From:    Michael Power <[log in to unmask]>
Subject: Re: Genetic tests and Predictive validity

Hi Dan and Teresa

Clinicians and patients need PPVs and NPVs.

So, rather than banning them, why not show graphs of PPV/NPV against the range of clinically relevant prevalences?

Michael
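[Michael's suggestion is easy to generate once a sensitivity/specificity pair is fixed. A minimal sketch with assumed illustrative values (sens = 0.90, spec = 0.80) — printed as a table here, though the same loop could feed a plotting library:]

```python
def predictive_values(sens, spec, prev):
    """PPV and NPV at a given prevalence, via Bayes' theorem."""
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

# Tabulate PPV/NPV over a clinically relevant prevalence range.
for prev in (0.01, 0.05, 0.10, 0.25, 0.50):
    ppv, npv = predictive_values(0.90, 0.80, prev)
    print(f"prev {prev:5.0%}  PPV {ppv:5.1%}  NPV {npv:5.1%}")
```

The table makes the dependence visible at a glance: PPV climbs steeply with prevalence while NPV erodes, so a single reported PPV/NPV pair is only meaningful at the study's own prevalence.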

 


------------------------------

Date:    Sat, 31 Jan 2015 09:35:31 -0200
From:    Moacyr Roberto Cuce Nobre <[log in to unmask]>
Subject: Re: Genetic tests and Predictive validity


I disagree, Dan: if the NPV is 100% then the sensitivity is also 100%, and, mutatis mutandis, if the PPV is 100% the specificity is 100%.

I think the problem is how to present the accuracy results, since that measure interests those involved with the test procedure. What matters to the clinician is what a positive or negative test result means.
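[The algebraic point above checks out: NPV = TN/(TN+FN), so NPV = 100% forces FN = 0, which in turn forces sensitivity = TP/(TP+FN) = 100% (and symmetrically for PPV and specificity). A quick check on an arbitrary 2×2 table with no false negatives — the counts are made up:]

```python
def from_counts(tp, fp, fn, tn):
    """All four parameters from a 2x2 table of test results vs. disease status."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return sens, spec, ppv, npv

# Zero false negatives: NPV and sensitivity are both forced to 100%,
# while specificity and PPV remain free to take any value.
sens, spec, ppv, npv = from_counts(tp=30, fp=20, fn=0, tn=50)
```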


Cheers 

--
Moacyr 

_______________________________________
Moacyr Roberto Cuce Nobre, MD, MS, PhD. 
Diretor da Unidade de Epidemiologia Clínica Instituto do Coração (InCor) Hospital das Clínicas Faculdade de Medicina da Universidade de São Paulo
55 11 2661 5941 (fone/fax)
55 11 9133 1009 (celular) 




------------------------------

Date:    Sat, 31 Jan 2015 11:39:12 +0000
From:    "Huw Llewelyn [hul2]" <[log in to unmask]>
Subject: Re: Genetic tests and Predictive validity

Hi Theresa, Dan, Michael, Moacyr and everyone,

It is not only a medical finding’s PPV (positive predictive value) that may change in another population with a different prevalence of the target disease. The specificity will also change with the prevalence. For example, the specificity may be lower if another population contains diseases that can also ’cause’ the finding, and the specificity may be higher (sometimes much higher) if the population contains more people without the finding or the target disease. The sensitivity may also change if the severity of the target disease differs in another population. It is also possible for a test result to have a likelihood ratio of ‘one’ and still be very powerful when used in combination with another finding to make a prediction.

So it is a fallacy to assume that sensitivity and specificity are constant across populations – they vary as much as PPVs. In any case, provided one knows the prevalence, sensitivity and PPV for a finding and diagnosis in a population, it is a simple matter to calculate the specificity, the NPV, etc. However, these should not be applied to another population without checking that they are the same. In addition, experienced doctors usually interpret intuitively the actual value of an individual patient’s observation (e.g. a BP of 196/132) rather than dividing it into ‘negative’ and ‘positive’ ranges. Under these circumstances, there is no such thing as ‘specificity’. I explain this in detail in the final chapter of the 3rd edition of the Oxford Handbook of Clinical Diagnosis (see also abstract #13 in http://preventingoverdiagnosis.net/documents/POD-Abstracts.docx).
Huw




------------------------------

Date:    Sat, 31 Jan 2015 12:31:29 +0000
From:    Brian Alper MD <[log in to unmask]>
Subject: Re: Genetic tests and Predictive validity

Reporting diagnostic and prognostic parameters to help clinicians and policy makers understand the use of tests is challenging. There are many possible parameters to report, and understanding varies enough that no one measure is uniformly understood by all.

The most sophisticated users may want likelihood ratios and understand how to apply them with respect to pretest likelihood. But many want simpler yes/no information. With the simpler approach, sensitivity and specificity are too often mistaken for rule-in and rule-out measures, and too often taken to represent the likelihood of an answer in practice (which does not account for the pretest likelihood at all).

For these reasons, positive and negative predictive values appear to be the simplest yet clinically meaningful results to report. However, they are only accurate if the pretest likelihood in practice is similar to that in the source they are reported from. It is possible to report PPV and NPV for different pretest likelihoods or different representative populations.

In determining whether a test is useful, the PPV and NPV in isolation are insufficient. Only if the PPV or NPV is different enough from the pretest likelihood to cross a decision threshold (treatment threshold or testing threshold) does the test change action. In the coin-flip example, with a pretest likelihood of 2% and a coin flip yielding a PPV of 2% and an NPV of 98%, the post-test results are no different from the pretest likelihood.
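[The threshold logic above is easy to make concrete with the odds form of Bayes' theorem. The coin flip has a likelihood ratio of exactly 1, so it moves nothing; the second test's sensitivity and specificity (0.90/0.80) are assumed numbers for contrast:]

```python
def posttest_prob(pretest, lr):
    """Odds form of Bayes: post-test odds = pre-test odds x likelihood ratio."""
    odds = pretest / (1 - pretest) * lr
    return odds / (1 + odds)

# Coin flip: LR = sens/(1-spec) = 0.5/0.5 = 1, so the probability cannot move.
p_after_heads = posttest_prob(0.02, 1.0)

# An informative test (assumed sens 0.90, spec 0.80): LR+ = 4.5, LR- = 0.125.
p_after_pos = posttest_prob(0.02, 0.90 / (1 - 0.80))   # rises above 2%
p_after_neg = posttest_prob(0.02, (1 - 0.90) / 0.80)   # falls below 2%
```

Whether either shift crosses a treatment or testing threshold is then a separate clinical judgement; the arithmetic only supplies the post-test probability.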

Other parameters (accuracy, diagnostic odds ratio) that combine sensitivity and specificity into a single measure are easier to combine and report statistically, but I generally avoid them because they are less informative for clinical use: in most clinical situations sensitivity and specificity are not equally important, and a useful one-sided test (e.g. useful if positive, ignored if negative) becomes less clear when viewed in terms of overall accuracy.


But another consideration is that tests are sometimes used for "diagnostic" purposes — does the patient have a certain diagnosis or not? — and in these cases sensitivity, specificity, PPV*, NPV*, positive likelihood ratio, and negative likelihood ratio (* with prevalence to put them into perspective) are clear.

Sometimes tests are used for prognostic purposes, and may be used as a continuous measure with different predictions at different test results. In these cases, presenting the expected prevalence (or probability) at different test results may be easier to interpret than selecting a single cutoff and converting it to sensitivity/specificity/etc.


In general, all of the comments above apply to diagnostic and prognostic testing, whether genetic or not. If the test is genetic, the results can usually be handled similarly when the clinical question is specific to the person being tested. There may be some scenarios where familial implications (cross-generation interpretation) or combinatorial implications (multigene testing) make the overall reporting more challenging.

Brian S. Alper, MD, MSPH, FAAFP
Founder of DynaMed
Vice President of EBM Research and Development, Quality & Standards
dynamed.ebscohost.com



------------------------------

Date:    Sat, 31 Jan 2015 14:59:57 +0000
From:    "Huw Llewelyn [hul2]" <[log in to unmask]>
Subject: Re: Genetic tests and Predictive validity

Hi Moacyr, Theresa, Dan, Michael, Brian and everyone,

I agree that prior probabilities are important. However, these should not be non-evidence-based subjective guesses but probabilities (i.e. positive predictive values) of diagnoses in a traditional differential diagnostic list. This list and its probabilities are suggested by a presenting complaint or another finding (e.g. a chest X-ray appearance) with a shorter list of possibilities. One of these diagnoses is chosen, and other findings are looked for that occur commonly in that diagnosis (i.e. the sensitivity is high) and rarely or never in at least one other diagnosis in the list (i.e. the sensitivity there is low or zero). If the finding is present, this makes the probability of the chosen diagnosis higher and the probability of the other diagnosis lower or zero. Other findings are then sought to try to make all but one of the differential diagnoses improbable.

In this reasoning process, which experienced doctors use when explaining their reasoning, we use ratios of sensitivities, NOT the ratio of sensitivity to false positive rate (i.e. one minus the specificity). Differential diagnostic reasoning uses a statistical dependence assumption that safely underestimates diagnostic probabilities, whereas using likelihood ratios involves a statistical independence assumption that overestimates probabilities (and thus may over-diagnose). I explain this reasoning process in the Oxford Handbook of Clinical Diagnosis (see pages 6 and 622, with the mathematical proof on page 638) in ‘look inside’ on the Amazon website: http://www.amazon.co.uk/Handbook-Clinical-Diagnosis-Medical-Handbooks/dp/019967986X#reader_019967986X.

Best wishes

Huw



________________________________
From: Moacyr Roberto Cuce Nobre [[log in to unmask]]
Sent: 31 January 2015 12:15
To: Huw Llewelyn [hul2]
Subject: Re: Genetic tests and Predictive validity

Hi Huw, don't you think that what matters to the clinician is the patient's pretest probability, which better reflects the individual's uncertainty, rather than prevalence as a population datum?

--
Moacyr

_______________________________________
Moacyr Roberto Cuce Nobre, MD, MS, PhD.
Diretor da Unidade de Epidemiologia Clínica Instituto do Coração (InCor) Hospital das Clínicas Faculdade de Medicina da Universidade de São Paulo
55 11 2661 5941 (phone/fax)
55 11 9133 1009 (mobile)

________________________________
From: "Huw Llewelyn [hul2]" <[log in to unmask]>
To: [log in to unmask]
Sent: Saturday, 31 January 2015 9:39:12
Subject: Re: Genetic tests and Predictive validity

Hi Theresa, Dan, Michael, Moacyr and everyone,

It is not only a medical finding’s PPV (positive predictive value) that may change in another population with a different prevalence of the target disease.  The specificity will also change with the prevalence.  For example, the specificity may be lower if another population contains diseases that can also ’cause’ the finding, and the specificity may be higher (even much higher) if the population contains more people without the finding or the target disease.  The sensitivity may also change if the severity of the target disease differs in a different population.  It is also possible for a test result to have a likelihood ratio of ‘one’ and still be very powerful when used in combination with another finding to make a prediction.
So it is a fallacy to assume that sensitivity and specificity are constant across populations – they vary as much as PPVs.  In any case, provided one knows the prevalence, sensitivity and PPV for a finding and diagnosis in a population, it is a simple matter to calculate the specificity, the NPV, etc.  However, these should not be applied to another population without checking that they are the same.  In addition, experienced doctors usually interpret the actual value of an individual patient’s observation intuitively (e.g. a BP of 196/132) and do not divide it into ‘negative’ and ‘positive’ ranges.  Under these circumstances, there is no such thing as ‘specificity’.  I explain this in detail in the final chapter of the 3rd edition of the Oxford Handbook of Clinical Diagnosis (see also abstract #13 in http://preventingoverdiagnosis.net/documents/POD-Abstracts.docx).
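[Editor's note] The back-calculation mentioned above follows directly from the 2x2 table. A small sketch (the example figures are assumed, not taken from any study):

```python
def spec_npv_from(prev, sens, ppv):
    """Recover specificity and NPV given prevalence, sensitivity and PPV.

    All four cells of the 2x2 table are determined by these three numbers,
    expressed here as fractions of the whole population."""
    tp = prev * sens              # true positives
    pos_rate = tp / ppv           # overall test-positive rate, since PPV = TP / (TP + FP)
    fp = pos_rate - tp            # false positives
    spec = 1 - fp / (1 - prev)
    tn = (1 - prev) - fp          # true negatives
    npv = tn / (1 - pos_rate)
    return spec, npv

# Assumed example: prevalence 10%, sensitivity 80%, PPV 50%
spec, npv = spec_npv_from(0.10, 0.80, 0.50)
print(f"specificity = {spec:.3f}, NPV = {npv:.3f}")
```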
Huw



________________________________
From: Evidence based health (EBH) [[log in to unmask]] on behalf of Michael Power [[log in to unmask]]
Sent: 31 January 2015 06:28
To: [log in to unmask]
Subject: Re: Genetic tests and Predictive validity

Hi Dan and Teresa

Clinicians and patients need PPVs and NPVs.

So, rather than banning them, why not show graphs of PPV/NPV against the range of clinically relevant prevalences?
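[Editor's note] Such a graph or table is easy to generate from sensitivity and specificity via Bayes' theorem. A minimal sketch (the test characteristics of 90% sensitivity and 80% specificity are assumed for illustration):

```python
def ppv_npv(sens, spec, prev):
    """Predictive values at a given prevalence, from Bayes' theorem."""
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

# Tabulate across a range of clinically relevant prevalences
for prev in (0.01, 0.05, 0.10, 0.25, 0.50):
    ppv, npv = ppv_npv(0.90, 0.80, prev)
    print(f"prevalence {prev:>5.0%}:  PPV {ppv:.2f}  NPV {npv:.2f}")
```

The table makes the dependence explicit: PPV climbs steeply with prevalence while NPV falls, which is exactly why a single reported PPV/NPV pair transfers poorly between populations.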

Michael

From: Evidence based health (EBH) [mailto:[log in to unmask]] On Behalf Of Mayer, Dan
Sent: 31 January 2015 02:08
To: [log in to unmask]
Subject: Re: Genetic tests and Predictive validity


Hi Teresa,



You are absolutely correct.  This is why we should demand that diagnostic studies ONLY present the results of Sensitivity, Specificity and Likelihood ratios.



This issue has been a serious problem for many years and it is about time that more people spoke up about it.  Also, journal editors and peer reviewers should be up in arms against the practice of reporting PPV and NPV.



Best wishes

Dan

________________________________
From: Evidence based health (EBH) [[log in to unmask]] on behalf of Benson, Teresa [[log in to unmask]]
Sent: Friday, January 30, 2015 11:59 AM
To: [log in to unmask]<mailto:[log in to unmask]>
Subject: Genetic tests and Predictive validity

I’ve just started reading the literature on genetic tests, and I notice how many of them tend to focus on predictive value—that is, if a certain test accurately predicts whether a patient will or won’t get a particular phenotype (condition), the authors suggest the test should be used.  But if we’re deciding whether to order the test in the first place, shouldn’t we be focused on sensitivity and specificity instead of PPV and NPV?  Predictive value is so heavily dependent on disease prevalence.  For example, if I want to get tested for a disease with a 2% prevalence in people like me, I could just flip a coin and, regardless of the outcome, my “Coin Flip Test” would show an NPV of 98%!  So what does NPV alone really tell me if I’m not also factoring out prevalence—which would be more easily done by simply looking at sensitivity and specificity?  Someone please tell me where my thinking has gone awry!
For a concrete example, look at MammaPrint, a test which reports binary results.  In addition to hazard ratios, study authors often tout statistically significant differences between the probabilities of recurrence-free survival in the MammaPrint-High Risk vs. MammaPrint-Low Risk groups (essentially the test’s predictive values).  In the RASTER study (N = 427), 97% of the patients with a “Low Risk” test result did not experience metastasis in the next 5 years.  Sounds great, right?  But when you look at sensitivity, you see that of the 33 patients in the study who did experience metastasis, only 23 were classified as “High Risk” by MammaPrint, for a 70% sensitivity.  If patients and clinicians are looking for a test to inform their decision about adjuvant chemotherapy for early stage breast cancer, wouldn’t the fact that the test missed 10 out of 33 cases be more important than the 97% NPV, an artifact of the extremely low 5-year prevalence of metastasis in this cohort (only 33 out of 427, or 7.7%)?
Drukker et al. A prospective evaluation of a breast cancer prognosis signature in the observational RASTER study. Int J Cancer 2013. 133(4):929-36. http://www.ncbi.nlm.nih.gov/pubmed/23371464
Retel et al. Prospective cost-effectiveness analysis of genomic profiling in breast cancer. Eur J Cancer 2013. 49:3773-9. http://www.ncbi.nlm.nih.gov/pubmed/23992641  (Provides actual true/false positive/negative results)
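[Editor's note] Teresa's point can be checked numerically: with the ~70% sensitivity she quotes, an NPV of about 97% falls out of the low prevalence almost automatically. A sketch (the specificity used here is an assumed value for illustration, not a figure from the RASTER paper):

```python
def npv(sens, spec, prev):
    """Negative predictive value from Bayes' theorem."""
    return spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)

sens = 23 / 33          # ~70%, from the RASTER figures quoted above
spec = 0.82             # assumed for illustration
for prev in (0.02, 0.077, 0.30):   # 0.077 is 33/427, the cohort's 5-year rate
    print(f"prevalence {prev:.1%}: NPV = {npv(sens, spec, prev):.2f}")
```

At the cohort's ~8% event rate the NPV comes out near 0.97 even though the test misses 3 cases in 10; at a 30% event rate the same sensitivity would give an NPV well under 0.90.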

Thanks so much!

Teresa Benson, MA, LP
Clinical Lead, Evidence-Based Medicine
McKesson Health Solutions
18211 Yorkshire Ave
Prior Lake, MN  55372
[log in to unmask]<mailto:[log in to unmask]>
Phone: 1-952-226-4033



________________________________
----------------------------------------- CONFIDENTIALITY NOTICE: This email and any attachments may contain confidential information that is protected by law and is for the sole use of the individuals or entities to which it is addressed. If you are not the intended recipient, please notify the sender by replying to this email and destroying all copies of the communication and attachments. Further use, disclosure, copying, distribution of, or reliance upon the contents of this email and attachments is strictly prohibited. To contact Albany Medical Center, or for a copy of our privacy practices, please visit us on the Internet at www.amc.edu<http://www.amc.edu>.

------------------------------

Date:    Sat, 31 Jan 2015 16:45:44 +0000
From:    "McCormack, James" <[log in to unmask]>
Subject: Re: Genetic tests and Predictive validity

Hi Everyone - here is a useful rough rule of thumb for likelihood ratios that a clinician can use; it puts numbers to disease probability, and you can see when an LR can basically rule a disease in or out.

The only caveat is that this is ONLY accurate if the pretest probability lies between 10% and 90%, so it doesn’t apply to low-prevalence (<10%) conditions and may not be all that applicable to this discussion.

An LR of 2 increases the absolute probability of disease FROM your pre-test probability by 15% - for every additional one-point increase in LR you add another 5% absolute change to your probability - so an LR of 3 increases your probability of disease by 15% plus 5%, a total increase of 20% over your pre-test probability.

Going in the other direction

An LR of 0.5 decreases the absolute probability of disease FROM your pre-test probability by 15% - for every additional 0.1 lowering in LR you add another 5% absolute change to your probability - so an LR of 0.3 decreases probability by 15% plus 2 X 5%, a total decrease of 25% from your pre-test probability.

The original idea for this came from this article http://onlinelibrary.wiley.com/doi/10.1046/j.1525-1497.2002.10750.x/full

and we published a discussion on probabilistic reasoning in primary care here http://goo.gl/Rnl2q2
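[Editor's note] The rule of thumb can be compared against the exact odds form of Bayes' theorem. A quick sketch (the piecewise approximation encodes the rule as stated above; between LRs of 0.5 and 2 no rule is given, so the sketch leaves the probability unchanged there):

```python
def exact_post(pre, lr):
    """Exact post-test probability: post-odds = pre-odds * LR."""
    post_odds = (pre / (1 - pre)) * lr
    return post_odds / (1 + post_odds)

def approx_post(pre, lr):
    """The rule of thumb (stated as valid for pretest probabilities of 10-90%)."""
    if lr >= 2:                  # LR 2 adds 15%; each further +1 adds 5%
        return pre + 0.15 + 0.05 * (lr - 2)
    if lr <= 0.5:                # LR 0.5 subtracts 15%; each further -0.1 subtracts 5%
        return pre - 0.15 - 0.05 * ((0.5 - lr) / 0.1)
    return pre                   # no rule stated between 0.5 and 2

for pre, lr in [(0.5, 3), (0.5, 0.3), (0.3, 2)]:
    print(f"pretest {pre:.0%}, LR {lr}: exact {exact_post(pre, lr):.2f}, "
          f"rule of thumb {approx_post(pre, lr):.2f}")
```

Within the stated 10-90% pretest range the approximation stays within a few percentage points of the exact Bayesian answer.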

Thanks

James



On Jan 31, 2015, at 4:31 AM, Brian Alper MD <[log in to unmask]<mailto:[log in to unmask]>> wrote:

Reporting diagnostic and prognostic parameters so that clinicians and policy makers understand the use of tests is challenging.  There are many possible parameters to report, and understanding varies, so no single measure is uniformly understood by all.

The most sophisticated may want likelihood ratios and understand how to apply them with respect to pretest likelihood.  But many want simpler yes/no information.  With the simpler approach, sensitivity and specificity are too often mistaken for rule-in and rule-out measures, and too often taken to represent the likelihood of an answer in practice (which they do not, as they take no account of the pretest likelihood).

For these reasons positive and negative predictive values appear to be the simplest yet clinically meaningful results to report.  However, they are only accurate if the pretest likelihood in practice is similar to that in the source from which they are reported.  It is possible to report PPV and NPV for different pretest likelihoods or different representative populations.

In determining whether a test is useful, the PPV and NPV in isolation are insufficient.  Only if the PPV or NPV is different enough from the pretest likelihood to cross a decision threshold (treatment threshold or testing threshold) does the test change action.  In the coin-flip example, with a pretest likelihood of 2% the coin flip yields a PPV of 2% and an NPV of 98% – the post-test results are no different from the pretest likelihood.
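[Editor's note] The coin-flip point corresponds to a test whose likelihood ratios are both 1. A tiny sketch:

```python
# A coin flip as a "test": sensitivity = specificity = 0.5, so both
# likelihood ratios equal 1 and the test carries no information.
prev = 0.02                       # pretest likelihood from the example
sens = spec = 0.5

ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)

# PPV equals the prevalence and NPV equals 1 - prevalence:
# neither result moves the probability, so no decision threshold is crossed.
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")
```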

Other parameters (accuracy, diagnostic odds ratio) that combine sensitivity and specificity into a single measure are easier to combine and report statistically, but I generally avoid them because they are less informative for clinical use – in most clinical situations sensitivity and specificity are not equally important, and a useful one-sided test (e.g. useful if positive, ignored if negative) becomes less clear when viewed in terms of overall accuracy.


But another consideration is that tests are sometimes used for “diagnostic” purposes – does the patient have a certain diagnosis or not? – and in these cases sensitivity, specificity, PPV*, NPV*, positive likelihood ratio and negative likelihood ratio (* with prevalence to put them into perspective) are clear.

Sometimes tests are used for prognostic purposes, and may be used as a continuous measure with different predictions at different test results.   In these cases presenting the expected prevalence (or probability or likelihood) at different test results may be easier to interpret than selecting a single cutoff and converting it to sensitivity/specificity/etc.


In general all of the comments above reflect diagnostic and prognostic testing, whether genetic or not.  If the test is genetic the results can usually be handled similarly if the clinical question is specific to the person being tested.   There may be some scenarios where familial implications (cross-generation interpretation) or combinatorial implications (multigene testing) make the overall reporting more challenging.

Brian S. Alper, MD, MSPH, FAAFP
Founder of DynaMed
Vice President of EBM Research and Development, Quality & Standards
dynamed.ebscohost.com<http://dynamed.ebscohost.com/>

From: Evidence based health (EBH) [mailto:[log in to unmask]] On Behalf Of Benson, Teresa
Sent: Friday, January 30, 2015 12:00 PM
To: [log in to unmask]<mailto:[log in to unmask]>
Subject: Genetic tests and Predictive validity


------------------------------

End of EVIDENCE-BASED-HEALTH Digest - 30 Jan 2015 to 31 Jan 2015 (#2015-25)
***************************************************************************
