EVIDENCE-BASED-HEALTH Archives
EVIDENCE-BASED-HEALTH@JISCMAIL.AC.UK, February 2015

Subject: Re: Genetic tests and Predictive validity
From: "Huw Llewelyn [hul2]" <[log in to unmask]>
Reply-To: Huw Llewelyn [hul2]
Date: Mon, 9 Feb 2015 11:18:58 +0000
Content-Type: text/plain

PS. During differential diagnosis, diagnostic exclusion and confirmation, and outcome prediction with or without treatment, a test has to be assessed for its usefulness when used in combination with others.

-----Original Message-----

From: [log in to unmask]

Date: Mon, 9 Feb 2015 11:09:28 

To: <[log in to unmask]>

Reply-To: [log in to unmask]

Subject: Re: Genetic tests and Predictive validity



I agree. The only time a test is interpreted alone is during screening, when a result lowers the probability that the 'target' diagnosis is present so far that no further action is needed, except perhaps to repeat the test at some future date.



If the screening test result is 'positive' then it is rarely pathognomonic on its own, and we have to consider its differential diagnoses etc. From this point on we have to consider all the patient's findings during #2 to #5 below.



-----Original Message-----

From: donald stanley <[log in to unmask]>

Date: Mon, 9 Feb 2015 05:41:52 

To: Huw Llewelyn [hul2]<[log in to unmask]>

Reply-To: <[log in to unmask]>

Subject: Re: Genetic tests and Predictive validity



The discussion is indeed interesting, if reductive.



But diagnosis cannot be reduced in the fashion of this thread.
Validation of the diagnosis requires taking into account all
features of the manifestation of disease. It is my opinion that we are
liable to delude ourselves with technical details when we focus too
intensely on methodology.



> Huw Llewelyn [hul2] <mailto:[log in to unmask]>

> February 9, 2015 at 05:05

> The predictive validity of a test has to be assessed for a number of 

> different situations:

>

> 1. For population screening.

> 2. For differential diagnosis

> 3. For diagnostic confirmation

> 4. For diagnostic exclusion

> 5. For predicting outcome with or without treatment (usually when a 

> diagnosis has been confirmed or is probable).

>

> The paper to which Teresa was referring addresses #5, but the
> discussion seems to be about the general issues around #1, hence the
> confusion. I explain some of these principles in the Oxford Handbook
> of Clinical Diagnosis, especially the final chapter.

>

> Huw

> ------------------------------------------------------------------------

> *From: *OWEN DEMPSEY <[log in to unmask]>

> *Sender: *"Evidence based health (EBH)" 

> <[log in to unmask]>

> *Date: *Sun, 8 Feb 2015 22:52:37 +0000

> *To: *<[log in to unmask]>

> *ReplyTo: *OWEN DEMPSEY <[log in to unmask]>

> *Subject: *Re: Genetic tests and Predictive validity

>

> I remain quite intrigued by Teresa's question, and now also by the 

> apparent paucity of response. I would be interested to know if my 

> thinking on this holds water or whether it is disastrously leaky!

> Just to summarise:

> Teresa said:

>

>     "Oh dear!  such a complicated topic.  Going back to the RASTER

>     study on MammaPrint, can you speak to the following clinician

>     rationales and comment?

>

>     · “I’m going to order a MammaPrint test for my patient, because if

>     it shows Low Risk, we can be 97% sure my patient won’t develop

>     metastasis and I can confidently recommend no chemotherapy.”

>

>     “I don’t think MammaPrint would be a good test for me to order for

>     my patient, because even if she is likely to develop metastasis,

>     the MammaPrint test would have a 30% chance of missing that and

>     showing a false-negative Low Risk result instead—potentially

>     misleading.”"

>

> My response was:

>

>     "The shift of the low-high risk cut off point to favour low risk

>     is odd.  It seems to favour (as in promote) specificity rather

>     than sensitivity.  This is odd because usually cut-offs have

>     tended to promote medicalisation, as in most screening tests.  But

>     to decide on any particular cut off might be presupposing that

>     there is a piece of knowledge, here called 'risk', that the test

>     can fully characterise or obtain. Maybe it can't. But maybe the

>     testers don't acknowledge that some things aren't necessarily

>     measurable and perhaps never will be.  It would be interesting to

>     know the confidence intervals for the findings of the study, it

>     sounds as though chance may play a role. “

>

>

> My further thoughts are:

>

> Given a population with a significant (possibly high) prevalence of
> the target condition (in this case, so-called high risk of metastases),
> high specificity at the expense of sensitivity can be misleading;
> high specificity mainly protects an already low-risk,
> low-prevalence population from false positives. But in the face of an
> important high-prevalence diagnosis (high risk of metastases here),
> low sensitivity will miss, in this example, 30% of those who
> might benefit from chemotherapy. This suggests that the cut-off
> between low and high risk should depend upon what is known of
> the prevalence of the risks, low and high, in the test population.
> The higher the prevalence of metastases, the more sensitive the
> test should be, and the less specific. The rationale is that the
> higher the prevalence of metastases, the more important it is not to
> miss them, simply because there are increasing numbers of them, and the
> more forgivable it is to be less specific and to over-diagnose those
> who would not benefit from chemotherapy, as there are proportionately
> fewer of them. This is complicated. The example given suggests that
> the cut-off inordinately favours specificity, and I wonder how the
> authors justified their choice of cut-off.
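Owen's trade-off can be made concrete with a small numerical sketch. The sensitivity (70%), specificity (95%) and cohort size used below are illustrative assumptions, not figures from the thread; the point is only that the absolute counts of missed cases and over-diagnoses shift as prevalence shifts:

```python
# Illustrative sketch: with test characteristics held fixed, the absolute
# numbers of missed cases (false negatives) and over-diagnoses (false
# positives) change with prevalence. Sn, Sp and n are assumed values.

def fn_fp(sens, spec, prev, n=1000):
    diseased = prev * n
    healthy = n - diseased
    fn = (1 - sens) * diseased   # cases the test misses
    fp = (1 - spec) * healthy    # healthy people flagged positive
    return fn, fp

for prev in (0.01, 0.10, 0.30):
    fn, fp = fn_fp(0.70, 0.95, prev)
    print(f"prevalence {prev:.0%}: {fn:.0f} missed and {fp:.0f} over-diagnosed per 1000")
```

At 1% prevalence the over-diagnoses dominate; at 30% the missed cases do, which is the asymmetry Owen describes.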

>

> Of course, for an individual the risk might be unknown, if only
> because studies have not been done to measure the risks of having or not
> having metastases without treatment, i.e. without a non-treatment
> arm. In such a case there can be no rationale for the cut-off.
> However, in the example given it looks as if the population levels of
> risk have been estimated (but how accurately?) and a decision made
> on a cut-off between low and high risk, but the cut-off has for some
> reason been inordinately skewed towards specificity. This could be
> because the test's cut-offs were worked out on a low-prevalence
> population but then inappropriately applied to a higher-prevalence
> population. Does this sound possible?

> best wishes

> Owen


>

> On 2 February 2015 at 16:34, OWEN DEMPSEY <[log in to unmask] 

> <mailto:[log in to unmask]>> wrote:

>

>     The shift of the low-high risk cut off point to favour low risk is

>     odd.  It seems to favour (as in promote) specificity rather than

>     sensitivity.  This is odd because usually cut-offs have tended to

>     promote medicalisation, as in most screening tests.  But to decide

>     on any particular cut off might be presupposing that there is a

>     piece of knowledge, here called 'risk', that the test can fully

>     characterise or obtain. Maybe it can't. But maybe the testers

>     don't acknowledge that some things aren't necessarily measurable

>     and perhaps never will be.  It would be interesting to know the

>     confidence intervals for the findings of the study, it sounds as

>     though chance may play a role.

>

>

>     On Monday, 2 February 2015, Benson, Teresa

>     <[log in to unmask] <mailto:[log in to unmask]>>

>     wrote:

>

>         Oh dear!  such a complicated topic.  Going back to the RASTER

>         study on MammaPrint, can you speak to the following clinician

>         rationales and comment?

>

>         ·“I’m going to order a MammaPrint test for my patient, because

>         if it shows Low Risk, we can be 97% sure my patient won’t

>         develop metastasis and I can confidently recommend no

>         chemotherapy.”

>

>         ·“I don’t think MammaPrint would be a good test for me to

>         order for my patient, because even if she is likely to develop

>         metastasis, the MammaPrint test would have a 30% chance of

>         missing that and showing a false-negative Low Risk result

>         instead—potentially misleading.”

>

>         *From:*Evidence based health (EBH)

>         [mailto:[log in to unmask]] *On Behalf Of

>         *Juan Acuna

>         *Sent:* Monday, February 02, 2015 9:01 AM

>         *To:* [log in to unmask]

>         *Subject:* Re: Genetic tests and Predictive validity

>

>         All,

>

>         Allow me first to apologize for the long email (for those of

>         you who like short emails with more “simple” answers).

>

>         Going back to the original question, I think most of the
>         answer lies in different considerations for two different
>         areas. First, "genetic testing" is far too broad to
>         attempt a single answer: multiple considerations must apply,
>         but they are not so relevant to this discussion, I think.
>         Second, as for Dx tests themselves,
>         whether for genetic testing or not, the same basic
>         considerations apply, based on the basic definitions
>         of (and the rationale used when calculating) the different
>         operative characteristics of a given test.

>

>         *Sensitivity and specificity* are the perspectives of the test
>         itself (not really that of the patient): one needs to know who
>         is diseased or not (or to have the gold standard known) to
>         calculate them. They define the test in isolation and are not
>         a very useful tool for clinicians, who are struggling with the
>         opposite: uncertainty about the patient's condition. Whether
>         they change or not with prevalence is an irrelevant
>         discussion. What is relevant is that one may use a test with
>         known Sn/Sp only after considering the possibility that the
>         patient has the disease.

>

>         The discussion about Sn/Sp changing with prevalence has been
>         there forever. Due to the way they are calculated, the study
>         to definitively prove whether Sn/Sp change with prevalence (beyond
>         mock calculations and simulations) is probably undoable (or at
>         least I have never seen it). However, if the patient belongs
>         to the same risk population as the one used in the study that
>         reports Sn/Sp for a test, we are in a good position. For
>         some tests (such as ultrasound for the Dx of birth defects),
>         unmeasurable factors that threaten reproducibility
>         (such as the expertise of the operator) make generalization of
>         Sn/Sp from a given study very difficult. Each operator must
>         know his/her own Sn/Sp. For tests with results reported as
>         numerical values that are associated with different
>         probabilities of the disease, cutoff points might be avoided
>         and the use of ROC reports might be far preferable.

>

>         *PPV/NPV* are the perspective of the clinician: a patient
>         needs to be tested (whether the patient is symptomatic or not
>         is irrelevant) under uncertainty about the presence of disease. I
>         am in strong disagreement with Dan: the fact that they change
>         with prevalence is not a good reason to suggest not
>         reporting them. Much less in the EBM context, where the
>         clinician is supposed to actually input the prevalence into the
>         consideration of both the prescription of a test and the
>         application of the results ("to what population does my
>         patient belong?"). Furthermore, changing PPV/NPV values with
>         changing prevalence dictate why some tests, albeit good (from
>         the perspective of Sn/Sp) and used in some populations, should
>         not be used (and turn into very ineffective tests) in
>         others of different risk.
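Juan's point can be illustrated with a direct application of Bayes' theorem: hold Sn/Sp fixed and watch PPV/NPV move with prevalence. The Sn = 70% and Sp = 90% values below are illustrative assumptions:

```python
# PPV and NPV from fixed Sn/Sp as prevalence varies (Bayes' theorem on a
# normalised 2x2 table). Sn/Sp here are assumed, illustrative values.

def ppv_npv(sens, spec, prev):
    tp = sens * prev
    fp = (1 - spec) * (1 - prev)
    tn = spec * (1 - prev)
    fn = (1 - sens) * prev
    return tp / (tp + fp), tn / (tn + fn)

for prev in (0.01, 0.10, 0.50):
    ppv, npv = ppv_npv(0.70, 0.90, prev)
    print(f"prevalence {prev:.0%}: PPV {ppv:.1%}, NPV {npv:.1%}")
```

The same test yields a PPV of about 7% at 1% prevalence and about 88% at 50% prevalence, which is exactly why a test that works in one risk population can be ineffective in another.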

>

>         Last, P/N LRs are derived from Sn/Sp, and thus share the same
>         limitations. However, if I understand that a test is indicated
>         given my patient's risk for a disease, understanding how the
>         pretest probability changes after a positive result is of
>         great value. Nevertheless, if I use the wrong test (given
>         inadequate matching of the probability of disease and the Sn/Sp of
>         the test), the use of LRs will prove extremely disappointing.
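The mechanics behind that last point can be sketched with likelihood ratios: LR+ = Sn/(1-Sp) and LR- = (1-Sn)/Sp multiply the pre-test odds. The Sn/Sp and pre-test probability below are illustrative assumptions:

```python
# Post-test probability via likelihood ratios: convert pre-test probability
# to odds, multiply by LR+ (or LR-), convert back. Values are illustrative.

def post_test_prob(pretest, sens, spec, positive=True):
    lr = sens / (1 - spec) if positive else (1 - sens) / spec
    odds = pretest / (1 - pretest) * lr
    return odds / (1 + odds)

sens, spec = 0.70, 0.90   # LR+ = 7.0, LR- is about 0.33
print(post_test_prob(0.10, sens, spec, positive=True))   # after a positive result
print(post_test_prob(0.10, sens, spec, positive=False))  # after a negative result
```

With a 10% pre-test probability, a positive result lifts the probability to about 44% and a negative result drops it to about 3.6%; with a badly mismatched pre-test probability the same LRs are far less informative, which is Juan's caveat.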

>

>         In summary, each indicator, as always, has its use, with
>         benefits and limitations. Not understanding in full the reach
>         of each of those operative characteristics of a test and the
>         diverse nature of my patients is the problem. All this is framed
>         by the need we sometimes have as physicians to oversimplify
>         concepts (almost demanding "cooking recipes" for everything)
>         when concepts cannot be simple. This is a great problem and
>         what true EBM addresses.

>

>         JA

>

>         *Juan M. Acuña M.D., MSc., FACOG.*

>

>         Chair, Department of Medical and Health Sciences Research

>

>         Associate Professor of OBGYN, Human and Molecular Genetics and

>         Clinical Epidemiology,

>

>         Herbert Wertheim College  of Medicine

>

>         Florida International University

>

>         11200 SW 8th Street

>

>         AHC1-445

>

>         Miami, FL 33199

>

>         Ph (305) 348-0676

>

>         *From:*Evidence based health (EBH)

>         [mailto:[log in to unmask]] *On Behalf Of

>         *Raatz Heike

>         *Sent:* Monday, February 02, 2015 5:08 AM

>         *To:* [log in to unmask]

>         *Subject:* Re: Genetic tests and Predictive validity

>

>         Dear all,

>

>         Question is though: is the genetic test you want to evaluate
>         actually used as a *diagnostic* test? From what I understand
>         of the case mentioned by Teresa, she is interested not in
>         whether a genetic test accurately recognizes whether you have
>         a certain genotype, but whether you will in the future develop
>         a certain phenotype. So you are not trying to find out
>         whether the patient currently suffers from a condition, but the
>         risk of developing a condition in the future. Now, unless you
>         have a dominant gene that will always lead to the expression
>         of a certain phenotype (like Huntington's), you need to consider
>         whether that genotype is just one of many factors that can
>         lead to a certain condition. For the examples mentioned, like
>         MammaPrint, prognostic modelling seems much more appropriate to
>         me than diagnostic accuracy, though ultimately you need RCTs to
>         prove that such tests improve patient-reported outcomes, and from
>         what I last saw, those don't exist.

>

>         Best wishes, Heike

>

>         Heike Raatz, MD, MSc

>

>         Basel Institute for Clinical Epidemiology and Biostatistics

>

>         Hebelstr. 10

>

>         4031 Basel

>

>         Tel.: +41 61 265 31 07

>

>         Fax: +41 61 265 31 09

>

>         E-Mail: [log in to unmask]

>

>         *From:*Evidence based health (EBH)

>         [mailto:[log in to unmask]] *On Behalf Of

>         *Majid Artus

>         *Sent:* Monday, February 02, 2015 10:04 AM

>         *To:* [log in to unmask]

>         *Subject:* Re: Genetic tests and Predictive validity

>

>         Dear All,

>

>         Is it, or is it not, correct that one should follow the
>         classic teaching that (loosely, and notwithstanding the false
>         partition here): from the patient's perspective,
>         sensitivity/specificity are what is relevant; and from the
>         clinician's perspective, PPV/NPV are what is relevant?

>

>         Also, it is, isn't it, crucial to consider the media take on
>         the outcomes of research, and how careful researchers need to be in
>         selecting the way they present the outcomes of their research?
>         MRI (NMR) in diagnosing autism is one example that springs to
>         mind: the high sensitivity was jumped on by the media,
>         presenting it as a very accurate test while missing the role of
>         varying prevalence in certain settings.

>

>         I find this discussion trail hugely thought provoking!

>

>         Best Regards

>

>         Majid

>

>         On 2 February 2015 at 00:19, Mark V Johnston <[log in to unmask]>

>         wrote:

>

>         At the same time, don't we need to know whether the patient
>         probably has or does not have the condition of interest?
>         Yes, prevalence and other factors affect PPV and NPV, but
>         in my opinion we need to move away from the oversimplified
>         notion that test interpretation depends on a single factor.

>

>         *From:* Evidence based health (EBH)

>         [mailto:[log in to unmask]] *On Behalf Of

>         *Mayer, Dan

>         *Sent:* Friday, January 30, 2015 8:08 PM

>         *To:* [log in to unmask]

>         *Subject:* Re: Genetic tests and Predictive validity

>

>         Hi Teresa,

>

>         You are absolutely correct.  This is why we should demand that

>         diagnostic studies ONLY present the results of Sensitivity,

>         Specificity and Likelihood ratios.

>

>         This issue has been a serious problem for many years and it is

>         about time that more people spoke up about it.  Also, journal

>         editors and peer reviewers should be up in arms against the

>         practice of reporting PPV and NPV.

>

>         Best wishes

>

>

>         Dan

>

>         ------------------------------------------------------------------------

>

>         *From:*Evidence based health (EBH)

>         [[log in to unmask]] on behalf of Benson,

>         Teresa [[log in to unmask]]

>         *Sent:* Friday, January 30, 2015 11:59 AM

>         *To:* [log in to unmask]

>         *Subject:* Genetic tests and Predictive validity

>

>         I’ve just started reading the literature on genetic tests, and

>         noticing how many of them tend to focus on predictive

>         value—that is, if a certain test accurately predicts whether a

>         patient will or won’t get a particular phenotype (condition),

>         the authors suggest the test should be used.  But if we’re

>         deciding whether to order the test in the first place,

>         shouldn’t we be focused on /sensitivity/ and /specificity/

>         instead, not PPV and NPV?  Predictive value is so heavily

>         dependent on disease prevalence.  For example, if I want to

>         get tested for a disease with a 2% prevalence in people like

>         me, I could just flip a coin and regardless of the outcome, my

>         “Coin Flip Test” would show an NPV of 98%!  So what does NPV

>         alone really tell me, if I’m not also factoring out

>         prevalence—which would be easier done by simply looking at

>         sensitivity and specificity?  Someone please tell me where my

>         thinking has gone awry!
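Teresa's coin-flip arithmetic checks out: a completely uninformative test (Sn = Sp = 50%) still reports a 98% NPV when prevalence is 2%, because NPV then simply tracks the complement of prevalence. A minimal check:

```python
# NPV of an uninformative "coin flip" test: Sn = Sp = 0.5, prevalence 2%.
# The 98% NPV is an artifact of the low prevalence, not of the test.

def npv(sens, spec, prev):
    tn = spec * (1 - prev)   # true negatives (normalised to population 1)
    fn = (1 - sens) * prev   # missed cases (normalised to population 1)
    return tn / (tn + fn)

print(npv(0.5, 0.5, 0.02))  # 0.98, i.e. exactly 1 - prevalence
```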

>

>         For a concrete example, look at MammaPrint, a test which

>         reports binary results.  In addition to hazard ratios, study

>         authors often tout statistically significant differences

>         between the probabilities of recurrence-free survival in the

>         MammaPrint-High Risk vs. MammaPrint-Low Risk groups

>         (essentially the test’s predictive values).  In the RASTER

>         study (N = 427), 97% of the patients with a “Low Risk” test

>         result did not experience metastasis in the next 5 years. 

>         Sounds great, right?  But when you look at sensitivity, you
>         see that of the 33 patients in the study who /did/ experience
>         metastasis, only 23 were classified as “High Risk” by
>         MammaPrint, for a 70% sensitivity.  If patients and clinicians
>         are looking for a test to inform their decision about adjuvant
>         chemotherapy for early-stage breast cancer, wouldn’t the fact
>         that the test missed 10 out of 33 cases be more important than
>         the 97% NPV, an artifact of the low 5-year rate of
>         metastasis in this cohort (only 33 out of 427, or 7.7%)?
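The RASTER counts quoted above (N = 427, 33 metastases, 23 of them flagged High Risk) give Teresa's figures directly; the counts are taken from the email rather than re-checked against the paper:

```python
# Sensitivity and 5-year event rate from the counts quoted in the thread.
n, events, flagged_high_risk = 427, 33, 23   # counts as stated in the email

sensitivity = flagged_high_risk / events     # fraction of metastases caught
event_rate = events / n                      # 5-year rate of metastasis
missed = events - flagged_high_risk          # false-negative "Low Risk" results
print(f"sensitivity {sensitivity:.1%}, event rate {event_rate:.1%}, missed {missed}")
```

Sensitivity comes out at about 70% (23/33) and the 5-year event rate at about 7.7% (33/427), with 10 missed cases.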

>

>         Drukker et al. A prospective evaluation of a breast cancer

>         prognosis signature in the observational RASTER study. Int J

>         Cancer 2013. 133(4):929-36.

>         http://www.ncbi.nlm.nih.gov/pubmed/23371464


>

>

>         Retel et al. Prospective cost-effectiveness analysis of

>         genomic profiling in breast cancer. Eur J Cancer 2013.

>         49:3773-9. http://www.ncbi.nlm.nih.gov/pubmed/23992641


>          (Provides actual true/false positive/negative results)

>

>         Thanks so much!

>

>         *Teresa Benson, MA, LP*

>

>         Clinical Lead, Evidence-Based Medicine

>

>         McKesson Health Solutions

>         18211 Yorkshire Ave

>

>         Prior Lake, MN  55372

>

>         [log in to unmask]

>         Phone: 1-952-226-4033 <tel:1-952-226-4033>

>


>

>         ------------------------------------------------------------------------

>


>

>

>

>

>

>         -- 

>

>         Please note my new email address: [log in to unmask]

>

>

>         Dr Majid Artus PhD

>         NIHR Clinical Lecturer in General Practice

>         Arthritis Research UK Primary Care

>

>         Centre

>

>         Research Institute for Primary Care & Health Sciences

>

>         Keele University

>         Staffordshire, ST5 5BG

>         Tel: 01782 734826

>         Fax: 01782 733911

>         http://www.keele.ac.uk/pchs/


>


>

>

>

>     -- 

>     Owen Dempsey MBBS MSc MRCGP RCGP cert II

>

>     07760 164420

>

>     GPwsi Substance Misuse Locala and Kirklees Lifeline Addiction Service

>

>

>

>

> -- 

> Owen Dempsey MBBS MSc MRCGP RCGP cert II

>

> 07760 164420

>

> GPwsi Substance Misuse Locala and Kirklees Lifeline Addiction Service



-- 

Dr. Donald E. Stanley

Associates in Pathology

500 West Neck Road

Nobleboro, Maine 04555

(207) 563-1560

mobile (207) 952-0908

[log in to unmask]

[log in to unmask]




