FSL Archives - FSL@JISCMAIL.AC.UK

Subject: AW: [FSL] AW: [FSL] featquery on a second level analysis
From: Andreas Bartsch <[log in to unmask]>
Reply-To: FSL - FMRIB's Software Library <[log in to unmask]>
Date: Wed, 6 May 2009 23:01:03 +0200
Content-Type: text/plain
Parts/Attachments: text/plain (625 lines)

Hi Stephane,

thanks for your response. Yes, PEATE converts to %BOLD signal change plots - featquery gives you the values (max, mean, whatever).
Anyway, I'm not sure how best to handle it at the higher level. Jeanette points out:
"• % change at higher level analyses.
– Not quite sure actually (sorry). I haven't been able to figure it out exactly.
– Good: It will use the same scale factor for all copes within that design. So, if it is at the second level of a 3 level design, all subjects are scaled the same.
– Bad: Even though it is using the same scale factor across copes, who knows what the scale factor is referring to? For example, running featquery on 16 copes of the second level analysis used a scale factor of 1.4183, when in reality if I wanted to scale according to an isolated 2 second event, I'd use 0.4073 (note I took into account the rules of contrast and design matrix construction mentioned below).
• A really bad thing to do: Running featquery on multiple first level designs from an event-related study and then combining results.
– Different scale factors may be used for each run, leading to completely incomparable things.
– Especially a problem if the design varies greatly across runs; for example, looking at correct trials per run."
How have you handled it? Did you just average your first level results (irrespective of the scaling)?
Cheers-
Andreas
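
For reference, the percent change under discussion is essentially 100 * scale_factor * cope / mean_func (assuming the formula described in Jeanette's writeup linked further down the thread), so swapping the pooled scale factor from her example (1.4183) for the hand-picked one (0.4073) shrinks every value by a factor of about 3.5. A minimal sketch of that arithmetic in Python; this is not FSL code, and the file names are placeholders:

import numpy as np
import nibabel as nib  # assumed available; any NIfTI reader would do

def percent_change(cope_path, mean_func_path, scale_factor):
    # scale a COPE image into percent BOLD signal change relative to the mean functional
    cope = nib.load(cope_path).get_fdata()
    mean_func = nib.load(mean_func_path).get_fdata()
    out = np.zeros_like(cope)
    # avoid dividing by zero outside the brain
    np.divide(100.0 * scale_factor * cope, mean_func, out=out, where=mean_func != 0)
    return out

# pct_pooled = percent_change("cope1.nii.gz", "mean_func.nii.gz", 1.4183)
# pct_custom = percent_change("cope1.nii.gz", "mean_func.nii.gz", 0.4073)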


________________________________________
From: FSL - FMRIB's Software Library [[log in to unmask]] on behalf of Stéphane Jacobs [[log in to unmask]]
Sent: Wednesday, 6 May 2009 15:25
To: [log in to unmask]
Subject: Re: [FSL] AW: [FSL] featquery on a second level analysis

Hi Andreas,

I don't know if there is any consensus on these issues, but the method
proposed by Jeanette in her tutorial is quite simple to follow. This is
what I've done following the discussion you're referring to.

As for PEATE, if I'm not missing anything, a slight nuance with respect
to featquery is that it computes % BOLD signal change using the
preprocessed "raw" data, not the PEs/COPEs created by the model.

Best,

Stéphane


Stéphane Jacobs - Chercheur post-doctorant / Post-doctoral researcher

Espace et Action - Inserm U864
16 avenue du Doyen Lépine
69676 Bron Cedex, France
Téléphone / Phone: (+33) (0)4-72-91-34-33
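
Which value featquery will actually pick up as the scale factor can be checked from the .feat directory itself. A rough sketch of such a check follows; this is a hypothetical helper, not part of FSL, and the file layout (a single averaged PPheight in design.lcon for a higher-level copeN.feat, one PPheight per contrast on the /PPheights line of a first-level design.con) is assumed from the featquery behaviour described further down this thread:

import os

def featquery_ppheights(feat_dir):
    # higher level: featquery is described as using the single averaged value in design.lcon
    lcon = os.path.join(feat_dir, "design.lcon")
    if os.path.exists(lcon):
        with open(lcon) as f:
            return [float(f.read().split()[0])]
    # first level: one PPheight per contrast, taken from the /PPheights line of design.con
    with open(os.path.join(feat_dir, "design.con")) as f:
        for line in f:
            if line.startswith("/PPheights"):
                return [float(v) for v in line.split()[1:]]
    return []

# e.g. featquery_ppheights("group.gfeat/cope1.feat")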



Andreas Bartsch wrote:
> Hi everybody,
>
> is there a consensus on how to extract %BOLD signal change plots for a second level group analysis based on a set of first-level feats? Jeanette's tutorial points out that running featquery on multiple first level designs from an event-related study and then combining the results is really bad, and that she has not figured out how to do it right at the higher level. PEATE seems no different (except that it makes the bad thing quite easy ;-) - no criticism intended!).
> Also, ours is actually a FreeSurfer higher level analysis on the fsaverage surface. Any ideas how best to get % BOLD changes for a given EV from that? It seems like mri_label2vol can convert from the surface into MNI152 volume space, and we could apply standard (volume) space coordinates and/or masks to our 1st level feats and then average, but then we run into the problems Jeanette indicates in her tutorial...
> Thanks + cheers-
> Andreas
>
> ________________________________
> From: FSL - FMRIB's Software Library [[log in to unmask]] on behalf of Jeanette Mumford [[log in to unmask]]
> Sent: Tuesday, 18 March 2008 00:40
> To: [log in to unmask]
> Subject: Re: [FSL] featquery on a second level analysis
>
> Hi again,
>
> I finally see what you're getting at, and I understand that if you entered the copes instead of the feats you might get a different ppheight in each case. For the same reason that different runs shouldn't have different weights, different copes shouldn't have different weights (unless you're fixing a contrast scaling problem... say if in one cope you used -1 -1 1 1 and you need to scale by .5 and the other cope is 1 -1 0 0 and doesn't need scaling). In that respect, if I'm remembering your model correctly, I think featquery will be using the same scale for both conditions. If it seems like featquery isn't going to give you the result you desire, I recommend scripting it up yourself; it is only 2 lines of code.
>
> Good luck!
> Jeanette
>
> On Mon, Mar 17, 2008 at 3:22 PM, Stephane Jacobs <[log in to unmask]<mailto:[log in to unmask]>> wrote:
> Hi Jeanette,
>
> Thanks a lot for helping me with this! First of all, I should say that I did not really expect featquery to be doing exactly the same thing on outputs of 1st and 2nd level analyses, precisely because I have read your writeup already. :) Thanks for this very helpful document!
>
> I guess my questions boil down to this: assume you have 2 conditions, represented by 2 contrasts at the 1st level (e.g. cond A > rest and cond B > rest).
>
> - if the conditions are run within the same run, when you run your cross-session 2nd level analysis with feat directories as inputs, you end up with cope1.feat and cope2.feat, corresponding to condition A and B, respectively. Cope1.feat contains a design.lcon file with an average PPheight value obtained by averaging all PPheight values obtained for condition A at the first level. Same goes for cope2.feat and condition B. When you run featquery on this 2nd level output, it will actually run separately for cope1.feat and cope2.feat, thus using different scaling factors for condition A and B. Right?
>
> - now if the conditions are run in different sessions, as in my case, and you want to combine them at the second level, you use cope images as inputs to the second level analysis. You then end up with a single cope1.feat directory containing cope images for both conditions, and a single design.lcon file that has only one average PPheight value, calculated by averaging all PPheight values obtained at the first level for condition A *and* condition B. When you run featquery on this 2nd level analysis, both condition A and B get scaled by the same factor.
>
> My question: is it normal that in these 2 cases conditions A and B get scaled either by different or by the same factor? Or should one correct for this in the second case by calculating the average PPheight value separately for each condition?
>
> Thanks again for your help!
>
> Stephane
>
> On Mon, 17 Mar 2008 11:16:09 -0700, Jeanette Mumford <[log in to unmask]<mailto:[log in to unmask]>> wrote:
>
>> Hi Stephane, I'll have a try answering your questions.
>>
>> On Fri, Mar 14, 2008 at 11:20 AM, Stephane Jacobs <[log in to unmask]<mailto:[log in to unmask]>> wrote:
>>
> Hi Steve,
>
> Sorry if this is a confusing discussion. I guess my question comes from the fact that I would expect featquery to do the same thing (if we ignore the transformations of a mask in standard space, for example) regardless of whether it's run on a 1st level or on a 2nd level output.
>
>> I agree %-change can seem a bit confusing, but hopefully this will clear some things up for you. Actually you should not expect first level featquery to produce exactly the same result as a second level featquery, *especially* if the pp heights vary greatly. If they do vary, which can happen perhaps because you're only looking at correct trials across subjects or the event length is dependent on something like reaction time, which will vary across subjects, it doesn't make any sense to give different subjects different weights. After all, if this were necessary we'd be having huge problems with the multilevel GLMs since we don't re-weight subjects by their ppheight (it wouldn't make sense). This is why in the writeup that I made (Steve gave the link in an earlier email http://mumford.bol.ucla.edu/perchange_guide.pdf ) I advise *not* to run featquery on the first level GLM copes, especially in an event related study. Running at the second level makes more sense, and if you don't like the interpretation of the featquery %-change (%-change using the average pp height), then the writeup gives instructions for doing it other ways.
>>
> The way I understand things, in my case, the PPheights are combined across all my conditions because I used cope images as inputs to my 2nd level analysis rather than feat directories. Whereas, if I had used feat directories as inputs to my 2nd level analysis, PPheights would have been estimated separately for my different conditions. And the latter is what happens when one runs Featquery on a 1st level output (one would probably not combine PPheights from different conditions when doing cross sessions and then cross subjects averaging). Am I right so far?
>
>> I think if you were to run an analysis both ways (feed copes, feed Feats) you'll find featquery will give you the same result. I realize you can't test this out on your data and I haven't tested it out myself, but it seems like that would be the case. Again, if your PPheights are different between runs/subjects then you wouldn't want to run featquery at the first level; actually the second level is more sensible since it uses the same scale factor, which is the average ppheight. For an event-related study design I think the most interpretable thing to do is choose your own scale factor as I describe in the writeup Steve referenced in his last email ( http://mumford.bol.ucla.edu/perchange_guide.pdf ).
>>
>> Hope that clears a few things up for you.
>>
>> Cheers,
>> Jeanette
>>
> If I am, the difference between the 2 cases (feat directories vs. cope images input into a 2nd level analysis) is indeed probably minor if PPheights are similar from one condition to the other, but I wanted to make sure I understood the logic behind this.
>
> Also, is it realistic to imagine that one could get fairly different PPheights between conditions, or should these always be pretty similar? In other words, would it always be alright to combine PPheights across different conditions?
>
> Thanks again for your time and help!
>
> Best,
>
> Stephane
>
> Steve Smith wrote:
>
>> Hi,
>>
>> On 26 Feb 2008, at 23:13, Stephane Jacobs wrote:
>>
>>> Hi Steve,
>>>
>>> Thanks again for your answer, and sorry for resurrecting a somewhat old discussion... I'm still confused about this, and unless I have a wrong idea about what are the "relevant contrasts" at the first level, I don't see how featquery could get the right PPheight values...
>>>
>>> So, to make sure I'm clear, here's my precise case. At the first level, I have for each subject two different types of run, each containing 2 of my four conditions:
>>>
>>> Run1: conditions A1 and B2
>>> Run2: conditions A2 and B1
>>>
>>> At the first level, I modeled each condition versus modeled rest periods. Therefore, the design.con file for each feat directory contains the PPheight values for condition A1 and B2 or A2 and B1, depending on the type of run.
>>>
>>> At the second level, I wanted to include all 4 conditions, so I used cope images instead of feat directories as inputs, and included cope images corresponding to conditions A1, A2, B1 and B2. In my understanding, what happens then is that the PPheight value contained in the design.lcon file at the output of the 2nd level analysis is the average of the first level PPheight values across all inputs, that is in my case, all four conditions.
>>>
>>> I understand that this should be fine for FEAT, but what happens when one runs featquery on the output of the second level analysis? From the code I've read, I don't see how it could get anything else but this average PPheight value across different conditions, which I don't think is what we want... Especially if different conditions give very different PPheight values.
>>>
>>
>> I'm not sure exactly what you're saying is problematic with this, or even how this is different from what you are doing below..... In general, if you have very different ppheights for the different first-level conditions, then there's probably a problem in combining their resulting PEs/COPEs, surely? Or are you talking about the fine-detail tuning of such issues, such as is addressed in Jeanette's excellent techrep at http://mumford.bol.ucla.edu/perchange_guide.pdf ? As far as I can see, both FEAT and Featquery are in general doing the right thing for most situations......?
>>
>> Cheers.
>>
>>> So, what I've done is to go back to the first level output and the design.con files, computed the average PPheight values for each condition separately (I had several occurrences of each run type), and had featquery use these values instead of the average PPheight value across conditions.
>>>
>>> Does this make sense, or am I even more confused than I thought? :-)
>>>
>>> Thanks again for your help and time!
>>>
>>> Best,
>>>
>>> Stephane
>>>
>>> On Fri, 1 Feb 2008 08:51:30 +0000, Steve Smith <[log in to unmask]<mailto:[log in to unmask]>> wrote:
>>>
>>>> Hi,
>>>>
>>>> Reasonable question - however, I still believe that this is fine: the higher-level .lcon has already (at the start of the higher-level FEAT run) been created to contain the average of all the relevant first-level contrasts' PPheight values - this averaging has already been done and Featquery just reads this single value in.
>>>>
>>>> Hope this answers the query?
>>>>
>>>> Cheers.
>>>>
>>>> On 29 Jan 2008, at 02:00, Stephane Jacobs wrote:
>>>>
>>>>> Hi Steve,
>>>>>
>>>>> Steve Smith wrote:
>>>>>
>>>>>>> My question was: is this correct in this case to go back to the first level design.con file, and selectively average the ppheight value for each contrast separately, and have featquery use those instead?
>>>>>>>
>>>>>> I think what FEAT/Featquery are doing should be right - it should be averaging the effective ppheight across all the relevant first-level contrasts, and as far as I can see this should be correct.
>>>>>>
>>>>>> Cheers, Steve.
>>>>>>
>>>>> I'm probably missing something here. I did not check into details for FEAT, but I have looked into the Featquery script, and here is what I understood: when calculating the scaling factor, it determines whether it's a first or higher level analysis just by looking at whether a design.lcon file exists within the feat directory. If not (first level), it uses the ppheight from the design.con file, which specifies a ppheight value for each EV. If it does find design.lcon (higher level), however, then it uses the single value contained in this file to scale all the copes contained in the feat directory - which, in my case, come from different 1st level copes. If this is right, then I don't see how this ppheight is related only to the relevant 1st level contrasts...
>>>>>
>>>>> Could you tell me where I'm getting lost here?
>>>>>
>>>>> Thanks again
>>>>>
>>>>> Best
>>>>>
>>>>> Stephane
>>>>>
>>>>>>> Thanks again for your time!
>>>>>>>
>>>>>>> Stephane
>>>>>>>
>>>>>>> Steve Smith wrote:
>>>>>>>
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> Yes, this should work fine; FEAT should extract the correct ppheight values from the correct contrast specification files, according to the copes that you have selected. It should appropriately estimate the right average ppheight for each of your second-level contrasts.
>>>>>>>>
>>>>>>>> Cheers, Steve.
>>>>>>>>
>>>>>>>> On 15 Jan 2008, at 19:39, Stephane Jacobs wrote:
>>>>>>>>
>>>>>>>>> Hello,
>>>>>>>>>
>>>>>>>>> I had a question about the way the model peak-to-peak height was computed for a second level analysis whose inputs were cope images rather than feat directories, and about running featquery on it. I had forgotten to mention that I am interested in percent signal change for contrasts (condition vs. modeled rest), which explains why I'm looking at the ppheight values in design.lcon. Also, I'm looking at contrasts that have been set at the first level already, so I have ppheight values for each of those and for each first level run.
>>>>>>>>>
>>>>>>>>> Can anybody tell me whether I'm doing the right thing here?
>>>>>>>>>
>>>>>>>>> Thanks a lot,
>>>>>>>>>
>>>>>>>>> Stephane
>>>>>>>>>
>>>>>>>>> Stephane Jacobs wrote:
>>>>>>>>>
>>>>>>>>>> Hello,
>>>>>>>>>>
>>>>>>>>>> I would like to run featquery on a second level analysis (cross session - within subject level) to compute percent change of COPEs within a given ROI.
>>>>>>>>>>
>>>>>>>>>> I understand that featquery is using the average ppheight found in the design.lcon file in the copeX.feat directory as a scale factor to compute percent change. However, I am wondering whether it is still correct to do so in my case.
>>>>>>>>>>
>>>>>>>>>> Indeed, I have fed cope images into my second level analysis, instead of .feat directories, as I needed to contrast EVs coming from different runs. Then, I end up with one single cope1.feat directory at the output of my second level analysis, which contains as many cope images as I have set contrasts at the 2nd level (4), rather than getting cope1.feat..cope4.feat as when you feed feat directories containing all the same EVs.
>>>>>>>>>>
>>>>>>>>>> Therefore, it seems that the value contained in design.lcon is the average of the ppheight across all my contrasts. I wonder if I should rather compute an average ppheight for each of my 2nd level contrasts separately, to be more accurate?
>>>>>>>>>>
>>>>>>>>>> Thanks in advance for all your thoughts and advice,
>>>>>>>>>>
>>>>>>>>>> Best,
>>>>>>>>>>
>>>>>>>>>> Stephane
>>>>>>>>>>
>> ---------------------------------------------------------------------------
>> Stephen M. Smith, Professor of Biomedical Engineering
>> Associate Director, Oxford University FMRIB Centre
>> FMRIB, JR Hospital, Headington, Oxford OX3 9DU, UK
>> +44 (0) 1865 222726 (fax 222717)
>> [log in to unmask]<mailto:[log in to unmask]>  http://www.fmrib.ox.ac.uk/~steve<http://www.fmrib.ox.ac.uk/%7Esteve>
>> ---------------------------------------------------------------------------
>
> :::::::::::::::::::::::::::::::::::::::::::::::::::
> Stephane Jacobs
> Postdoctoral Research Associate
> Human Neuroimaging and Transcranial Stimulation Lab
> Department of Psychology
> 1227 University of Oregon
> Eugene, OR 97403 - USA
> Email: [log in to unmask]<mailto:[log in to unmask]>
> Tel: (1) 541-346-4184
> :::::::::::::::::::::::::::::::::::::::::::::::::::
>
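
The per-condition workaround Stephane describes above - averaging the first-level PPheights for each condition separately rather than relying on the single pooled value in the higher-level design.lcon - could be scripted roughly as follows. Again only a sketch: the run names and contrast indices are placeholders, and the /PPheights line format of design.con is assumed.

import os

def ppheights_from_con(con_path):
    # one PPheight per first-level contrast, read from the /PPheights line
    with open(con_path) as f:
        for line in f:
            if line.startswith("/PPheights"):
                return [float(v) for v in line.split()[1:]]
    raise ValueError("no /PPheights line in " + con_path)

# map each condition to the (first-level feat directory, contrast index) pairs it comes from;
# these names and indices are purely illustrative
condition_contrasts = {
    "A": [("run1.feat", 0), ("run2.feat", 0)],
    "B": [("run1.feat", 1), ("run2.feat", 1)],
}

per_condition_scale = {}
for cond, entries in condition_contrasts.items():
    values = [ppheights_from_con(os.path.join(d, "design.con"))[i] for d, i in entries]
    per_condition_scale[cond] = sum(values) / len(values)

# these per-condition averages can then replace the pooled design.lcon value as the
# scale factor in the percent-change formula sketched near the top of this message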
