Hi Andreas,

Your quote from Jeanette's tutorial points out that she is unsure what 
*featquery* does when run on a second level analysis from an 
event-related design. Instead, she gives directions for computing % signal 
change yourself in the last part of her document (section 4 - How to 
calculate percent signal change). This is what I've used for my study, 
and it is quite simple to script.

I've chosen to compute the % signal change for each first level cope. 
For this you need:

1- to compute your scaling factor, which depends on the baseline-to-max 
range of an isolated event of a duration that is typical in your design. 
If you're lucky you'll find it in the table at the end of the tutorial; 
otherwise you'll have to run a simulation yourself. Pick the value that 
corresponds best to the duration of your events (the mean duration if it 
varies in your design) and that was obtained for the HRF you used to 
convolve your data. You also need to calculate what Jeanette calls your 
"contrast fix", which is the factor by which you need to divide your 
contrast weights to normalize them so that the sum of the positive terms 
equals 1 and the sum of the negative terms equals -1 (e.g. for a 
contrast [1 1 -1 -1], the contrast fix equals 2).
Then, to get your scaling factor:

        scaling factor = 100 * (baseline-to-max range) / (contrast fix)

2- to create a map of % change masked by your ROI. If you're using a 
cope image as an input:

        avwmaths cope_img -mul scale_factor -div mean_func -mul mask_image output_image

3- to calculate the mean % change across all voxels of your ROI:

        avwstats -M output_image
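
Put together, here is a minimal sketch of steps 1-3 for a single 
first-level cope (purely illustrative: the directory, file and variable 
names and the baseline-to-max value are placeholders you'd replace with 
your own, and the commands use the old avwmaths/avwstats names, which 
recent FSL versions call fslmaths/fslstats):

        #!/bin/sh
        featdir=subject01_run01.feat   # a first-level feat directory (placeholder name)
        cope=$featdir/stats/cope1      # cope image for the condition of interest
        mask=roi_mask                  # ROI mask, in the same space as the cope
        b2m=0.2                        # baseline-to-max range, from the tutorial table or your own simulation
        cfix=2                         # contrast fix, e.g. 2 for a [1 1 -1 -1] contrast

        # 1- scaling factor = 100 * (baseline-to-max range) / (contrast fix)
        scale=`echo "100 * $b2m / $cfix" | bc -l`

        # 2- % change map, masked by the ROI
        avwmaths $cope -mul $scale -div $featdir/mean_func -mul $mask pctchange_roi

        # 3- mean % change across the voxels of the ROI
        avwstats -M pctchange_roi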


This gives you a mean % change for your ROI for each cope and each 
subject. Then, you just have to calculate the mean for each subject and 
each condition (if you have several sessions, and therefore several 
copes for the same condition within each subject), and finally the group 
average and s.e.m. ...
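
That last averaging step is also easy to script; here is a rough sketch, 
again purely illustrative (it assumes you have written the per-session 
values into text files named pctchange_<subject>_<condition>.txt, one 
value per line, which is just a naming convention made up for the example):

        for cond in A B; do
            for subj in subj01 subj02 subj03; do
                # mean across sessions for this subject and condition
                awk '{s+=$1; n++} END{print s/n}' pctchange_${subj}_${cond}.txt
            done > group_${cond}.txt
            # group mean and s.e.m. across subjects
            awk '{s+=$1; ss+=$1*$1; n++} END{m=s/n; print m, sqrt((ss/n-m*m)/(n-1))}' group_${cond}.txt
        done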

Cheers,

Stéphane



Stéphane Jacobs - Chercheur post-doctorant / Post-doctoral researcher

Espace et Action - Inserm U864
16 avenue du Doyen Lépine
69676 Bron Cedex, France
Téléphone / Phone: (+33) (0)4-72-91-34-33



Andreas Bartsch wrote:
> Hi Stephane,
>
> thanks for your response. Yes, PEATE converts to %BOLD signal change plots - featquery gives you the values (max, mean, whatever).
> Anyway, I'm not sure about how to best handle it at the higher level. Jeanette points out:
> "• % change at higher level analyses.
> – Not quite sure actually (sorry). I haven’t been able to figure it out exactly.
> – Good: It will use the same scale factor for all copes within that design. So, if it
> is at the second level of a 3 level design, all subjects are scaled the same.
> – Bad: Even though it is using the same scale factor across copes, who knows what
> the scale factor is referring to? For example, running featquery on 16 copes of
> the second level analysis used a scale factor of 1.4183, when in reality if I wanted
> to scale according to an isolated 2 second event, I’d use 0.4073 (note I took into
> account the rules of contrast and design matrix construction mentioned below).
> • A really bad thing to do: Running featquery on multiple first level designs from
> an event related and then combining results.
> – Different scale factors may be used for each run, leading to completely incomparable things.
> – Especially a problem if the design varies greatly across runs; for example, looking
> at correct trials per run."
> How have you handled it? Did you just average your first level results (irrespective of the scaling)?
> Cheers-
> Andreas
>
>
> ________________________________________
> From: FSL - FMRIB's Software Library [[log in to unmask]] on behalf of Stéphane Jacobs [[log in to unmask]]
> Sent: Wednesday, 6 May 2009 15:25
> To: [log in to unmask]
> Subject: Re: [FSL] AW: [FSL] featquery on a second level analysis
>
> Hi Andreas,
>
> I don't know if there is any consensus on these issues, but the method
> proposed by Jeanette in her tutorial is quite simple to follow. This is
> what I've done following the discussion you're referring to.
>
> As for PEATE, if I'm not missing anything, a slight nuance with respect
> to featquery is that it computes % BOLD signal change using the
> preprocessed "raw" data, not the PEs/COPEs created by the model.
>
> Best,
>
> Stéphane
>
>
> Stéphane Jacobs - Chercheur post-doctorant / Post-doctoral researcher
>
> Espace et Action - Inserm U864
> 16 avenue du Doyen Lépine
> 69676 Bron Cedex, France
> Téléphone / Phone: (+33) (0)4-72-91-34-33
>
>
>
> Andreas Bartsch wrote:
>   
>> Hi everybody,
>>
>> is there a consensus on how to extract %BOLD signal change plots for a second level group analysis based on a set of first-level feats? Jeanette's tutorial points out that running featquery on multiple first level designs from an event-related design and then combining the results is really bad, and that she has not figured out how to do it right at the higher level. PEATE seems no different (except that it makes the bad quite easy ;) - no criticism intended!).
>> Also, ours is actually a FreeSurfer higher level analysis on the fsaverage surface. Any ideas on how best to get % BOLD changes for a given EV from that? It seems like mri_label2vol can convert from the surface into MNI152 volume space, so we could apply standard (volume) space coordinates and/or masks to our 1st level feats and then average, but that runs into the problems Jeanette indicates in her tutorial...
>> Thanks + cheers-
>> Andreas
>>
>> ________________________________
>> From: FSL - FMRIB's Software Library [[log in to unmask]] on behalf of Jeanette Mumford [[log in to unmask]]
>> Sent: Tuesday, 18 March 2008 00:40
>> To: [log in to unmask]
>> Subject: Re: [FSL] featquery on a second level analysis
>>
>> Hi again,
>>
>> I finally see what you're getting at, and I understand that if you entered the copes instead of the feats you might get a different ppheight in each case.  For the same reason that different runs shouldn't have different weights, different copes shouldn't have different weights (unless you're fixing a contrast scaling problem... say if in one cope you used -1 -1 1 1 and you need to scale by .5, and the other cope is 1 -1 0 0 and doesn't need scaling).  In that respect, if I'm remembering your model correctly, I think featquery will be using the same scale for both conditions.  If it seems like featquery isn't going to give you the result you desire, I recommend scripting it up yourself; it is only 2 lines of code.
>>
>> Good luck!
>> Jeanette
>>
>> On Mon, Mar 17, 2008 at 3:22 PM, Stephane Jacobs <[log in to unmask]<mailto:[log in to unmask]>> wrote:
>> Hi Jeannette,
>>
>> Thanks a lot for helping me with this! First of all, I should say that I did not really expect featquery to be doing exactly the same thing on outputs of 1st and 2nd level analyses, precisely because I have read you writeup already. :) Thanks for this very helpful document!
>>
>> I guess my questions boil down to this: assume you have 2 conditions, represented by 2 contrasts at the 1st level (e.g.cond A > rest and cond B > rest).
>>
>> - if the conditions are run within the same run, when you run your cross-session 2nd level analysis with feat directories as inputs, you end up with cope1.feat and cope2.feat, corresponding to condition A and B, respectively. Cope1.feat contains a design.lcon file with an average PPheight value obtained by averaging all PPheight values obtained for condition A at the first level. Same goes for cope2.feat and condition B. When you run featquery on this 2nd level output, it will actually run separately for cope1.feat and cope2.feat, thus using different scaling factors for condition A and B. Right?
>>
>> - now if the conditions are run in different sessions, as in my case, and you want to combine them at the second level, you use cope images as inputs to the second level analysis. You then end up with a single cope1.feat directory containing cope images for both conditions, and a single design.lcon file that has only one average PPheight value, calculated by averaging all PPheight values obtained at the first level for condition A *and* condition B. When you run featquery on this 2nd level analysis, both condition A and B get scaled by the same factor.
>>
>> My question: is it normal that in these 2 cases conditions A and B get scaled either by different or by the same factor? Or should one correct for this in the second case by calculating the average PPheight value separately for each condition?
>>
>> Thanks again for your help!
>>
>> Stephane
>>
>> On Mon, 17 Mar 2008 11:16:09 -0700, Jeanette Mumford <[log in to unmask]<mailto:[log in to unmask]>> wrote:
>>
>>     
>>> Hi
>>>
>>>       
>> Stephane, I'll have a try answering your questions.
>>
>>     
>>> On Fri, Mar 14, 2008 at 11:20 AM, Stephane Jacobs < [log in to unmask]<mailto:[log in to unmask]> > wrote:
>>>
>>>
>>>       
>> Hi Steve,
>>
>>
>>
>> Sorry if this is a confusing discussion. I guess my question comes from
>>
>> the fact that I would expect Feat query to do the same thing (if  we
>>
>> ignore the transformations of a mask in standard space, for example)
>>
>> regardless of whether it's run on a 1st level or on a 2nd level output.
>>
>>     
>>> I agree %-change can seem a bit confusing, but hopefully this will clear some things up for you.  Actually you should not prefer first level featquery to produce exactly the same result as a second level featquery *especially* if the pp heights vary greatly.  If they do vary, which can happen perhaps because you're only  looking at correct trials across subjects or the event length is dependent on something like reaction time, which will vary across subjects, it doesn't make any sense to give different subjects different weights.  After all, if this were necessary we'd be having huge problems with the multilevel GLMs since we don't re-weight subjects by their ppheight (it wouldn't make sense).  This is why in the writeup that I made (Steve gave the link in an earlier email  http://mumford.bol.ucla.edu/perchange_guide.pdf ) I advise *not* to run featquery on the first level GLM copes, especially in an event related study.  Running at the second level makes more sense and if you don't like the interpretation of the featquery %-change (%-change using the average pp height), then the writeup gives instructions for doing it other ways.
>>>
>>>
>>>       
>>     
>>>       
>> The way I understand things, in my case, the PPheights are combined
>>
>> across all my conditions because I used cope images as inputs to my 2nd
>>
>> level analysis rather than feat directories. Whereas, if I had used feat
>>
>> directories as inputs to my 2nd level analysis, PPheights would have
>>
>> been estimated separately for my different conditions. And the latter is
>>
>> what happens when one runs Featquery on a 1st level output (one would
>>
>> probably  not combine PPheights from different conditions when doing
>>
>> cross sessions and then cross subjects averaging). Am I right so far?
>>
>>
>>
>>     
>>> I think if you were to run an analysis both ways (feed copes, feed Feats) you'll find featquery will give you the same result.  I realize you can't test this out on your data and I haven't tested it out myself, but it seems like that would be the case.  Again, if your PPheights are different between runs/subjects then you wouldn't want to run featquery at the first level; actually the second level is more sensible since it uses the same scale factor, which is the average ppheight.  For an event-related study design I think the most interpretable thing to do is choose your own scale factor as I describe in the writeup Steve referenced in his last email ( http://mumford.bol.ucla.edu/perchange_guide.pdf ).
>>>
>>>
>>>       
>>     
>>> Hope that clears a few things up for you.
>>>
>>> Cheers,
>>> Jeanette
>>>
>>>
>>>
>>>       
>> If I am, the difference between the 2 cases (feat directories vs. cope
>>
>> images input into a 2nd level analysis) is indeed probably minor if
>>
>> PPheights are similar from one condition to the other, but I wanted to
>>
>> make sure I understood the logic behind this.
>>
>> Also, is it realistic to imagine that one could get fairly different
>>
>> PPheights between conditions, or should these always be pretty similar?
>>
>> in other words, would this always be alright to combine PPheights across
>>
>> different conditions?
>>
>>
>>
>> Thanks again for your time and help!
>>
>>
>>
>> Best,
>>
>>
>>
>>     
>>>
>>>
>>>       
>> Stephane
>>
>>
>>
>> Steve Smith wrote:
>>
>>     
>>> Hi,
>>>
>>>
>>>
>>> On 26 Feb 2008, at 23:13, Stephane Jacobs wrote:
>>>
>>>
>>>
>>>
>>>       
>>>> Hi Steve,
>>>>
>>>> Thanks again for your answer, and sorry for  resurrecting a somewhat
>>>>
>>>> old discussion... I'm still confused about this, and unless I have a
>>>>
>>>> wrong idea about what are the "relevant contrasts" at the first
>>>>
>>>> level, I don't see how featquery could get the right PPheight values...
>>>>
>>>> So, to make sure I'm clear, here's my precise case. At the first
>>>>
>>>> level, I have for each subject two different types of run, each
>>>>
>>>> containing 2 of my four conditions:
>>>>
>>>> Run1: conditions A1 and B2
>>>>
>>>> Run2: conditions A2 and B1
>>>>
>>>> At the first level, I modeled each condition versus modeled rest
>>>>
>>>> periods. Therefore, the design.con file for each feat directory
>>>>
>>>> contains the PPheight values for condition A1 and B2 or A2 and B1,
>>>>
>>>> depending on the type of run.
>>>>
>>>> At the second level, I wanted to include all 4 conditions, so I used
>>>>
>>>> cope images instead of feat directories as  inputs, and included cope
>>>>
>>>> images corresponding to conditions A1, A2, B1 and B2. In my
>>>>
>>>> understanding, what happens then is that the PPheight value contained
>>>>
>>>> in the design.lcon file at the output of the 2nd level analysis is
>>>>
>>>> the average of the first level PPheight values across all inputs,
>>>>
>>>> that is in my case, all four conditions.
>>>>
>>>> I understand that this should be fine for FEAT, but what happens when
>>>>
>>>> one  runs featquery on the output of the second level analysis? From
>>>>
>>>> the code I've read, I don't see how it could get anything else but
>>>>
>>>> this average PPheight value across different conditions, which I
>>>>
>>>> don't think is what we want... Especially if different conditions
>>>>
>>>> give very different PPheight values.
>>>>
>>>>         
>>> I'm not sure exactly what you're saying is problematic with this, or
>>>
>>> even how this is different from what you are  doing below..... in
>>>
>>> general if you have very different ppheights for the different
>>>
>>> first-level conditions, then there's probably a problem in combining
>>>
>>> their resulting PEs/COPEs, surely? Or are you talking about the
>>>
>>> fine-detail tuning of such issues, such as is addressed in Jeanette's
>>>
>>> excellent techrep at  http://mumford.bol.ucla.edu/perchange_guide.pdf
>>>
>>> ?   As far as I  can see, both FEAT and Featquery are in general doing
>>>
>>> the right thing for most situations......?
>>>
>>>
>>>
>>> Cheers.
>>>
>>>
>>>
>>>
>>>
>>>
>>>       
>>>> So, what I've done is to go back to the first level output and the
>>>>
>>>> design.con files, computed the average PPheight values for each
>>>>
>>>> condition separately (I had several occurrences of each run type),
>>>>
>>>> and had featquery use these values instead of the average PPheight
>>>>
>>>>  value across conditions.
>>>>
>>>> Does this make sense, or am I even more confused than I thought? :-)
>>>>
>>>> Thanks again for your help and time!
>>>>
>>>> Best,
>>>>
>>>> Stephane
>>>>
>>>> On Fri, 1 Feb 2008 08:51:30 +0000, Steve Smith < [log in to unmask]<mailto:[log in to unmask]> >
>>>>
>>>> wrote:
>>>>
>>>>         
>>>>> Hi,
>>>>>
>>>>> Reasonable question - however,  I still believe that this is fine: the
>>>>>
>>>>> higher-level .lcon has already (at the start of the higher-level FEAT
>>>>>
>>>>> run) been created to contain the average of all the relevant first-
>>>>>
>>>>> level contrasts' PPheight values - this averaging has already been
>>>>>
>>>>> done and Featquery just reads this single value in.
>>>>>
>>>>> Hope this answers the query?
>>>>>
>>>>> Cheers.
>>>>>
>>>>> On 29 Jan 2008, at 02:00, Stephane Jacobs wrote:
>>>>>
>>>>>           
>>>>>> Hi Steve,
>>>>>>
>>>>>> Steve Smith wrote:
>>>>>>
>>>>>>             
>>>>>>>> My question was: is this correct in this case to go back to the
>>>>>>>>
>>>>>>>> first
>>>>>>>>
>>>>>>>> level design.con file, and selectively average the ppheight value
>>>>>>>>
>>>>>>>> for
>>>>>>>>
>>>>>>>> each contrast separately, and have featquery use those  instead?
>>>>>>>>
>>>>>>>>                 
>>>>>>> I think what FEAT/Featquery are doing should be right - it should be
>>>>>>>
>>>>>>> averaging the effective ppheight across all the relevant first-level
>>>>>>>
>>>>>>> contrasts, and as far as I can see this should be correct.
>>>>>>>
>>>>>>> Cheers, Steve.
>>>>>>>
>>>>>>>               
>>>>>> I'm probably missing something here. I did not check into details for
>>>>>>
>>>>>>  FEAT, but I have looked into the Featquery script, and here is what I
>>>>>>
>>>>>> understood: when calculating the scaling factor, it determines whether
>>>>>>
>>>>>> it's a first or higher level analysis just by looking whether a
>>>>>>
>>>>>> design.lcon file exists within the feat directory. If not (first
>>>>>>
>>>>>> level),
>>>>>>
>>>>>> it uses the ppheight from the design.con file, which specifies a
>>>>>>
>>>>>> ppheight value for each EV. If it does  find design.lcon (higher
>>>>>>
>>>>>> level),
>>>>>>
>>>>>> however, then it uses the single value contained in this file to scale
>>>>>>
>>>>>> all the copes contained in the feat directory - which, in my case,
>>>>>>
>>>>>> come
>>>>>>
>>>>>> from different 1st level copes. If this is right, then I don't see how
>>>>>>
>>>>>> this ppheight is related only to the relevant 1st level contrasts...
>>>>>>
>>>>>> Could you tell me where  I'm getting lost here?
>>>>>>
>>>>>> Thanks again
>>>>>>
>>>>>> Best
>>>>>>
>>>>>> Stephane
>>>>>>
>>>>>>             
>>>>>>>> Thanks again for your time!
>>>>>>>>
>>>>>>>> Stephane
>>>>>>>>
>>>>>>>> Steve Smith wrote:
>>>>>>>>
>>>>>>>>                 
>>>>>>>>> Hi,
>>>>>>>>>
>>>>>>>>> Yes, this should work fine; FEAT should extract the correct
>>>>>>>>>
>>>>>>>>> ppheight
>>>>>>>>>
>>>>>>>>> values from the correct contrast specification files, according
>>>>>>>>>
>>>>>>>>> to the
>>>>>>>>>
>>>>>>>>> copes that you have selected. It should appropriately estimate the
>>>>>>>>>
>>>>>>>>> right average ppheight for each of your  second-level contrasts.
>>>>>>>>>
>>>>>>>>> Cheers, Steve.
>>>>>>>>>
>>>>>>>>> On 15 Jan 2008, at 19:39, Stephane Jacobs wrote:
>>>>>>>>>
>>>>>>>>>                   
>>>>>>>>>> Hello,
>>>>>>>>>>
>>>>>>>>>> I had a question about the way the model peak-to-peak height was
>>>>>>>>>>
>>>>>>>>>> computed for a second level analysis whose inputs were cope
>>>>>>>>>>
>>>>>>>>>> images
>>>>>>>>>>
>>>>>>>>>> rather than feat directories, and about running featquery on it.
>>>>>>>>>>
>>>>>>>>>> I had
>>>>>>>>>>
>>>>>>>>>> forgotten to mention that I am interested in percent signal
>>>>>>>>>>
>>>>>>>>>> change for
>>>>>>>>>>
>>>>>>>>>> contrasts (condition vs. modeled rest), which explains  why I'm
>>>>>>>>>>
>>>>>>>>>> looking
>>>>>>>>>>
>>>>>>>>>> at the ppheight values in design.lcon. Also, I'm looking at
>>>>>>>>>>
>>>>>>>>>> contrasts
>>>>>>>>>>
>>>>>>>>>> that have been set at the first level already, then I have
>>>>>>>>>>
>>>>>>>>>> ppheight
>>>>>>>>>>
>>>>>>>>>> values for each of those and for each first level run.
>>>>>>>>>>
>>>>>>>>>>  Can anybody tell me whether I'm doing the right thing here?
>>>>>>>>>>
>>>>>>>>>> Thanks a lot,
>>>>>>>>>>
>>>>>>>>>> Stephane
>>>>>>>>>>
>>>>>>>>>> Stephane Jacobs wrote:
>>>>>>>>>>
>>>>>>>>>>                     
>>>>>>>>>>> Hello,
>>>>>>>>>>>
>>>>>>>>>>> I would like  to run featquery on a second level analysis (cross
>>>>>>>>>>>
>>>>>>>>>>> session -
>>>>>>>>>>>
>>>>>>>>>>> within subject level) to compute percent change of COPEs within a
>>>>>>>>>>>
>>>>>>>>>>> given ROI.
>>>>>>>>>>>
>>>>>>>>>>> I understand that featquery is using the average ppheight found
>>>>>>>>>>>
>>>>>>>>>>> in
>>>>>>>>>>>
>>>>>>>>>>> the
>>>>>>>>>>>
>>>>>>>>>>> design.lcon file  in the copeX.feat directory as a scale factor to
>>>>>>>>>>>
>>>>>>>>>>> compute
>>>>>>>>>>>
>>>>>>>>>>> percent change.
>>>>>>>>>>>
>>>>>>>>>>> However, I am wondering whether this is still correct to do so
>>>>>>>>>>>
>>>>>>>>>>> in my
>>>>>>>>>>>
>>>>>>>>>>> case.
>>>>>>>>>>>
>>>>>>>>>>> Indeed, I have fed cope images into my second level analysis,
>>>>>>>>>>>
>>>>>>>>>>> instead of
>>>>>>>>>>>
>>>>>>>>>>> .feat directories, as I needed to contrast EVs coming from
>>>>>>>>>>>
>>>>>>>>>>> different
>>>>>>>>>>>
>>>>>>>>>>> runs.
>>>>>>>>>>>
>>>>>>>>>>> Then, I end up with one single cope1.feat directory at the output
>>>>>>>>>>>
>>>>>>>>>>> of my
>>>>>>>>>>>
>>>>>>>>>>> second level analysis, which contains as many cope images as I
>>>>>>>>>>>
>>>>>>>>>>> have set
>>>>>>>>>>>
>>>>>>>>>>> contrasts at the 2nd level (4), rather than getting
>>>>>>>>>>>
>>>>>>>>>>> cope1.feat..cope4.feat
>>>>>>>>>>>
>>>>>>>>>>> as when you feed feat directories containing all the same EVs.
>>>>>>>>>>>
>>>>>>>>>>> Therefore, it seems that the value contained in design.lcon is
>>>>>>>>>>>
>>>>>>>>>>> the
>>>>>>>>>>>
>>>>>>>>>>> average
>>>>>>>>>>>
>>>>>>>>>>> of the ppheight across all my contrasts. I wonder if I rather
>>>>>>>>>>>
>>>>>>>>>>> should
>>>>>>>>>>>
>>>>>>>>>>> compute
>>>>>>>>>>>
>>>>>>>>>>> an average ppheight for each of my 2nd level contrast
>>>>>>>>>>>
>>>>>>>>>>> separately, to
>>>>>>>>>>>
>>>>>>>>>>> be more
>>>>>>>>>>>
>>>>>>>>>>> accurate?
>>>>>>>>>>>
>>>>>>>>>>>  Thanks in advance for all your thoughts and advice,
>>>>>>>>>>>
>>>>>>>>>>> Best,
>>>>>>>>>>>
>>>>>>>>>>> Stephane
>>>>>>>>>>>
>>>>>>>>>>>                       
>>>
>>> ---------------------------------------------------------------------------
>>>
>>>
>>>
>>> Stephen M. Smith, Professor of Biomedical Engineering
>>>
>>> Associate Director,  Oxford University FMRIB Centre
>>>
>>>
>>>
>>> FMRIB, JR Hospital, Headington, Oxford  OX3 9DU, UK
>>>
>>> +44 (0) 1865 222726  (fax 222717)
>>>
>>>  [log in to unmask]<mailto:[log in to unmask]>      http://www.fmrib.ox.ac.uk/~steve<http://www.fmrib.ox.ac.uk/%7Esteve>
>>>
>>> ---------------------------------------------------------------------------
>>>
>> :::::::::::::::::::::::::::::::::::::::::::::::::::
>>
>> Stephane Jacobs
>>
>>
>>
>> Postdoctoral Research Associate
>>
>> Human Neuroimaging and  Transcranial Stimulation Lab
>>
>> Department of Psychology
>>
>> 1227 University of Oregon
>>
>> Eugene, OR 97403 - USA
>>
>>
>>
>> Email:  [log in to unmask]<mailto:[log in to unmask]>
>>
>> Tel: (1) 541-346-4184
>>
>> :::::::::::::::::::::::::::::::::::::::::::::::::::