Dear Will,

I've done some digging and found the following. I'm using a blocked
design, i.e. a boxcar function convolved with the HRF. This convolution is
comparable to a rapid event-related design convolved with the HRF, and
thus leads to a superposition of HRFs which add up to a higher peak than
the basis function alone. This peak should then be used as the scale
factor in the percent signal change calculation. For an event-related
design, however, where the stick functions are far apart, the peaks of
these stand-alone regressors should be the same as for the basis function
(I think this is always 0.21). One should thus be careful when calculating
the percent signal change.
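To make this concrete, here is a rough numpy/scipy sketch. It uses a generic double-gamma function as a stand-in for SPM's canonical HRF (not the exact spm_hrf) and made-up timings, so only the qualitative point carries over: a convolved boxcar peaks well above the basis function, while well-separated sticks peak at exactly the basis-function height.

```python
import numpy as np
from scipy.stats import gamma

dt = 0.1  # microtime resolution in seconds (made-up)
t = np.arange(0, 32, dt)
# stand-in double-gamma HRF, rescaled so its peak matches the ~0.21 quoted above
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6
hrf /= hrf.max() / 0.21

n = 3000
boxcar = np.zeros(n)
boxcar[100:300] = 1.0            # one 20 s block
sticks = np.zeros(n)
sticks[[100, 1000, 2000]] = 1.0  # isolated events ~90 s apart

block_peak = np.convolve(boxcar, hrf)[:n].max()
event_peak = np.convolve(sticks, hrf)[:n].max()
print(block_peak)  # well above 0.21: superposed HRFs add up
print(event_peak)  # exactly the basis-function peak of 0.21
```

So for the blocked design the appropriate scale factor is the peak of the convolved regressor, not of the basis function.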

I got this info from Paul Mazaika
http://cibsr.stanford.edu/content/dam/sm/cibsr/documents/tools/methods/artrepair-software/FMRIPercentSignalChange.pdf

Re: point 2 from my previous question, if I have a contrast [1 1 -1 -1],
then the mean contrast should be calculated from the positive values
only. The vector is divided by 2 instead of 4, since we have an equal
number of negative weights and the overall sum is zero. Another way to
put it: you are comparing the mean percent signal change in one
condition with the mean percent signal change in the other condition. If
I divided by 4 (the sum of the absolute values of the contrast vector), I
would get half of the mean difference in percent signal change.
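A quick numerical sanity check with toy beta values (nothing from a real analysis) shows the factor-of-two point:

```python
import numpy as np

betas = np.array([2.0, 4.0, 1.0, 1.0])  # toy parameter estimates, two conditions x two regressors
c = np.array([1, 1, -1, -1])

diff_of_means = betas[:2].mean() - betas[2:].mean()  # 3.0 - 1.0 = 2.0
print((c / 2) @ betas)  # 2.0: matches the difference of condition means
print((c / 4) @ betas)  # 1.0: half the difference
```

Dividing by 2 (the sum of the positive weights) recovers the difference of condition means; dividing by 4 halves it.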

Kind regards,
Glad



On 03/06/2017 10:37 AM, Penny, William wrote:
> Dear Glad,
> 
> 
> Re point 2 - yes I think it makes sense to report the average activation
> over conditions, in which case dividing by 4 in this example would be
> the thing to do.
> 
> 
> Re points 3 and 1 - it's been a while since I've looked at this.
> 
> See below for what we say in the SPM manual. Cyril says let sf=max(trial
> Xss) in his equation (8) where Xss is the HRF (at microtime resolution).
> It seems to me that these are the same. But I may be wrong.
> 
> Cyril/Rik - can you clarify ?
> 
> 
> Best,
> 
> 
> Will.
> 
> 
> Let sf = max(SPM.xBF.bf(:,1))/SPM.xBF.dt (alternatively, press
> “Design:Explore:Session 1” and select any of the conditions, then read
> off the peak height of the canonical HRF basis function (bottom left)).
> Then, if you want a size threshold of 1% peak signal change, the value
> you need to enter for the PPM threshold (i.e. the number in the units of
> the parameter estimates) is 1/sf (which should be 4.75 in the present
> case).
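In numpy terms, and with a stand-in double-gamma HRF rather than SPM's actual SPM.xBF.bf, the manual's recipe amounts to the following. Note the 4.75 quoted above depends on SPM's own normalisation of the basis set, which this sketch does not reproduce:

```python
import numpy as np
from scipy.stats import gamma

dt = 0.0625  # microtime resolution; TR/16 is a common SPM default (assumption)
t = np.arange(0, 32, dt)
bf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6  # stand-in canonical HRF

sf = bf.max() / dt        # analogue of max(SPM.xBF.bf(:,1))/SPM.xBF.dt
ppm_threshold = 1.0 / sf  # effect-size threshold for a 1% signal change
```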
> 
> ------------------------------------------------------------------------
> *From:* SPM (Statistical Parametric Mapping) <[log in to unmask]> on
> behalf of Paul Glad Mihai <[log in to unmask]>
> *Sent:* 01 March 2017 09:42
> *To:* [log in to unmask]
> *Subject:* Re: [SPM] How to run a (1st + 2nd level) Bayesian analysis in
> SPM
>  
> Dear Will,
> 
> I'm piggybacking on this thread as I have a question regarding the
> Bayesian discussion.
> 
> 1. First Level Bayesian Inference: Concerning the scaling factor for the
> parameter estimates (as outlined in the manual on page 268), doesn't it
> make more sense to use the maximum of the basis function convolved with
> the regressors (stick functions or boxcars, depending on the design)?
> The maximum is different between the basis function and the one
> convolved with the regressors. That's what I understood from this paper:
> Pernet, C. R. (2014). Misconceptions in the use of the General Linear
> Model applied to functional MRI: a tutorial for junior neuro-imagers.
> Frontiers in Neuroscience, 8(January), 1–12.
> http://doi.org/10.3389/fnins.2014.00001
> 
> 2. Calculating the contrasts on the first level requires the average of
> the parameter estimates for the canonical hrf. If you have a contrast
> like the following:
> [1, 1, -1, -1] would you then divide by 4 to scale the contrast to [1/4,
> 1/4, -1/4, -1/4]?
> 
> 3. When taking the computed contrasts from the first level to the second
> level, would you need to take into account the average of the parameter
> estimates (as in point 2 above) AND the scaling factor? So instead of
> calculating the contrast as
> [1, 1, -1, -1]/4 you would then calculate it for a 1% signal change as
> [1, 1, -1, -1]/4 · (1/sf)? Or does the scaling factor not play a role in
> the second-level analysis when using contrasts?
> 
> Regards,
> Glad
> 
> On 02/08/2017 01:00 AM, SPM automatic digest system wrote:
>> Date:    Tue, 7 Feb 2017 19:50:42 +0000
>> From:    "Penny, William" <[log in to unmask]>
>> Subject: Re: How to run a (1st + 2nd level) Bayesian analysis in SPM
>> 
>> Dear David,
>> 
>> 
>> Here are my answers to your follow-ups.
>> 
>> 
>> 1. This is hard to quantify - there is potentially an advantage
>> (assuming you used some form of spatial prior at the first level) - in
>> that the regression coefficients and therefore contrasts are implicitly
>> smoothed by a data-defined amount - and this is tuned to each
>> regression coefficient. So the advantage, if any, would be that an
>> optimal smoothing would have been applied. Whether this justifies the
>> extra amount of time to fit the model is up to the user.
>> 
>> 
>> 2. That's correct - given the connection with FDR there is no need for a multiple comparisons correction.
>> 
>> 
>> 3. The main article to read is:
>> 
>> 
>> http://www.fil.ion.ucl.ac.uk/spm/doc/papers/karl_posterior.pdf
>> 
>> 
>> More recently we have added a new functionality for the equivalent of F-contrasts which does not require an effect size threshold. It computes log-evidence maps and you just threshold the log-odds ratio:
>> 
>> 
>> http://www.fil.ion.ucl.ac.uk/~wpenny/publications/penny13.pdf
>> 
>> 
>> Best,
>> 
>> 
>> Will.
>> 
>> 
>> ________________________________
>> From: David Hofmann <[log in to unmask]>
>> Sent: 06 February 2017 11:27
>> To: Penny, William
>> Cc: [log in to unmask]
>> Subject: Re: [SPM] How to run a (1st + 2nd level) Bayesian analysis in SPM
>> 
>> Hi William,
>> 
>> thanks for the helpful reply! I have a few follow-up questions and hope you can also help me with those:
>> 
>> 1. Is there any advantage in running a first level Bayesian analysis beforehand, i.e. what more can be done?
>> 
>> 2. Is it necessary to correct for multiple comparisons (either 1st or 2nd level respectively)? I read that this is never necessary and that a PPM thresholded at 95 % confidence is related to an FDR of 5 % in classical analysis.
>> 
>> 3. Can you recommend an article which can be cited and that explains the method used for running a 2nd level Bayesian analysis on top or a normal GLM?
>> 
>> Thanks again!
>> 
>> David
>> 
>> 2017-02-03 14:57 GMT+01:00 Penny, William <[log in to unmask]<mailto:[log in to unmask]>>:
>> 
>> Dear David,
>> 
>> 
>> For one-dimensional contrasts (e.g. t-tests) SPM asks you for two parameters for Bayesian inference at the second level (i) Effect Size Threshold (Default 0.1) and (ii) Log Odds Threshold (Default 10).
>> 
>> 
>> Other reasonable choices would be 0 and 3.
>> 
>> 
>> The effect size threshold, T, tells SPM that you are only interested in voxels with contrast values c^T beta > T, i.e. that your experimental effect is bigger than T.
>> 
>> 
>> The Log Odds Threshold, L, tells SPM that you are only interested in voxels where SPM is sure (with posterior probability 1/(1+exp(-L))) that this is the case.
>> 
>> 
>> Note that L=3 gives you p=0.95.
>> 
>> L=10 is much, much more stringent giving p=0.99995.
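The mapping from log odds to posterior probability here is just the logistic function, easy to check:

```python
import math

def posterior_prob(log_odds):
    """Posterior probability implied by a log-odds threshold L."""
    return 1.0 / (1.0 + math.exp(-log_odds))

print(round(posterior_prob(3), 3))   # 0.953
print(round(posterior_prob(10), 5))  # 0.99995
```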
>> 
>> 
>> I would advise you use the most recent version of SPM when doing this.
>> 
>> 
>> Also, you don't have to do a first-level Bayesian analysis if you want to do a second-level one.
>> 
>> 
>> All the best,
>> 
>> 
>> Will.
>> 
>> 
>> ________________________________
>> From: SPM (Statistical Parametric Mapping) <[log in to unmask]<mailto:[log in to unmask]>> on behalf of David Hofmann <[log in to unmask]<mailto:[log in to unmask]>>
>> Sent: 31 January 2017 10:52
>> To: [log in to unmask]<mailto:[log in to unmask]>
>> Subject: [SPM] How to run a (1st + 2nd level) Bayesian analysis in SPM
>> 
>> Hi all,
>> 
>> I have an fMRI event-related design in which subjects viewed fearful and neutral faces. I want to run a 1st level and a second level Bayesian analysis in SPM. For this, I did the following steps:
>> 
>> 1. 1st level Bayesian analysis with standard settings as described in the manual
>> 2. Contrast fear faces > neutral faces
>> 3. For the 2nd level analysis, I smoothed the con-files and ran a one-sample t-test (estimated the model first with the classical and then with the 2nd level Bayesian option)
>> 4. I specified a t-contrast (i.e. [1]) for the one-sample t-test of the subjects
>> 5. I chose apply masking - none
>> 
>> Now SPM is asking me for the Effect size threshold for PPM at the 2nd level and suggests 0.99. Whereas the meaning of the effect size threshold was clearly explained in the manual for the 1st level analysis, I'm not sure what value to choose for the 2nd level analysis or what this value means. When I select the suggested value (0.99) and choose a Log Odds Threshold of 10, which should correspond to 95% certainty, there is no effect. There are also no effects for a value as low as 0.2. This is very strange, since in the classical analysis there are very strong effects (fusiform gyrus) which survive FWE correction at 0.01.
>> 
>> The questions are as follows:
>> 
>> 1. Are the analysis steps I did correct, or is there a better way to test for a group effect by means of Bayesian analysis (e.g. Bayesian model comparison; Rosa, M.J. et al., 2010)?
>> 
>> 2. What does the effect size threshold at the 2nd level mean and what are reasonable values?
>> 
>> 
>> Here is an overview of posts on the topic of Bayesian analysis, which did not help me answer my questions:
>> 1. https://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=spm;888fe64.1503
>> 2. https://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=SPM;41144d5.1403
>> 3. https://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=SPM;5f9a54e5.1405
>> 4. https://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=ind1603&L=spm&F=&S=&P=639757
>> 5. https://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=spm;2e6e6dca.1405
>> 6. https://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=spm;a5ab6e97.1603
>> 7. https://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=SPM;377114fa.0909
>> 
>> 
>> greetings
>> 
>> David
> 
> 

-- 
Paul Glad Mihai, PhD

Independent Research Group "Neural Mechanisms of Human Communication"
Max Planck Institute for Human Cognitive and Brain Sciences
Stephanstraße 1A, 04103 Leipzig, Germany

Phone:   +49 (0) 341-9940-2478
E-mail:  [log in to unmask]