Stan,

Please see my comments/opinions below.




On Tue, Feb 4, 2014 at 4:43 AM, Stan Hrybouski <[log in to unmask]> wrote:

> Greeting to all the awesome fellow SPMers who are reading this post! Do I
> have a question for you at this perfect time of day (i.e. answers from all
> time zones are accepted)!  Stan is an equal opportunity listener :)
>
> Now let's get to business. As I understand, SPM has the masking threshold
> set to 0.8. Areas that are susceptible to B0 inhomogeneities experience
> rapid dephasing following an RF pulse, which leads to dropouts of the areas
> I tend to study (namely MTL and OFC). SPM then computes the global signal
> and if a given voxel is less than 0.8 of that value, SPM treats that
> voxel's signal as NaN during the regression. Correct?
>
>
Correct. This is a true statement and not an opinion.

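For anyone who wants to see that rule in one place, here is a rough numpy sketch of the masking logic described above. It is a simplification of what spm_global.m and the analysis threshold actually do, and the function names are mine, not SPM's:

```python
import numpy as np

def spm_global(volume):
    """Rough stand-in for spm_global.m: the global value is the mean of
    voxels brighter than one eighth of the whole-volume mean."""
    return volume[volume > volume.mean() / 8].mean()

def implicit_mask(timeseries, mthresh=0.8):
    """timeseries: (n_scans, n_voxels). A voxel survives the analysis
    threshold only if it exceeds mthresh * global in every scan
    (hedged approximation of SPM's rule)."""
    g = np.array([spm_global(scan) for scan in timeseries])
    return np.all(timeseries > mthresh * g[:, None], axis=0)

# Toy example: two scans, four voxels; the two dim voxels are masked out.
ts = np.array([[100.0, 90.0, 40.0, 10.0],
               [102.0, 88.0, 42.0, 12.0]])
mask = implicit_mask(ts)  # [True, True, False, False]
```

If you do decide to lower the threshold, you shouldn't need to edit code: in SPM12 it is exposed as the model-specification batch field `mthresh`, and in SPM8 as `defaults.mask.thresh` in spm_defaults.m.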

> So here's the question. Is it OK to lower this value to retain more of the
> low signal data and what are the consequences of doing so?


In my opinion, this is valid, particularly when you have very high values
in one brain region and low values in another, but believe that the SNR is
still sufficiently high. This will likely become more of an issue with head
coils that have a higher number of channels.



> After all, Karl's group set this value to 0.8 for a reason. Does the SNR
> of BOLD become exponentially worse once the data falls below this cutoff?
>

I do not know the answer to that.


>
> To retain some areas that are prone to dropouts in the group analysis, one
> approach is to proceed with the masking threshold set at 0.8, but to use
> GLMflex to keep voxels that are present in at least n subjects during the
> group model. Is there any standard for the percentage of data points that
> should be kept for the second level analysis in GLMflex? Currently, my
> analysis is set up in such a way that a given voxel must be present in at
> least 60 percent of the subjects to be considered for the group GLM. I am
> all ears for ideas on how to optimize these parameters.
>

There isn't currently an accepted value. I think the percentage depends on
the group size. If you have 1000 people, then requiring 600 is reasonable.
If you have 10 subjects and drop to 6, then I'm not sure that is a
reasonable change. I think I would prefer lowering the 0.8 threshold. I
would look at the data to see whether the SNR drops in the regions between
your threshold and 0.8, compared to the regions above 0.8.
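To make that check concrete, here is one way to compare temporal SNR in the band between a lowered threshold and 0.8 against the voxels above 0.8. This is plain numpy, the "global" is approximated by the grand mean of the mean image, and all the names are mine:

```python
import numpy as np

def tsnr(timeseries):
    """Temporal SNR per voxel: mean over time / std over time.
    timeseries: (n_scans, n_voxels)."""
    return timeseries.mean(axis=0) / timeseries.std(axis=0)

def compare_bands(timeseries, lower=0.4, upper=0.8):
    """Median tSNR for voxels whose mean intensity falls between
    lower*global and upper*global, versus voxels above upper*global.
    'global' here is just the grand mean of the mean image."""
    mean_img = timeseries.mean(axis=0)
    g = mean_img.mean()
    band = (mean_img > lower * g) & (mean_img <= upper * g)
    above = mean_img > upper * g
    snr = tsnr(timeseries)
    return np.median(snr[band]), np.median(snr[above])

# Usage: band_snr, above_snr = compare_bands(my_4d_data_reshaped)
```

If the band's median tSNR is close to that of the voxels above 0.8, lowering the threshold looks defensible; if it collapses, the default was protecting you.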



>
> And last, but not least. It should theoretically be possible to force SPM
> to keep all the voxels in the subject-level analysis and to forego GLMflex,
> sticking instead with SPM's group-level procedure. What are the
> advantages/disadvantages of this approach? And how would it compare with
> GLMflex? I am particularly interested in hearing from folks who have tried
> to compare these approaches in a quantitative manner.
>

Set the threshold at the first level to include all voxels. I'm not sure
what the differences would be between the methods. I suspect that
including all voxels, when the SNR is low in some, would lead to null
findings.
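As a sketch of the trade-off being discussed, here is a GLMflex-style coverage mask next to the strict intersection you effectively get when every subject must contribute a value at a voxel. The arrays and names are illustrative, not GLMflex's actual code:

```python
import numpy as np

def group_mask(subject_masks, min_fraction=0.6):
    """subject_masks: (n_subjects, n_voxels) boolean array.
    Keep a voxel if it is present in at least min_fraction of subjects
    (GLMflex-style coverage), rather than requiring it in everyone."""
    coverage = subject_masks.mean(axis=0)
    return coverage >= min_fraction

masks = np.array([[1, 1, 1, 0],
                  [1, 1, 0, 0],
                  [1, 1, 1, 0],
                  [1, 0, 1, 0],
                  [1, 1, 1, 1]], dtype=bool)

keep = group_mask(masks, min_fraction=0.6)  # [True, True, True, False]
strict = masks.all(axis=0)                  # [True, False, False, False]
```

With five subjects, the 60% rule keeps voxels two and three (present in 4/5), while the strict intersection discards everything that any single subject's dropout removed, which is exactly why MTL and OFC tend to vanish at the second level.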


>
>
> Well, we've come to the end of this (hopefully not too boring) wall of
> text :) ... To all those who leave a constructive answer, I am giving away
> virtual hugs for free!  :-)   .... And if you happen to live in Edmonton (AB,
> Canada), I will even buy you a pint of Guinness at Sherlock!
>
> Many thanks,
>
> Stan
>