Dear Howard,

I don't think there is any convention that the cluster-forming threshold should be p = 0.001, and even if there were, there would be no mathematical reason behind it. Different cluster-forming thresholds make your analysis sensitive to different kinds of effects, which is why the cluster-forming threshold should not be set based on the data. A higher threshold (in terms of p-value) means larger clusters containing weaker effects; a lower threshold means smaller clusters containing stronger effects. As I said, I think p < 0.05 or p < 0.01 are quite reasonable choices for sensor-level EEG or TF data, because we often have the intuition that large chunks of significant voxels reflect a true physiological effect, and cluster-level correction just quantifies that intuition by saying how large is large enough. In fMRI or source data, however, it is not necessarily true that large clusters are more likely to represent true effects.
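
To make this concrete, here is a minimal sketch in plain Python (not SPM code; SPM itself is MATLAB, and the t-map and degrees of freedom below are invented purely for illustration) of how the cluster-forming threshold changes what counts as a cluster:

import numpy as np
from scipy import ndimage, stats

rng = np.random.default_rng(0)
df = 19                                 # hypothetical degrees of freedom
tmap = rng.standard_normal((64, 200))   # stand-in for a channels-by-time t-map
tmap[20:30, 80:140] += 1.5              # add a broad but weak simulated "effect"

for p in (0.05, 0.01, 0.001):
    tcrit = stats.t.isf(p, df)          # one-sided cluster-forming threshold
    labels, n = ndimage.label(tmap > tcrit)   # connected suprathreshold bins
    sizes = np.bincount(labels.ravel())[1:]   # extent of each cluster
    print(f"p < {p}: {n} clusters, largest = {sizes.max() if n else 0} bins")

The lenient thresholds leave large connected blobs around the weak effect, while p < 0.001 fragments it, which is exactly why the choice determines what kind of effect you are sensitive to.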

I used cluster-level correction with that kind of threshold for TF data in my recent paper http://www.ncbi.nlm.nih.gov/pubmed/22855804

There must be plenty of other examples; people on the list might suggest some as well.

Best,

Vladimir


On Tue, Aug 6, 2013 at 2:19 PM, H.Bowman <[log in to unmask]> wrote:
Dear Vladimir,

Thank you for your valuable responses to the questions from my PhD student, Farzad, concerning our SPM for EEG analysis. We did actually meet a few years back, when I gave a talk at a methods meeting at the Centre for Neuroimaging Sciences.

As Farzad indicated, we wish to plot scalp maps through time for a
cluster-level analysis. As you suggested, we can choose an uncorrected
analysis to avoid peak-level correction. Then you suggest setting an
uncorrected threshold of 0.05 or 0.01. To be clear, this is what would usually be called the cluster-forming threshold, right? That is, it is the alpha level that is applied separately at each space-time point?

I was under the impression that the standard/default setting of this was 0.001; at least, this certainly seems to be the case in the fMRI domain. However, I am also aware that the setting of this parameter is somewhat arbitrary.

So, are there any papers that explore different settings of this parameter in the EEG setting, or at least a prior precedent for setting it higher than 0.001?

Many thanks for your continued help in this matter - it is very much
appreciated.

Howard + Farzad.

--------------------------------------------
Professor Howard Bowman (PhD)
Professor of Cognition & Logic
Joint Director of Centre for CNCS
Centre for Cognitive Neuroscience and Cognitive Systems
and the School of Computing,
University of Kent at Canterbury,
Canterbury, Kent, CT2 7NF, United Kingdom
Telephone: +44-1227-823815   Fax: +44-1227-762811
email: [log in to unmask]
WWW: http://www.cs.kent.ac.uk/people/staff/hb5/



---------- Forwarded message ----------
From: Vladimir Litvak <[log in to unmask]>
Date: Fri, Aug 2, 2013 at 2:51 PM
Subject: Re: [SPM] Cluster level
To: Farzad Beheshti <[log in to unmask]>, "[log in to unmask]"
<[log in to unmask]>



Dear Farzad,

What you definitely shouldn't do is choose the threshold based on the data to get significant results. You can either use peak-level correction, for which FWE = 0.05 is a common choice, or cluster-level correction, in which case you would start with an uncorrected threshold of usually 0.05 or 0.01 and see which clusters are significant. You can then plot just the significant clusters by using the size of the smallest significant cluster as the extent threshold.
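
In case it helps, here is a minimal sketch of that last step in plain Python (not SPM; the map, cluster sizes and corrected p-values are all invented for illustration): pick the extent threshold from the smallest significant cluster and mask everything smaller.

import numpy as np
from scipy import ndimage

# Suppose this boolean map came out of the cluster-forming step
supra = np.zeros((64, 200), dtype=bool)
supra[20:30, 80:140] = True             # a hypothetical large cluster
supra[50:52, 10:13] = True              # a hypothetical small cluster

labels, n = ndimage.label(supra)        # label connected clusters 1..n
sizes = np.bincount(labels.ravel())[1:] # extent of each cluster

# Pretend cluster-level correction returned these corrected p-values,
# keyed by cluster label (made up for the example)
p_fwe = {1: 0.004, 2: 0.62}

k = min(sizes[lab - 1] for lab, p in p_fwe.items() if p < 0.05)
keep = np.isin(labels, np.flatnonzero(sizes >= k) + 1)
# 'keep' now masks exactly the clusters that survive the extent threshold k

In SPM you get the same result from the results table alone: read off the extent of the smallest cluster whose corrected p-value passes 0.05 and enter that number as the extent threshold.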

Best,

Vladimir


On Fri, Aug 2, 2013 at 2:45 PM, Farzad Beheshti <[log in to unmask]>
wrote:


        Dear Vladimir,

        Thank you very much for your answer, but to clarify everything:

        I have encountered a situation where nothing is significant at the peak level. That is, if I choose FWE = 0.05 and the default extent threshold of zero, nothing is reported in the statistical table. But if I change the FWE level to 0.08 with the default extent threshold, peaks significant at this level (0.08) appear in the table, and based on the information in the table (0.000) they are actually more significant at the cluster level, with a big cluster size. Now my question is: how should I choose an appropriate FWE level?

        Should I choose FWE = 0.9, to be effectively free of the peak-level threshold, and then apply a limit on cluster size (the extent threshold) to pick out the active clusters, or not?

        Actually, I have no idea how I should choose the FWE level in the first step in this case.

        Thanks


        On Wed, Jul 31, 2013 at 5:18 PM, Vladimir Litvak
<[log in to unmask]> wrote:


                Dear Farzad,

                Both SPM8 and SPM12 present cluster-level p-values for t-tests, but only SPM12 does so for F-tests as well. You can present a MIP of the significant clusters by noting the size of the smallest significant cluster and using it as your extent threshold.

                Best,

                Vladimir



                On Tue, Jul 30, 2013 at 10:46 AM, Farzad Beheshti
<[log in to unmask]> wrote:


                        Dear SPMs

                        As far as I know, SPM8 does not do any EEG cluster-level analyses, only peak-level analyses. This problem seems to have been addressed somehow in the SPM12 beta version.

                        My reasoning is that the statistical table in version 12 reports significant clusters in addition to significant peaks, whereas version 8 reports the peak level only.

                        However, when we try to show statistical MIPs, both versions take into account the peak-level statistics, and the MIPs are the same in both versions.

                        Is there any way to show significant clusters instead of peaks (or actually both of them) on the scalp in SPM12?

                        Thank you.