I have read a number of posts describing the issue of the large SnPM_ST file
that arises when doing cluster inference testing.
I have two questions:
1. What are the pitfalls of changing the default threshold? Are you simply
including fewer voxels?

Essentially, yes. Fewer voxels means that less information is saved in the
SnPM_ST file, which results in a narrower range of cluster-defining
thresholds you can choose in the Results step.
2. Is there a solution to this problem yet?

Specifying the cluster-defining threshold during the SetUp step can
tremendously reduce the size of SnPM_ST. There are some ideas for
eliminating the SnPM_ST file altogether, but unfortunately we haven't had a
chance to implement them in SnPM yet. Using RANDOMISE in the FSL package is
another possible solution.
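For reference, a typical invocation of FSL's randomise with cluster-based
inference might look like the sketch below. The input and design file names
are placeholders; adjust the threshold and permutation count for your data.

```shell
# Permutation-based cluster inference with FSL's randomise.
# -c sets the cluster-forming threshold (on the t-statistic) and enables
# cluster-based thresholding; -n sets the number of permutations.
# all_subjects_4D.nii.gz, design.mat, and design.con are placeholder names.
randomise -i all_subjects_4D.nii.gz -o randomise_out \
  -d design.mat -t design.con \
  -c 2.3 -n 5000
```

Unlike SnPM, randomise does not keep a large suprathreshold-statistics file
on disk, but the cluster-forming threshold must be fixed up front, so
changing it later means rerunning the permutations.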
-Satoru