Hi,
I have a few questions regarding VBM data analysis:
1. After running fslvbm_3_proc, the outputs are the smoothed files
GM_mod_merg_s2-4, which I understand well, and the GM_mod_merg_s2_tstat1.nii.gz
files (attached). I do not understand how these tstat maps can help me decide
which smoothing kernel is the most appropriate to feed into a full run of randomise,
or which threshold to use for the cluster-based thresholding.
2. In the next step, I ran the following randomise command (in order to compare
the 2 groups):
moran@tux:~/Moran/ALS/VBMn/template_list/stats$
randomise -i GM_mod_merg_s3 -m GM_mask -o fslvbm -d design.mat -t
design.con -c 2.3 -n 5000 -V
(Attached: the design files, used only for a test analysis
with 2 groups of 3 subjects each.)
The following output appeared:
Loading Data: ******
Data loaded
6 permutations required for exhaustive test of t-test 1
Doing all 6 unique permutations
Starting permutation 1 (Unpermuted data)
Starting permutation 2
Starting permutation 3
Starting permutation 4
Starting permutation 5
Starting permutation 6
This appeared for each of the 4 t-tests.
Why 6 permutations and not 5000 for each test? And what does
"Unpermuted data" mean?
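My understanding so far is that randomise falls back to an exhaustive test when there are fewer unique permutations than requested. As a sanity check, here is how I would count the relabellings for an unpaired two-group design, assuming the standard binomial-coefficient formula C(n1+n2, n1) (this is my own sketch, not FSL code):

```python
from math import comb

def unpaired_relabellings(n1, n2):
    """Number of unique group relabellings for an unpaired
    two-sample permutation test: choose which n1 of the
    n1 + n2 subjects are assigned to group 1."""
    return comb(n1 + n2, n1)

# For my test design: 2 groups of 3 subjects each.
print(unpaired_relabellings(3, 3))  # prints 20
```

This gives 20 for 3 vs 3 subjects, so I am also unsure why the log reports only 6.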
In addition, the whole step took only about 1 minute.
Did I do something wrong?
Many thanks
Moran