Dear all,
I have a case study of three individual subjects whom I would like to compare to a group of 18 control subjects.
I would like to calculate:
1. The controls' (n=18) GROUP MEAN
2. Each subject (codes: NB1, NB2, CA) compared to the controls (n=18)
In the GUI (higher-level analysis) I chose Randomise (Stats) and TFCE (Post-stats).
Q1. I thought the controls' (n=18) GROUP MEAN contrast [1 0 0 0] should have 18 permutations, but randomise reports 1330 unique permutations. Why?
For each subject (NB1, NB2, CA) compared to the controls, it reports 3990 unique permutations. Why?
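My own attempt to reproduce these counts (an assumption on my part: that randomise counts distinguishable relabellings of all 21 design rows, treating exchangeable rows as interchangeable) gives exactly the same numbers:

```python
from math import factorial

def distinguishable_relabellings(group_sizes):
    """Distinct ways to assign group labels to all rows,
    treating rows within the same group as exchangeable."""
    n = sum(group_sizes)
    count = factorial(n)
    for g in group_sizes:
        count //= factorial(g)
    return count

# 21 rows total: 18 controls + NB1 + NB2 + CA (one EV each)
full = distinguishable_relabellings([18, 1, 1, 1])  # 21*20*19 = 7980

# Group-mean contrast [1 0 0 0]: the three single-subject EVs would be
# interchangeable for this contrast, so divide by 3! -> 1330
group_mean = full // factorial(3)

# Single-subject contrast (e.g. NB1 vs controls): the other two
# single-subject EVs would be interchangeable, so divide by 2! -> 3990
single_subject = full // factorial(2)

print(group_mean, single_subject)
```

If that reasoning is right, the counts come from permuting the whole 21-row design rather than the 18 controls alone, but I would appreciate confirmation.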
Q2. Should I add variance smoothing? ("If you have fewer than 20 subjects, then you will usually see an increase in power by using variance smoothing.")
Q3. When using the GUI, I get the following files (for each of the four contrasts):
thresh_pstat
thresh_zstat1.nii.gz
stats/pstat.nii.gz
stats/tstat.nii.gz
stats/zstat.nii.gz
Which of these is the TFCE output?
If I instead run randomise from the terminal:
randomise -i filtered_func_data.nii.gz -o randomise/ver3_rand_cope1 -d design.mat -t design.con -m mask.nii.gz -n 5000 -T
I get files with the suffix _tfce_corrp_tstat.
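To clarify what I am asking: my reading of the documentation (an assumption I would like confirmed) is that the _tfce_corrp_tstat image stores 1 minus the FWE-corrected p-value, so significance at p < 0.05 corresponds to voxel values above 0.95, e.g.:

```python
# Assumption (worth confirming): randomise's *_corrp_* images store
# 1 minus the corrected p-value, so a voxel is significant at
# corrected p < 0.05 when its corrp value exceeds 0.95.
corrp_values = [0.99, 0.80, 0.951, 0.30]   # hypothetical voxel values
significant = [v > 0.95 for v in corrp_values]
print(significant)  # only the first and third voxels survive correction
```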
thanks!