Dear FSL experts,
I have some confusion about running seed-based functional connectivity with FEAT or fsl_sbca. I hope to get comments on the details of my processing:
preprocessing:
(1)by using FEAT-prestats:
input: the raw functional 4D data;
delete the first 10 volumes;
high-pass filter with a 100 s cut-off (0.01 Hz);
motion correction with MCFLIRT;
BET (removal of non-brain tissue from the functional data to optimise registration);
spatial smoothing (6 mm FWHM, about 2 times the voxel size);
grand-mean intensity normalisation (left at the default, unticked);
registration:
structural image: the brain-extracted T1 (with the non-brain-extracted T1 kept in the same folder to facilitate BBR); standard: MNI152_T1_2mm_brain
(2) bandpass
with filtered_func_data as input, do the band-pass using fslmaths:
fslmaths filtered_func_data -bptf -1 2.5 filtered_func_data_bp (the high-pass sigma is set to -1 because high-pass filtering was already done in the previous step);
(3) generate the residual files by regressing out nuisance signals
generate white matter and CSF masks with FAST, thresholding the PVE maps at a lower threshold of 0.8;
use FLIRT and the transformation matrix generated in the registration step to resample the white matter and CSF masks into functional space:
flirt -in wm_mask -ref filtered_func_data -out wm_mask_func -applyxfm -init highres2example_func.mat
the same for the CSF mask
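The two resampling calls can be written as one small loop. This is a dry-run sketch that only prints the commands (drop the echo to execute); the mask names are assumed to match the steps above, and after real resampling you would typically re-binarise the interpolated masks:

```shell
# Print the FLIRT resampling command for each tissue mask (dry run).
# After executing, re-binarise, e.g.:
#   fslmaths wm_mask_func -thr 0.5 -bin wm_mask_func
for tissue in wm csf; do
    echo "flirt -in ${tissue}_mask -ref filtered_func_data -applyxfm -init highres2example_func.mat -out ${tissue}_mask_func"
done
```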
extract the time series within both the white matter and CSF masks using fslmeants:
fslmeants -i filtered_func_data_bp -m wm_mask_func > wm_func_ts.txt
generate the nuisance regressors (including the 6 motion parameters, WM, CSF and global signal):
paste mc.par wm_func_ts.txt csf_func_ts.txt global_ts.txt > regressor.txt
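One step that is easy to miss: global_ts.txt used above also has to be generated, from a whole-brain mask. A dry-run sketch in the same style as the WM/CSF extraction (the mask name brain_mask is an assumption, e.g. the BET mask output; drop the echo to execute):

```shell
# The global signal is the mean time series over a whole-brain mask
# (mask name assumed; adjust to your own naming):
echo "fslmeants -i filtered_func_data_bp -m brain_mask > global_ts.txt"
```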
fsl_glm -i filtered_func_data_bp -d regressor.txt --dat_norm --out_res=residuals
scale the residuals:
fslmaths residuals -Tmean mean_residuals
fslmaths residuals -Tstd std_residuals
fslmaths brain_mask -mul 100 scaled_mask
fslmaths residuals -sub mean_residuals -div std_residuals -add scaled_mask scaled_residuals
(4) extract the seed time series from scaled_residuals;
(5) subject-level seed-based functional connectivity analysis
(5.1) many papers do this using the FSL FEAT first-level analysis (stats + post-stats);
question: the input should be the scaled residual files, right?
stats: untick prewhitening;
full model setup: set the seed time series as EV1, and define two contrasts, [1] and [-1], in order to obtain the positive and negative connectivity maps at the individual level?
post-stats: left at the defaults (cluster thresholding, Z > 2.3, P < 0.05);
is this setting right? Although FEAT outputs cope1 and cope2 in the stats directory, cope1 just represents the regression coefficient, right? Because this is a regression model rather than a calculation of Pearson's r value?
One more question: what about the Z value setting in the Misc tab, whose default is 5.3? Should it be set to 0, or to 2.3 to match the Z threshold in the post-stats tab?
(5.2) calculating r values using fsl_sbca
fsl_sbca can be used to do seed-based correlation;
the usage is: fsl_sbca -i residuals -s ROI_mask -t residuals (or a binarised brain mask) --mean
I am not sure about this usage of fsl_sbca: no matter whether I use the residuals 4D data or the binarised brain mask as the target, I only obtain the ROI itself. I think something is wrong with this setting; what should I do if I want to obtain corre_map.nii.gz at the individual level?
(6) Then, regarding regression versus correlation: I found that correlation only tells you about the linear relationship between the seed and the other voxels in the brain, without the magnitude of the effect, while regression outputs the slope and so tells you about the magnitude rather than the correlation? How should I choose if I want to test, between two groups, the change in functional connectivity of a seed with the whole brain? Both regression and correlation with Fisher's z transformation have been used to compare functional connectivity among groups in published papers.
Any help will be greatly appreciated!
Best
Qiuli Zhang