Hi Anderson,
Thank you very much for the feedback. I have the following questions:
1. I want to run a seed-based resting-state analysis between two groups. I ran the first level (run by run, for every subject) and the second level (merging the runs for every subject) in FEAT. My seed is defined as a mask from a previous analysis (a PET scan of the same subjects). How can I use this seed as a 4D file in dual regression (relevant to your suggestion, step #3)?
2. Can I use the script "fsl_sbca" instead of dual regression to achieve what I am looking for? (The kind of call I have in mind is sketched after these questions.)
3. In the FEAT first-level analysis, I see a design matrix generated as a first step of the analysis. This design.mat contains two columns and a number of rows equal to the number of volumes in the raw BOLD data. The numbers are positive and negative (they look like z-scores?). What is this design matrix and how does FEAT create it?
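For question 2, this is roughly the call I have in mind (only a sketch; the file names are placeholders and I would check the exact option names against fsl_sbca --help):

fsl_sbca -i filtered_func_data_standard -s pet_seed_mask -t MNI152_T1_2mm_brain_mask -o sbca_out

where -i is the 4D resting-state data in standard space, -s is the seed mask from the PET analysis, -t is the target mask (e.g. a whole-brain mask), and -o is the output basename.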
Looking forward to learning from you.
Cheers
Jon
Hi Jon,
Yes, overall the pipeline seems right, although in each step things may go wrong depending on options used. Perhaps a much simpler strategy is:
1) Use the preprocessing in FEAT all the way so as to have the filtered_func_data file for each subject.
2) Use featregapply to put all subjects into standard space.
3) Prepare a 4D file containing one seed per "timepoint". For instance, if there are 5 seeds, produce a 4D file with 5 such "timepoints", each containing a single seed (no spatial overlap between them).
4) Run the dual regression using the 4D file with the seeds in place of what would be the melodic_IC. The dual regression script will already invoke randomise. (A command-level sketch of these steps follows below.)
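For concreteness, a minimal sketch of the above, assuming 3 seed masks and two groups of 10 subjects each (all names and numbers are placeholders to be adapted to your data):

# 1) FEAT preprocessing per subject/run, e.g.:
feat preproc_sub01.fsf

# 2) resample each subject's filtered_func_data into standard space:
featregapply sub01.feat
# output: sub01.feat/reg_standard/filtered_func_data.nii.gz (repeat per subject)

# 3) stack the seeds into one 4D file, one seed per "timepoint":
fslmerge -t all_seeds seed1_mask seed2_mask seed3_mask

# 4) group design (unpaired two-group t-test) and dual regression
#    (the script calls randomise itself; 5000 permutations here):
design_ttest2 design 10 10
dual_regression all_seeds 1 design.mat design.con 5000 dr_output \
  sub01.feat/reg_standard/filtered_func_data.nii.gz \
  sub02.feat/reg_standard/filtered_func_data.nii.gz   # ...one entry per subject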
Hope this helps.
All the best,
Anderson
On 30 September 2016 at 16:49, John anderson <[log in to unmask]> wrote:
Dear FSL experts,
I need your feedback on a resting-state analysis that I am working on.
I have 20 subjects. Every subject has multiple runs (each run is 180 volumes). The number of runs is between 3 and 5 (not the same number for every subject).
I did the following for every run, for every subject (the output of every step is the input for the next step; a command-level sketch of the whole pipeline follows the list):
1. I ran the commands "slicetimer", "mcflirt" and "bet" on the raw fMRI data (which I name "bold.nii").
2. I normalized the brain-extracted fMRI data, i.e. registered it to MNI152 2mm using the commands "FLIRT", "FNIRT" and "applywarp". This is done for every run, for every subject.
3. I spatially smoothed (5 mm) the normalized data (in MNI space), for every run, for every subject.
4. From this smoothed data, I extracted the time courses (the seed of choice, white matter and CSF) for every run, for every subject.
5. I merged all these time courses (seed, white matter and CSF) into one design matrix (for every run, for every subject).
6. I fed this design matrix to the command "fsl_glm" and output the "copes", "varcopes" and "zstats", for every run, for every subject (the resulting "cope" and "varcope" images consist of 8 volumes).
7. For every subject separately, I used the command "fslmerge" to merge all the "copes" and "varcopes" of the runs into one cope file and one varcope file. This gives one cope and one varcope per subject, each consisting of multiple volumes.
8. For the resulting merged copes and varcopes (which are already in MNI space) I calculated the mean across volumes using the command fslmaths:
fslmaths cope -Tmean cope
9. I merged all the new copes into one 4D file and fed it to "randomise" to study group differences.
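To make the steps above concrete, this is roughly what I run per run and per subject (a sketch only; the file names, the TR, the smoothing sigma and the group sizes are placeholders, and the exact option names should be checked against each command's --help):

# steps 1-3: slice timing, motion correction, brain extraction,
# registration to MNI152 2mm, and 5 mm smoothing (sigma ~ FWHM / 2.355)
slicetimer -i bold -o bold_st -r 2.0
mcflirt -in bold_st -out bold_mc
bet bold_mc bold_brain -F
flirt -in bold_brain -ref $FSLDIR/data/standard/MNI152_T1_2mm_brain -omat func2std.mat
fnirt --in=bold_brain --aff=func2std.mat --ref=$FSLDIR/data/standard/MNI152_T1_2mm --cout=func2std_warp
applywarp --in=bold_brain --ref=$FSLDIR/data/standard/MNI152_T1_2mm --warp=func2std_warp --out=bold_std
fslmaths bold_std -s 2.12 bold_std_s5

# steps 4-5: mean time courses for seed, WM and CSF, pasted into one design
fslmeants -i bold_std_s5 -o seed_ts.txt -m seed_mask
fslmeants -i bold_std_s5 -o wm_ts.txt -m wm_mask
fslmeants -i bold_std_s5 -o csf_ts.txt -m csf_mask
paste seed_ts.txt wm_ts.txt csf_ts.txt > design.txt
Text2Vest design.txt design.mat

# step 6: GLM for this run (design.con holds the contrasts)
fsl_glm -i bold_std_s5 -d design.mat -c design.con --demean \
  -o betas --out_cope=cope --out_varcb=varcope --out_z=zstat

# steps 7-9: merge runs per subject, average, merge subjects, group stats
fslmerge -t cope_allruns cope_run1 cope_run2 cope_run3
fslmaths cope_allruns -Tmean cope_mean
fslmerge -t all_subjects_copes sub01_cope_mean sub02_cope_mean   # ...one per subject
design_ttest2 group_design 10 10
randomise -i all_subjects_copes -o grp -d group_design.mat -t group_design.con -n 5000 -T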
Kindly, I have the following questions:
1. Are these steps correct?
2. I used spatial smoothing early in the pipeline (I smoothed every run); can this hurt the data? Do I need to apply it at the end of the analysis instead (smooth the merged, unsmoothed copes across all runs for every subject), or is it fine to smooth every run?
3. In "fsl_reg" I output the copes then (atathe ened I fed it to randomise). Which approach is more advisable? Output the copes/varcopes or the residuals.
Thank you very much for any advice or feedback
Your help is highly appreciated.
Jon