Hello FSL community, 

I have 3 questions based on the following experimental set-up: for a given subject A, I collect two resting-state scans, each 5 minutes long, within one session. Specifically, A's first resting-state scan (RS1) is acquired, then A performs a task, and then a second resting-state scan (RS2) is acquired. To summarize, the per-subject scanning procedure is: RS1 – task – RS2.

I am currently examining the data from the RS scans.

I have done the following:

Method 1: (1) concatenated RS1 and RS2 with fslmerge; (2) pre-processed the merged data; (3) ran the data through MELODIC. This generates components that are relatively easy to classify as signal or noise.
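For concreteness, a minimal command-line sketch of Method 1 as I understand it (file names are placeholders, and the preprocessing step is elided):

```shell
# Method 1: concatenate the raw runs first, then preprocess and run ICA.
# rs1 / rs2 are placeholder names for the two raw resting-state runs.

# Temporal concatenation of the two raw runs
fslmerge -t rs_concat rs1 rs2

# ... preprocessing of rs_concat (e.g. via FEAT) goes here ...

# Single-session ICA on the concatenated, preprocessed data
melodic -i rs_concat_preproc -o method1.ica --report
```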
Method 2: (1) pre-processed each run separately (including registration to MNI space); (2) concatenated the resulting res4d files; (3) ran the merged res4d through MELODIC. This generates components that are mostly noise (the probabilistic PCA model fit is very poor).
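The equivalent sketch for Method 2, again with placeholder file names (res4d_run1 and res4d_run2 are the residual outputs of each run's separate preprocessing, already in MNI space):

```shell
# Method 2: preprocess each run separately, then merge the residuals
# and run single-session ICA on the merged file.

fslmerge -t res4d_concat res4d_run1 res4d_run2

# Single-session ICA on the merged residuals
melodic -i res4d_concat -o method2.ica --report
```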
Method 3: (1) pre-processed each run separately; (2) used each run's res4d as input to the MELODIC GUI with multi-session temporal concatenation (instead of the single-session ICA used in Methods 1 and 2). The components generated are very clean and much easier to classify as signal or noise.
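If it helps the comparison, my understanding is that the GUI option in Method 3 corresponds to MELODIC's `-a concat` approach on the command line (input_list.txt, containing one res4d path per line, is a placeholder name):

```shell
# Method 3: multi-session temporal-concatenation ICA, the command-line
# equivalent of "Multi-session temporal concatenation" in the MELODIC GUI.

melodic -i input_list.txt -o method3.ica -a concat --report
```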

Question (1): Method 2 (done manually) and Method 3 (through the GUI; my interpretation is based on the report log) appear to use very similar approaches. For Method 2 (as in Method 3), at the concatenation stage a mask was generated to find the common space (by adding the masks for run 1 and run 2), and this mask was then applied to the merged res4d image. What is Method 3 doing in addition that produces such a vast difference in the component images? Or, more generally, how does Method 3 differ from the other two approaches?
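The masking step in Method 2 was done roughly as follows (placeholder file names). One thing I am unsure about: adding two binary masks and binarizing gives their union, whereas the voxels truly common to both runs are the intersection, which would require thresholding the sum at 2 (or multiplying the masks):

```shell
# Union of the two run masks (what "adding" the masks gives)
fslmaths mask_run1 -add mask_run2 -bin mask_union

# Intersection (common space): voxels present in BOTH run masks
fslmaths mask_run1 -add mask_run2 -thr 2 -bin mask_common

# Apply the chosen mask to the merged res4d
fslmaths res4d_concat -mas mask_common res4d_concat_masked
```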
Question (2): Is temporal concatenation of the 2 runs (given the scanning procedure described above) equivalent to a multi-session analysis? Since this method generates the best output, we are strongly inclined toward it.