Hi David,

So for P2, the design contains the same regressors of interest (and perhaps some nuisance regressors) as for P1, plus one extra nuisance EV that is specific to P2.

That nuisance can still be entered in the design based on subtractions, even though it refers only to P2 (Option B of the earlier email), or in the model in which both P1 and P2 enter directly (Option A), in which case the P1 rows receive 0 (or in fact any constant value that is the same for all subjects). The reason this works is that, although the nuisance relates to P2 and not P1, it affects the difference P1-P2 in the same way it would affect P2 alone.
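As a sketch, the nuisance column for the two options could be assembled as below (plain Python; the sizes and the nuisance values are illustrative, not from your data):

```python
# Illustrative sketch: building the P2-specific nuisance EV for both layouts.
n_subj = 100                              # 50 HC + 50 AD (illustrative)
nuis = [0.1 * i for i in range(n_subj)]   # one nuisance value per subject
mean = sum(nuis) / len(nuis)
demeaned = [v - mean for v in nuis]

# Option A: data ordered [all P1; all P2] -> 2*n_subj rows.
# P1 rows get 0 (any constant shared by all subjects works);
# P2 rows get the demeaned values.
ev_nuis_a = [0.0] * n_subj + demeaned

# Option B: rows are the per-subject differences P1 - P2 -> n_subj rows.
# The same demeaned column enters directly; the GLM absorbs the sign.
ev_nuis_b = demeaned
```

The columns would then be written into the respective design.mat files alongside the EVs of interest.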

Regarding correction across both pipelines, you'd use the same call to PALM, but add a second "-d", for the second design with the nuisance, and further include the option "-designperinput", so that the first "-d" pairs with the first "-i", the second "-d" with the second "-i", and so forth.

All the best,

Anderson


On 3 November 2017 at 08:34, Szabolcs David <[log in to unmask]> wrote:
Hi Anderson,

Thank you very much for this very deep and thorough explanation. It took me some time, but I did everything as you wrote and it works perfectly! I checked the two ways of testing the interaction effect (and the main effect as well) and yes, it's 99.9% the same message/concept.

Just one last question, if I may: how should I manage regressors of no interest if I would like to test the interaction between applying the regressor or not? Using my original question: P1 is the 'regular' pipeline and P2 is the same pipeline, but with a regressor. Can I just add the regressor as an extra EV and demean it across all subjects in P2, while keeping 0s for P1?

The issue is that I cannot get my head around whether I should apply the corrections (to the FA, so that the scalars themselves change) OR just use the correction as a (voxelwise) regressor; hence my very first question was about 'pipeline comparison' and this current one is about 'applying the regressor or not'.

Best,
Szabolcs

On Wed, Nov 1, 2017 at 10:35 PM, Anderson M. Winkler <[log in to unmask]> wrote:
Hi Szabolcs,

Apologies for the delay. Please see below:

On 26 October 2017 at 12:07, Szabolcs David <[log in to unmask]> wrote:
Hi Anderson,

Thanks for the reply.

I don't understand some parts:
If I use multiple inputs (one for P1 and one for P2), then the design matrix doesn't match up: for (50+50)*2 I should have 200 rows, but with multiple inputs (100 subjects per input) it does not work (and I got an error about it).

So you have 50 HC and 50 AD subjects, each analysed with two pipelines. You can:

Option A - Use the exact same model shown in the FEAT example at https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/GLM#ANOVA:_2-groups.2C_2-levels_per_subject_.282-way_Mixed_Effect_ANOVA.29, in randomise or PALM, provided that you use exchangeability blocks, 1 block per subject.
The first EV can be used to code group (HC or AD, coded as 1 and -1), and the other used to code pipeline (for HC: +1 for P1 and -1 for P2; for AD: -1 for P1 and +1 for P2). This design will have 200 rows, and the input data will have 200 volumes. If you use PALM, I recommend including the options "-whole" and "-within" so that permutations will happen within and between blocks (subjects), which should increase power a bit.

Option B - Do the subtractions between P1 and P2, and use the difference as input. The design will be a simple (not paired) 2-sample t-test, in which HC are compared to AD. The design will have 100 rows (one per subject) and the input data will have 100 volumes (the differences). If you use PALM, use both "-ee" and "-ise", so that permutations will happen with sign-flippings, again increasing power a bit. In randomise, to have the same effect, include the option -1 (randomise will always permute; the -1 will further sign-flip).

Although these two ways are identical if all permutations and sign-flippings are done exhaustively, there will be slight differences due to the randomness of the subset of permutations actually performed, and some implementation aspects.
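To make the two layouts concrete, here is a toy sketch of how the design rows and the exchangeability blocks for Option A, and the design for Option B, could be assembled (plain Python; the subject-specific EVs that the paired design also needs are omitted for brevity, and the row ordering is just one possibility):

```python
# Illustrative sketch of the two designs described above.
n_hc, n_ad = 50, 50
subjects = [("HC", s) for s in range(n_hc)] + \
           [("AD", s + n_hc) for s in range(n_ad)]

# Option A: 200 rows, data ordered [all P1; all P2] here.
rows_a, eb = [], []
for pipe in ("P1", "P2"):
    for grp, s in subjects:
        g = 1 if grp == "HC" else -1          # EV1: group (HC=+1, AD=-1)
        p = g * (1 if pipe == "P1" else -1)   # EV2: pipeline, flipped for AD
        rows_a.append([g, p])
        eb.append(s + 1)                      # one exchangeability block per subject

# Option B: 100 rows (one P1-P2 difference image per subject),
# a plain (unpaired) two-sample t-test, HC vs AD.
rows_b = [[1, 0] if grp == "HC" else [0, 1] for grp, _ in subjects]
```

The order of the rows must of course match the order of the volumes in the 4D input file.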

 
Upon concatenating all the files into 1 4D, I could use the following setup:

-i P1_P2.nii
-d design.mat
-eb EB.csv
-within
-whole
-t design.con
-m FA_mask.nii
-o results
-saveglm
-logp
-n 500
-accel tail
-corrcon
-corrmod
-T

Got this warning immediately:

Warning: You chose to correct over contrasts, or run NPC
         between contrasts, but with the design(s) and,
         contrasts given it is not possible to run
         synchronised permutations without ignoring repeated
         elements in the design matrix (or matrices). To
         solve this, adding the option "-cmcx" automatically


This warning can be ignored. I recommend including the option "-nouncorrected", particularly given that you'll use the tail acceleration; otherwise it takes too long to fit the tail for each and every voxel.

 
I attached the design and contrast files along with the EB. Could you have a look at those to check whether I'm messing something up? In total I have 50+50 in both P1 and P2.

I can't tell if all the +1 and -1 are correct in the design, as it depends on the order in which the subjects were entered in the 4D file. The overall assembly is correct, though. For the contrasts, you can include the negative ones (and keep the -corrcon). The -corrmod isn't necessary here.
 

In the contrast file, Contrast1 is testing for the difference between the pipelines: is it the same as if I just ran a paired t-test between P1 and P2, of course without any consideration of who is HC or AD?

Not really, because the interaction acts as a nuisance. It's conceptually similar, but the results won't be the same as in the simple paired t-test.

 
Contrast2 is what I'm really interested in - but if it is significant, that only tells me that there is a difference, and nothing about the direction; for that I would need to run t-tests. Would that be the subtraction-based t-testing? It would also definitely tell something about the direction of the differences between P1 and P2.

No need to run t-tests. Just look at the sign of the regression coefficient to see the direction (positive or negative). If it helps, you can use a different design that is equivalent:

EV1: For HC: use +1 for P1 and -1 for P2; for AD: use 0.
EV2: For HC: use 0; for AD: use +1 for P1 and -1 for P2.
EV3 onwards: subject-specific EVs.

The contrasts are then:

C1: [1 1 0 0 0 0 ...] - Main effect of pipeline.
C2: [1 -1 0 0 0 0 ...] - Interaction

As before, you can include the negative versions of these contrasts.
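In code, this equivalent design and its contrasts could look as follows (a sketch only; the subject-specific EVs from EV3 onwards are omitted, and the row order is illustrative):

```python
# Sketch of the equivalent design: EV1 codes pipeline within HC,
# EV2 codes pipeline within AD.
n_hc, n_ad = 50, 50
design = []
for grp, n in (("HC", n_hc), ("AD", n_ad)):
    for pipe in ("P1", "P2"):
        for _ in range(n):
            ev1 = (1 if pipe == "P1" else -1) if grp == "HC" else 0
            ev2 = (1 if pipe == "P1" else -1) if grp == "AD" else 0
            design.append([ev1, ev2])

c_main = [1, 1]    # C1: main effect of pipeline
c_inter = [1, -1]  # C2: group-by-pipeline interaction
# The sign of the fitted contrast (e.g., from the -saveglm output)
# gives the direction; no separate t-tests are needed.
```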
 

When you write that I can correct for multiple modalities and contrasts, should I include 2 more, so 3 inputs in total: 1 concatenated and 1 each for P1 and P2 separately, and also 3 design matrices: 1 for the repeated-measures ANOVA and 1 each for testing the group differences within P1 and P2? Then it should look something like this:

-i P1_P2.nii
-d design.mat
-eb EB.csv
-within
-whole
-t design.con
-i P1.nii
-d P1_grp_diff.mat
-t P1_grp_diff.con
-i P2.nii
-d P2_grp_diff.mat
-t P2_grp_diff.con
-m FA_mask.nii
-o results
-saveglm
-logp
-n 500
-accel tail
-corrcon
-corrmod
-T

I was thinking something else, as below:

-i P1.nii
-i P2.nii
-d design.mat
-t design.con
-m FA_mask.nii
-o results
-saveglm
-logp
-n 500
-accel tail
-corrcon
-corrmod
-T
-nouncorrected

Hope this helps!

All the best,

Anderson


 

Best,
Szabolcs



On Wed, Oct 25, 2017 at 7:42 PM, Anderson M. Winkler <[log in to unmask]> wrote:
Hi Szabolcs,

Yes, that looks correct. You'd test the interaction group by pipeline; if significant it means that group differences depend on pipeline differences.

Two further comments that apply to PALM:

1) The subtractions can in fact be omitted. Use the same design as described in the FSL GLM manual, define one exchangeability block per subject, and run PALM with the options "-within" and "-whole" such that permutations will happen between pipelines and between subjects.

2) You can correct for the fact that multiple pipelines were used (this is mentioned in the NPC paper): assemble the design as a simple two-sample t-test (not paired) and use multiple "-i", one for each pipeline, specifying the respective 4D file for each. If pipelines are so different to the point of yielding independent results (unlikely), this would be equivalent to Bonferroni; if pipelines are so similar to the point of yielding identical results (also unlikely), this would take into account the lack of independence. Probably the reality is somewhere in between.
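The intuition in (2) can be seen in a toy simulation (this is not PALM itself; the numbers are entirely synthetic): correcting with the permutation distribution of the maximum statistic across the two pipelines can never be less conservative than a single test, and for correlated pipelines it stays below the Bonferroni bound.

```python
# Toy illustration of joint correction over two correlated "pipelines"
# via the null distribution of the maximum statistic.
import random
random.seed(42)

n_perm = 2000
shared = [random.gauss(0, 1) for _ in range(n_perm)]  # common component
t1 = [s + random.gauss(0, 0.3) for s in shared]       # "pipeline 1" nulls
t2 = [s + random.gauss(0, 0.3) for s in shared]       # "pipeline 2" nulls
max_null = [max(a, b) for a, b in zip(t1, t2)]

t_obs = 2.0
p_single = sum(t >= t_obs for t in t1) / n_perm       # uncorrected
p_joint = sum(m >= t_obs for m in max_null) / n_perm  # corrected over both
# p_joint >= p_single always; with highly correlated pipelines it is
# well below the Bonferroni bound of 2 * p_single.
```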

Hope this helps!

All the best,

Anderson


On 24 October 2017 at 13:52, Szabolcs David <[log in to unmask]> wrote:
Dear Anderson and Co.,

I would like to look at whether different (preprocessing) pipelines, e.g. P1 and P2, have an effect on the difference between healthy (HG) and patient (AD) groups, something like:
P1(HG vs AD) > P2(HG vs AD). My question here would be whether there is a difference between the two differences because of the pipelines (P1 vs P2).

Based on the description here, especially the last paragraph:
I think I need to do the following:

First, calculate the paired differences for both groups per subjects:
HG_subj1_diff=P1_HG_subj1-P2_HG_subj1
HG_subj2_diff=P1_HG_subj2-P2_HG_subj2
.
.
AD_subj1_diff=P1_AD_subj1-P2_AD_subj1
AD_subj2_diff=P1_AD_subj2-P2_AD_subj2
.
.
Then compare the two groups of (concatenated) subtractions with a two-sample (unpaired) t-test. The two contrasts could be: all_HG_diff < all_AD_diff & all_HG_diff > all_AD_diff. All the maps are in standard space; the metric is FA.
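In toy form, the subtraction step would be (short lists stand in for the FA maps; the real computation would be done on the NIfTI files, e.g. with fslmaths):

```python
# Toy sketch of the per-subject P1 - P2 subtraction step.
p1 = {"HG_subj1": [0.51, 0.48], "AD_subj1": [0.44, 0.40]}
p2 = {"HG_subj1": [0.50, 0.47], "AD_subj1": [0.45, 0.41]}

diffs = {subj: [a - b for a, b in zip(p1[subj], p2[subj])]
         for subj in p1}
# The HG differences and the AD differences are then concatenated and
# compared with the two-sample (unpaired) t-test, with contrasts
# HG > AD and HG < AD.
```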

Could you please check if this is a correct way?

Best,
Szabolcs