Hi Anderson,

Thanks for your input.  I've concluded all my analysis.  I had some issues with registration etc. that I worked out through FSL instead of SPM, and that seemed to help a lot.  My analysis does show significance in a network for cfwep and mfwep, but unfortunately not for mcfwep.  I'm planning to write up my results just for those p-values.

Would I be correct in stating that, since I was correcting over many modalities (16) and contrasts (6), it would be extremely difficult to reach significance after correcting across both modalities and contrasts?  If so, how would you justify these results?  Would simply saying that I am controlling for Type I error over contrasts and over modalities (in the separate images) be sufficient?

As always, thank you for your help.

Pranish
On Jun 29, 2016, at 12:49 AM, Anderson M. Winkler <[log in to unmask]> wrote:

Hi Pranish,

There would in principle be one EB per subject, as each subject is scanned under two conditions, and shufflings would happen within subject. I wouldn't in principle consider heteroscedasticity, and would rather treat all observations as members of the same variance group (which is the default anyway).

Although it is possible (and it has been described on the list multiple times), I think that for your case, with multiple designs and pipelines, for simplicity I would do subtractions for the within-subject effects and interactions, then get rid of the exchangeability blocks.
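A rough sketch of what that subtraction approach could look like (all file names here are hypothetical, and the FSL/PALM commands are only echoed as a dry run rather than executed):

```shell
# Dry-run sketch of the subtraction approach (file names hypothetical).
# Per-subject difference map, fasted minus fed:
for subj in 01 02 03; do
  cmd="fslmaths sub-${subj}_fasted -sub sub-${subj}_fed sub-${subj}_diff"
  echo "$cmd"
done
# Merge the differences and test them with PALM, now with no -eb file:
merge="fslmerge -t all_diff sub-01_diff sub-02_diff sub-03_diff"
palm_cmd="palm -i all_diff.nii -d design_groups.csv -t contrasts_groups.csv -n 1000 -logp"
echo "$merge"
echo "$palm_cmd"
```

With one difference map per subject there is only one observation per subject left, so free shuffling is valid and the EB file can be dropped.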

All the best,

Anderson


On 28 June 2016 at 02:36, Pranish Kantak <[log in to unmask]> wrote:
Hi Anderson,

Thanks for letting me know about -designperinput.  As for the single call running PALM, would there be an exchangeability block (EB) file for the dr_stage2 files of the fasted obese vs. lean group difference?  I would assume a file like this is needed, because I would assume the data (i.e., the fasting resting-state scans for the obese and lean subjects) are not homoscedastic.  Although, I'm not sure about this.

Thanks,

Pranish
On Jun 27, 2016, at 2:25 AM, Anderson M. Winkler <[log in to unmask]> wrote:

Hi Pranish,

Please, see below:

On 27 June 2016 at 02:12, Pranish Kantak <[log in to unmask]> wrote:
Hi Anderson,

Thanks for your responses.  I've decided to run PALM with more permutations, just to have the best estimates of my p-values that I can.  Just to clarify one of your points below:

After I've run MELODIC (with both fed and fasted scans) and PALM, I am getting an effect of time, no interaction effect, and no effect of weight (through averaging the two scans per subject).  I would additionally like to run MELODIC and PALM with just the fasted scans for the two groups, because there was an effect of time and averaging the scans probably isn't best.  From what you said below, this seems OK to do.  Just want to clarify before I move forward.

Yes, it can be done; only note that more pipelines running in parallel (i.e., MELODIC running twice, etc.) mean more tests to correct for. See the PALM manual -- you can assemble a single call and do everything in a single run, given that multiple inputs (-i), multiple designs and contrasts (-d/-t), one design per input (-designperinput), and correction over contrasts (-corrcon) and over inputs (-corrmod) are all possible.
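To illustrate, such a single call might look like the sketch below (the input and design file names are made up, only two of the modalities are shown, and the command is echoed as a dry run rather than executed):

```shell
# Sketch of a single PALM call over multiple modalities, one design per
# input, correcting over contrasts (-corrcon) and over inputs (-corrmod).
# File names are hypothetical.
cmd="palm -i dr_stage2_ic01.nii -i dr_stage2_ic02.nii \
-d design1.csv -d design2.csv -t contrasts1.csv -t contrasts2.csv \
-designperinput -corrcon -corrmod -n 1000 -logp -o allmods"
echo "$cmd"
```

With -designperinput, the i-th design/contrast pair is applied to the i-th input, so the number of -d/-t pairs must match the number of -i inputs.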


Lastly, regarding the cross-correlations fslcc computes between my reference network and my melodic_IC file: do these start at 0 or 1?  Meaning, if it outputs, say, 29, is that really 28 in the melodic_IC and dr_stage2 files?  Just making sure.

I don't remember, but this should be obvious from the output, no?
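One quick way to settle it: if the component numbers in the fslcc output turn out to be 1-based while the volumes in melodic_IC are 0-indexed, the conversion is just a subtraction. The sample output below is made up for illustration only:

```shell
# Made-up fslcc-style output: reference index, IC index, correlation.
sample="1 29 0.62
1 14 0.41"
# If the IC index is 1-based, subtract 1 to get the 0-based volume number:
zero_based=$(printf '%s\n' "$sample" | awk '{print $1, $2 - 1, $3}')
echo "$zero_based"
```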

All the best,

Anderson


 

Thanks again for all your help.

Pranish

On Jun 26, 2016, at 3:45 AM, Anderson M. Winkler <[log in to unmask]> wrote:

Hi Pranish,

Please, see below:

On 24 June 2016 at 01:12, Pranish Kantak <[log in to unmask]> wrote:
Hi Anderson,

PALM has finished running.  One modality, corrected over modalities and contrasts, revealed significance for the contrast "fed > fasted".  I think the logp value was about 2.3.  That was with 500 permutations.  A few modalities (mcfwep) were extremely close to significance.  Is there anything I can do computationally to pull those out?  I was thinking of increasing the number of permutations to 1000.

Although more permutations don't increase power, they do allow better-estimated p-values. These p-values can be either closer to significance or farther from it; we can't know until we run. But if you run with more permutations, whatever the results are, those are the ones you should retain.
 

Additionally, I'm still extremely surprised to find no group effect.  The BMIs of these participants were such that one group was around 21 (extremely lean) and the other around 34 (morbidly obese).  From my literature review, I find it extremely odd to have no effect here.  I've been thinking about what could be wrong and have come up with the following; please let me know if you agree:

1) In our design, we're averaging across runs per person and then comparing groups.  Since there is an effect of run, could this potentially wash out a group effect?  I was thinking about running MELODIC/PALM with just the fasted data of the obese and lean subjects (with a two-sample unpaired t-test in PALM) and having that as a separate figure from the run and group x run interaction.

Yes. In fact, you should run the interaction first, before averaging the runs.
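As a sketch, the fasted-only two-sample unpaired design could be assembled like this (only two subjects per group are shown for brevity; the real design would have 15 lean and 12 obese rows, and the input file name is hypothetical):

```shell
# Sketch: two-sample unpaired design for the fasted-only scans.
# One row per subject; EV1 codes lean, EV2 codes obese (rows abbreviated).
printf '1,0\n1,0\n0,1\n0,1\n' > design_fasted.csv
# Contrasts: lean > obese and obese > lean.
printf '1,-1\n-1,1\n' > contrasts_fasted.csv
# The PALM call would then be (echoed here, not executed):
echo "palm -i dr_stage2_fasted.nii -d design_fasted.csv -t contrasts_fasted.csv -n 1000 -logp -o fastedonly"
```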
 
2) Preprocessing stream.  Our standard preprocessing stream in SPM12 may be too much for ICA analysis.  We motion-correct, slice-time correct, smooth, and warp.  From the lectures and papers I've been reading, we should only smooth at a FWHM of about 6 mm (Filippini et al. used this), brain-extract, and motion-correct.  I'm running FEAT right now with this setup, also high-pass filtering at 150 s, so I should have that data to test soon.

I don't think there are strict rules on this. The settings you described from SPM seem fine to me.
 
3) Could the number of contrasts be too high?  I'm just thinking back to Bonferroni, where the modalities + contrasts drop the threshold for significance quite a bit.

Yes, too many ICs make it harder for any one of them to reach significance. Selecting only ICs for which there is a clear hypothesis, or splitting the sample into testing and confirmatory subsets, may help, but then of course there is less power with smaller sample sizes.

All the best,

Anderson


Thanks again for taking a look.

Regards,

Pranish

On Jun 22, 2016, at 4:39 AM, Anderson M. Winkler <[log in to unmask]> wrote:

Hi Pranish,

Design and contrasts are fine, but sorry -- my mistake -- you also need to include the options "-whole" and "-within", so that the observations are shuffled within block (as needed for the within-subject effects) and the blocks themselves are shuffled as a whole (as needed for the between-subject effects). Exchangeability is preserved this way and not otherwise; without these options the test is invalid. There will be a warning about synchronised permutations, which is fine.
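Concretely, the call quoted further down the thread would just gain the two flags; a sketch (file names hypothetical, command echoed rather than executed):

```shell
# Sketch: the PALM call with within-block (-within) and whole-block
# (-whole) shuffling enabled, as required for mixed within/between effects.
cmd="palm -i input4d.nii -d design1.csv -d design2.csv \
-t contrasts1.csv -t contrasts2.csv -eb EB.csv -whole -within \
-n 1000 -approx tail -corrcon -logp -nouncorrected -o myresults"
echo "$cmd"
```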

I don't think the tail approximation would be sensitive to this at all, but let's fix it nonetheless and then move on from there.

All the best,

Anderson


On 21 June 2016 at 11:21, Pranish Kantak <[log in to unmask]> wrote:
Hi Anderson,

Here is the design file.  I copied all of the dummy variables into Excel and saved it as a .csv file for use in PALM.  Could it be that the EB values all just need to be 1, as on the FSL GLM page?  One of the reasons I was suspicious about the design files is that when I ran just the 2nd design and contrast files in PALM (just the group effect, which before was the only significant result, albeit with a wrong design), it wasn't running and essentially wasn't going through the shuffling process; it basically said it was done after about 30 seconds.

I will check on the files with the "inf" and get back to you.  When you say to make sure I am testing the means of a given group, where is that specified?  The design/contrast files?

Thanks again,

Pranish



On Jun 21, 2016, at 2:49 AM, Anderson M. Winkler <[log in to unmask]> wrote:

Hi Pranish,
Could you show the design? From the description it seems fine, though, and these should be equivalent.
All the best,
Anderson


On 21 June 2016 at 02:01, Pranish Kantak <[log in to unmask]> wrote:
Hi Anderson,

Just another thought I had when looking more closely at the GLM page, regarding the 2-group, 2-level ANOVA (the last one under repeated measures).

Could it be that my design matrix and/or contrast files were not set up correctly?  It looks as if there is an EV for run and an EV for G x R, with the contrasts being [1, 0] for the run effect and [0, 1] for the interaction.  The way we've set it up, it looks like there are separate EVs for each group (i.e., EV1 -> lean; EV2 -> obese), and the contrasts are much different.  Wondering if this could be the reason for the strange PALM results?

Thanks again for all the help.

Pranish

On Jun 18, 2016, at 5:54 AM, Anderson M. Winkler <[log in to unmask]> wrote:

Hi Pranish,

Please, see below:

On 18 June 2016 at 01:19, Pranish Kantak <[log in to unmask]> wrote:
Hi Anderson,

Thank you for the link to those.  Based on the thread you suggested I look at, and our conversation, I think I'm going to drop the clinical score comparison.  It seems like it's kind of redundant information.  I've redone my design matrix and contrasts... I'm a little confused about some of the contrasts in design 1, however.  Design 2 is simple enough, so I'm not really worried about that one.

The 4 contrasts I want are as follows:

C1 Fasted > Fed (i.e.: t1 > t2)
C2 Lean > Obese + Fasted > Fed (i.e.: g1 > g2 + t1 > t2)
C3 Fed > Fasted
C4 Obese > Lean + Fed > Fasted

Do you mind taking a look at the attached file? 

C1 and C3 are fine. They can be replaced by [1 1] and [-1 -1] (these are equivalent).

C2 and C4 are also fine, and can likewise be replaced by [1 -1] and [-1 1]. However, their interpretation differs slightly from what is written. These two assess the interaction of group by intervention, and will be significant if:

C2: (fasted_lean - fed_lean) > (fasted_obese - fed_obese), which is the same as (fed_obese - fed_lean) > (fasted_obese - fasted_lean).
C4: (fasted_lean - fed_lean) < (fasted_obese - fed_obese), which is the same as (fed_obese - fed_lean) < (fasted_obese - fasted_lean).

About the F-tests: there is no need for them at all, and they will provide less information than using correction over contrasts (more below).
 
If I have it right... EV1 & EV2 are the differences between t1/t2 in the two groups, where a positive dummy variable corresponds to the t1 time point, and a negative one to the t2 time point?

Yes. It could also be reversed, in which case the contrasts would have the signs swapped.
 

Finally, how do I get this into PALM? I was planning on using the text2vest with the .ods file.  

Copy/paste the design to a text file (no column or row captions, just the numbers). Save this file as .csv (internally it can be either comma-separated or tab-separated values; it doesn't really matter). Do the same for the contrasts. Repeat for design 2, thus producing 4 files.

Do the same for the EB (single column).
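For instance, the EB file could look like this sketch (only 3 of the 27 subjects shown; two rows per subject, one for each scan):

```shell
# Sketch: one block index per row, two rows (fed/fasted) per subject;
# only 3 subjects shown for brevity.
printf '1\n1\n2\n2\n3\n3\n' > EB.csv
cat EB.csv
```

Rows must be in the same order as the volumes in the 4D input file, so each subject's pair of scans shares one block index.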
 
Will I have to run PALM once with design 1 and once with design 2 and the associated contrasts files? Or is there a way to run them together?

You can run them together:

palm -i input4d.nii -d design1.csv -d design2.csv -t contrasts1.csv -t contrasts2.csv -eb EB.csv -n 1000 -approx tail -corrcon -logp -nouncorrected -o myresults

The -corrcon will correct across all contrasts throughout both designs. The -n 1000 is for the number of permutations, and the -approx tail will take these 1k perms and produce p-values that approximate as if more permutations had been done. The -logp will save p-values as -log10(p).

The -nouncorrected will prevent uncorrected p-values from being saved. If you really want uncorrected p-values, then perhaps reduce the number of permutations to -n 500, or drop the tail approximation (the reason is that, with the tail approximation, all permutations need to stay in memory; for the uncorrected p-values this means keeping the full images for 1000 perms * 6 contrasts = 6000 images, whereas for the corrected ones only the maximum is kept).

All the best,

Anderson

 

Thanks again!

Pranish


 
On Jun 17, 2016, at 2:54 AM, Anderson M. Winkler <[log in to unmask]> wrote:

Hi Pranish,

Thanks for these details. It's necessary to use a different design for this, given the repeated measurements. There is an example in this earlier thread: https://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=FSL;3cee56eb.1510

For the within-subject effects (i.e., the effect of intervention and the interaction of group by intervention), yours would be identical to the design1 in the spreadsheet. There is no need to include the clinical score, as it seems it's a single measurement per subject, and therefore it's already taken care of by the subject-specific EVs.

For the between-subject effects (i.e., the effect of group), start with design2, but then include an extra EV to code the clinical score. The values for each subject would be entered twice.
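A sketch of what that design2-style file could look like (clinical scores invented for illustration; only two subjects per group shown; each subject's row appears twice because there are two scans per subject):

```shell
# Sketch: between-subject design with group EVs plus a clinical-score EV.
# Columns: lean, obese, DEBQ-style score (values made up); each subject's
# row is entered twice, matching the two scans in the 4D input.
printf '1,0,2.1\n1,0,2.1\n1,0,1.8\n1,0,1.8\n0,1,3.4\n0,1,3.4\n0,1,3.0\n0,1,3.0\n' > design2_score.csv
wc -l < design2_score.csv
```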

Hope this helps.

All the best,

Anderson


On 16 June 2016 at 15:39, Pranish Kantak <[log in to unmask]> wrote:
Hi Anderson,

Thanks for your response.  The details of the experiment are below:

2x2 ANOVA

15 Lean (based on BMI) subjects —> Control
12 Obese (based on BMI) subjects —> Case

Pre - Intervention = Fasting rsFMRI Scan 6:30 min —> Time: Baseline

Feeding Intervention
There were also behavioral questionnaires, such as the Dutch Eating Behavior Questionnaire (which gave me the clinical score I'm talking about).

Post - Intervention = Fed rsFMRI Scan 6:30 min —> Time: 1 hr after end of Baseline scan

Full Design is in the attached .fsf file from GLM.  Essentially:

Obese > Lean
Fed > Fasted
Interaction: Obese(Fasted) - Obese(Fed) > Lean(Fasted) - Lean(Fed)
Lean > Obese
Interaction: Obese(Fed) - Obese(Fasted) > Lean(Fed) - Lean(Fasted)

Thanks!

Pranish