Hi Bernadet,

Let's start with the design, because changing it may also make the error message disappear. The design doesn't show the contrasts, but I suspect you're interested in the effect of treatment, and this design isn't appropriate for that. It is possible to put the proper EVs in place and do it, but as commented earlier, a univariate repeated-measures ANOVA (with 4 measurements post-treatment) will carry more assumptions, and be less powerful, than NPC. MANCOVA can also be considered, and it's easier to run, but it doesn't allow for directional effects and is less powerful than NPC.

Let's pretend for a moment that you have just 1 scan pre- and 1 scan post-treatment, and two treatments (drug/placebo), such that each subject has 4 scans in total. Prepare a paired t-test comparing drug vs. placebo, plus the baseline scans as a voxelwise nuisance EV. It seems you have 12 subjects, so this design will have 24 columns.

The error message you received indicates memory issues, so let's further reduce the size of this analysis (it will also run faster): a paired t-test can be replaced by within-subject subtractions followed by a 1-sample t-test with sign flippings, yielding exactly the same results. So, instead of the above, subtract drug-placebo for both post and pre (in a consistent order for all subjects), and make the design a 1-sample t-test with an additional nuisance EV. The nuisance is the voxelwise difference between drug and placebo before treatment. Use the option -ise (sign flippings) to test the intercept in this model, which is the same as testing the difference between drug and placebo post-treatment, while having baseline as nuisance.
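To make the sign-flipping idea concrete, here is a minimal pure-Python sketch (illustration only, not PALM code; the function name and inputs are hypothetical) of an exhaustive sign-flipping test on the mean of within-subject differences:

```python
import itertools
import statistics

def sign_flip_pvalue(diffs):
    """One-sample test of mean(diffs) != 0 via exhaustive sign flipping.

    Under the null hypothesis of errors symmetric around zero, each
    subject's difference is equally likely to be +d or -d, so we
    enumerate all 2^n sign assignments and count how often the flipped
    mean is at least as extreme as the observed one (two-sided).
    """
    n = len(diffs)
    observed = abs(statistics.fmean(diffs))
    count = 0
    total = 0
    for signs in itertools.product((1, -1), repeat=n):
        flipped = statistics.fmean(s * d for s, d in zip(signs, diffs))
        if abs(flipped) >= observed - 1e-12:
            count += 1
        total += 1
    return count / total
```

PALM applies this idea voxel by voxel on the GLM, typically drawing a random subset of the 2^n sign combinations (as with the 100 shufflings in your log) rather than enumerating all of them.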

Now comes NPC: repeat the above for each of the 4 post-treatment scans. There will be 4 such input files, one for each post-treatment drug-placebo difference. The design is the same for all: a one-sample t-test with a voxelwise nuisance variable (the baseline difference), tested with sign flippings.
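For reference, such a design in FSL's VEST text format might look as follows (an illustration only; EV 2 is a placeholder column whose values are ignored, since -evperdat replaces it voxelwise with the baseline differences):

```
/NumWaves 2
/NumPoints 12
/Matrix
1 1
1 1
1 1
1 1
1 1
1 1
1 1
1 1
1 1
1 1
1 1
1 1
```

and a contrast file testing the intercept (EV 1):

```
/NumWaves 2
/NumContrasts 1
/Matrix
1 0
```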

The call to PALM will then be:

palm -i 4d_diff_t1.nii -i 4d_diff_t2.nii -i 4d_diff_t3.nii -i 4d_diff_t4.nii -d design.mat -t design.con -evperdat 4d_diff_t0.nii 2 1 -ise -npc -o myresults

You can add other options, such as -logp, or options for spatial statistics such as cluster extent, cluster mass, or TFCE (slower to compute, though).

The above will allow testing each IC separately, but not all ICs as a whole (so, no correction across ICs). Although PALM can in principle do this, for this analysis it would require a further generalisation (a combination of combinations), which isn't currently possible. There is a workaround, though it involves a lot of work. Try running the above first for each IC, correcting with Bonferroni, and let me know later if you'd like to correct across ICs in a different way.
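For the per-IC correction, a quick Bonferroni adjustment can be applied to the p-values PALM outputs for each IC (a generic sketch; this helper is not part of PALM):

```python
def bonferroni(pvalues):
    """Bonferroni correction: multiply each p-value by the number of
    tests, capping at 1, to control the family-wise error rate across
    the ICs tested separately."""
    m = len(pvalues)
    return [min(p * m, 1.0) for p in pvalues]
```

For example, with 10 ICs each p-value is multiplied by 10, so a per-IC threshold of 0.005 corresponds to 0.05 family-wise.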

Now, specifically about the error message: it's a memory issue. The offending line pre-computes an array that is reused for all permutations, and with voxelwise EVs it can become quite large. The code could be changed, but then the same quantity would need to be recomputed at every permutation, making it very slow. I suppose we'll eventually need a proper solution for this, but with the changes above I expect these issues to disappear (it will use about 64x less memory).

All the best,

Anderson



On 3 November 2015 at 13:33, Bernadet Klaassens <[log in to unmask]> wrote:
Hi Anderson,

Thanks for taking the time to look at this problem! The clarification on the website is very helpful. However, I restarted the analysis with the new PALM version and now encounter the following error:

error: out of memory or dimension too large for Octave's index type
error: called from:
error:   /usr/local/LKeb/FSL/PALM65/palm_core.m at line 616, column 32
error:   /usr/local/LKeb/FSL/PALM65/palm.m at line 80, column 1

This is probably not a memory problem, as it even shows up with 1 input and 1 EV (there are no errors when I use 10 inputs without EVs). We are not sure how to solve this. No matter how many inputs/EVs I use or how much memory I assign to the job, it gets stuck at the same point:

….
Reading design matrix and contrasts.
Elapsed time parsing inputs: ~ 15.7903 seconds.
Number of possible permutations is 1.84606e+55.
Generating 100 shufflings (permutations only).
Building null distribution.
Doing maths for -evperdat before model fitting: [Design 1/1, Contrast 1/2] (may take some minutes)

Hopefully, you know how to overcome this.

Furthermore, with regard to my design: I added an Excel file with the mixed-effects design and contrasts as currently set up. We modelled treatment and time as fixed factors, plus random intercepts for the (12) different subjects. There are 2 study days; on one day the subjects receive a drug and on the other a placebo. On both days, 1 baseline scan was made (pre-dosing) and, after receiving drug or placebo, 4 additional (post-dosing) scans were acquired. We are mainly interested in the drug treatment effect: for each of 10 modalities (with equal resolution), is connectivity decreased and/or increased as a consequence of taking drug vs. placebo? The plan is to add the baseline scan (Z-map) as a voxelwise EV, to correct for possible differences at baseline (instead of subtracting baseline from each post scan).
Additional analyses of interest might be:
- Treatment effect for each time point separately
- Treatment x time interaction effect

MAN(C)OVA or NPC may be possible as well, although we are interested in the effects for each modality separately, which will probably lead to subsequent post-hoc tests. What is your suggestion? I would be happy to receive your advice about the appropriate design/analysis.

Best,

Bernadet