Hello Anderson and FSL listserv,
I’m using PALM for a ’multi-modal’ analysis (voxelwise, within subjects) that I hope to use to explore which brain areas show convergence across two sets of images. The outputs look a little odd to me, so I would be grateful for feedback on my steps:
In the GLM setup, I entered ‘1’ for all entries under ‘Group’ and ‘1’ for all entries under EV1. For the voxelwise EV2, I supplied the 4D dataset for imaging modality #2. Under contrasts, I entered ‘1’ for EV1 and ‘1’ for EV2.
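For reference, here is a hand-written sketch of the two text files the GLM GUI writes out (FSL’s VEST format). The subject count (4) is made up for illustration, and the EV2 column here is just a dummy placeholder, since the voxelwise data come from the second 4D image rather than the design file:

```shell
# Sketch of the design and contrast files in VEST format.
# Assumes 4 subjects; EV1 = intercept, EV2 = placeholder for the
# voxelwise regressor (filled per-voxel, not from this file).
cat > XX.mat <<'EOF'
/NumWaves 2
/NumPoints 4
/Matrix
1 0
1 0
1 0
1 0
EOF

# One contrast with '1' on EV1 and '1' on EV2, as described above.
cat > XX.con <<'EOF'
/ContrastName1 convergence
/NumWaves 2
/NumContrasts 1
/Matrix
1 1
EOF
```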
Then, in Octave, I used this XX model and contrast file to run:
palm -i Modality1_4D.nii.gz -i Modality2_4D.nii.gz -d XX.mat -t XX.con -n 10000 -T -npc -noniiclass -tonly -savemask -o Output_Image
Then I checked whether anything was significant in the output by running fslmaths on the TFCE-corrected output with -uthr 0.049999.
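Concretely, the check I ran looks like the sketch below. The output filename is a guess at PALM’s naming (the exact suffix depends on version and options), and it assumes the corrected image stores plain p-values; if PALM was run with -logp or a 1-p convention, the threshold direction would need to change:

```shell
# Keep voxels with corrected p < 0.05, binarize, and count survivors.
# Filename is hypothetical; check the actual PALM output names.
fslmaths Output_Image_tfce_tstat_fwep_c1 -uthr 0.049999 -bin sig_mask
fslstats sig_mask -V
```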
Did I mess up anywhere above? Would it matter if the inputs were z-scores from one or both modalities?
Thanks a lot!
Desmond