Dear Joe,

It's not necessary to repeat the dual regression, although doing so won't
be harmful either: dual regression is deterministic, so its results are
always the same.

Perhaps the easiest approach is to comment out the default randomise call
in the script (instead of modifying it), and then run randomise manually
three times, each time with the modified inputs.
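For instance, the three manual runs could look something like the sketch
below. The input and design file names are placeholders only (the actual
inputs would be the subject-specific maps from the dual_regression output,
restricted to the subjects in each comparison); the commands are echoed
rather than executed, so they can be reviewed first:

```shell
# Placeholder sketch: three separate randomise invocations, one per comparison.
# File names are illustrative, not actual dual_regression outputs.
for cmp in BP1_vs_HC BP2_vs_HC BP1_vs_BP2; do
  echo "randomise -i dr_stage2_maps_${cmp}.nii.gz" \
       "-d design_${cmp}.mat -t design_${cmp}.con" \
       "-n 5000 -x -o ${cmp}"
done
```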

All the best,

Anderson


On 9 January 2015 at 04:31, jonathan starke <[log in to unmask]> wrote:

> Dear Anderson,
>
> Thanks again for the useful feedback. I have one final concern: now that
> I will be running randomise as a separate process for each comparison
> (which I agree makes sense), should I also run the dual_reg separately for
> each group of subjects? If yes, what .mat and .con setup should I use,
> since something needs to be entered in the dual_reg setup? Or is it
> irrelevant, since I won't be interested in the randomise output from these
> dual_regs and will only be using them to identify the subject-specific maps
> and time courses before doing the separate randomise runs?
> OR, if it is still OK to run the dual_reg on all scans at the same time
> (one run), is it likewise irrelevant what I put in my .con and .mat files
> at that stage, since I will only need the correct setups for the
> subsequent separate randomise runs? My concern here is that the
> dual_reg will bomb out if the GLM doesn't correspond to the data.
> I hope what I am asking makes sense.
> Thanks again for the effort,
> Joe
>
> On 8 January 2015 at 11:19, Anderson M. Winkler <[log in to unmask]>
> wrote:
>
>> Dear Joe,
>>
>> Please, see below:
>>
>> On 8 January 2015 at 04:53, jonathan starke <[log in to unmask]> wrote:
>>
>>> Hi again
>>>
>>> Just two other questions:
>>> 1) With this set-up, will I be comparing the controls to the two BP
>>> scan groups separately (which is what I want)? I.e., HC compared to the
>>> BP1 group of scans, and then compared (separately) to the BP2 group of
>>> scans, instead of HC compared to all BP scans (1 and 2) joined into one
>>> larger group?
>>>
>>
>>
>> If the goal is to compare HC with *either* BP1 or BP2, then it becomes
>> much simpler. You can ignore the previous design and instead run:
>> - For BP1 vs HC: a two-sample t-test (not paired)
>> - For BP2 vs HC: another two-sample t-test (not paired)
>> - For BP1 vs BP2: a two-sample paired t-test.
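>> As a sketch (the helper script below is illustrative only, not an FSL
>> tool, and the group sizes and file names are placeholders), the unpaired
>> design for BP1 (n = 15) vs HC (n = 10) could be written out in the same
>> VEST text format as the files quoted further down in this thread:

```shell
# Illustrative sketch: write a two-group (unpaired) design.mat/.con pair
# in FSL's plain-text VEST format. Group sizes and file names are placeholders.
n_bp1=15; n_hc=10

{
  printf '/NumWaves\t2\n'
  printf '/NumPoints\t%d\n' $((n_bp1 + n_hc))
  printf '/PPheights\t1.000000e+00\t1.000000e+00\n\n'
  printf '/Matrix\n'
  # One row per input scan: [1, 0] for BP1 subjects, [0, 1] for HC subjects.
  for i in $(seq 1 "$n_bp1"); do printf '1.000000e+00\t0.000000e+00\n'; done
  for i in $(seq 1 "$n_hc");  do printf '0.000000e+00\t1.000000e+00\n'; done
} > design_BP1_vs_HC.mat

{
  # Two contrasts: BP1 > HC and its reverse.
  printf '/ContrastName1\tBP1 > HC\n'
  printf '/ContrastName2\tBP1 < HC\n'
  printf '/NumWaves\t2\n/NumContrasts\t2\n'
  printf '/PPheights\t1.000000e+00\t1.000000e+00\n\n'
  printf '/Matrix\n1 -1\n-1 1\n'
} > design_BP1_vs_HC.con
```

>> The BP2 vs HC files are identical in form; only the group size and the
>> scans supplied to randomise change.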
>>
>>
>>>
>>> 2) You mention that to do the reverse contrasts (which I do want), I
>>> should multiply C1 and C2 by -1. If I were to do this, do you mean that
>>> I multiply the numbers (for example) in the C2 row by -1, so that row would be:
>>> 0  30  2  2  2  2  2  2  etc... (i.e., -2 times -1 equals 2)? Or perhaps I
>>> have not understood well here.
>>>
>>
>> Yes, C2 would become [0 -30 2 2 2 ... ]. But you won't need this design
>> anymore, because you are not comparing both BP1 and BP2 vs HC, but just
>> either BP1 or BP2 vs HC, which is much simpler.
>>
>> Remember to divide the significance level by the number of contrasts, so
>> instead of alpha = 0.05, use 0.05/3 = 0.0167 or 0.05/6 = 0.0083 if you look
>> at both tails.
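>> In plain numbers (this one-liner is just arithmetic, nothing
>> FSL-specific), the adjustment works out as:

```shell
# Bonferroni sketch: divide the significance level by the number of tests.
awk 'BEGIN { a = 0.05; printf "%.4f\n%.4f\n", a/3, a/6 }'
```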
>>
>>
>> All the best,
>>
>> Anderson
>>
>>
>>
>> On 8 January 2015 at 04:40, jonathan starke <[log in to unmask]> wrote:
>>
>>> Hi Anderson,
>>>
>>> Thanks for the input, this is very helpful. Just to confirm: in order to
>>> specify these options in randomise, I will need to run it separately,
>>> rather than as the add-on to the dual_reg (as it would normally be run
>>> automatically, because in the usual dual_reg command setup I don't seem
>>> to get so many options for the randomise part, right?). So, for the
>>> dual_reg, which I still need to do first, do I still use the same GLM
>>> design files, but then, once it is complete and I have my
>>> subject-specific data, run randomise again separately with your
>>> specifications, using the dr_stage2_ic files from the dual_reg?
>>>
>>> If that is OK, I may have additional questions once I have tried this out.
>>> Thanks again,
>>> Joe
>>>
>>>
>>> On 7 January 2015 at 12:52, Anderson M. Winkler <[log in to unmask]>
>>> wrote:
>>>
>>>> Dear Joe,
>>>>
>>>> On the contrary, I'd say this design isn't that straightforward. To
>>>> compare just the two timepoints that the BP subjects underwent, the control
>>>> subjects could be ignored and a paired t-test would be sufficient.
>>>>
>>>> To compare BP subjects with controls, things become a bit more
>>>> complicated, though, as it's not possible to put all subjects in the design
>>>> matrix and permute them freely, as that would violate exchangeability,
>>>> given the repeated measurements that only the BP subjects have. Permuting
>>>> within subject can't be done as for the controls nothing could be permuted
>>>> (and it wouldn't help with the hypothesis anyway), and permuting subjects
>>>> as a whole won't work because the subjects don't have all the same number
>>>> of timepoints.
>>>>
>>>> However, if we add just one assumption, that under the null (that is,
>>>> when there's no effect), the distribution of the errors is symmetric around
>>>> zero, then it's possible to do it all in a single run of randomise with the
>>>> attached design (you can open .ods files with LibreOffice, then copy/paste
>>>> in the FSL Glm GUI).
>>>>
>>>> When running randomise, make sure that exchangeability blocks (EBs) are
>>>> supplied with the option -e (the EBs are shown in a column marked in
>>>> orange). In addition, make sure you use the option --permuteBlocks and also
>>>> that the option -1 is supplied along with the design matrix and contrast
>>>> files. That is, something like this:
>>>>
>>>> randomise -i 4Dfile.nii.gz -d design.mat -t design.con -e design.grp
>>>> --permuteBlocks -1 -n 5000 -x -o myresults
>>>>
>>>> Run only the contrasts C1 and C2, not the others marked with a "-", as
>>>> these are there just to clarify how C1 and C2 were constructed.
>>>>
>>>> Hope this helps!
>>>>
>>>> All the best,
>>>>
>>>> Anderson
>>>>
>>>>
>>>>
>>>>
>>>> On 6 January 2015 at 08:04, Jonathan Starke <[log in to unmask]>
>>>> wrote:
>>>>
>>>>> Dear Experts,
>>>>>
>>>>> I want to run dual regression and randomise analyses on 25
>>>>> subjects, using the standard resting-state network templates from FSL
>>>>> (Smith 2009). The subjects consist of 15 patients with bipolar disorder
>>>>> scanned on 2 occasions (before and after an intervention), as well as 10
>>>>> healthy subjects for comparison (scanned on one occasion only), for a
>>>>> total of 40 4D input files.
>>>>> I want to look for changes in networks in the bipolar subjects
>>>>> before and after the intervention (i.e., compare scans 1 and 2), and also
>>>>> to compare the healthy subjects with both of the bipolar scans. My
>>>>> proposed GLM .mat and .con setup is as follows. In the matrix, the first
>>>>> column is the bipolar group scan 1, the second column is the bipolar
>>>>> group scan 2, and the third column is the healthy subject group. In the
>>>>> .con setup, the groups are coded as BPD1 (1st scan of the bipolar group),
>>>>> BPD2 (2nd scan of the bipolar group) and HC (healthy subjects group).
>>>>> I think it is a fairly straightforward analysis, but I am concerned
>>>>> that with this setup I am losing the statistical power afforded by the
>>>>> paired comparison (i.e. BPD1 vs. BPD2), though I am not sure what would
>>>>> be a better arrangement. I am interested in group-level differences, not
>>>>> inter-individual differences.
>>>>> Please advise.
>>>>> Thanks,
>>>>> Joe Starke
>>>>> Masters student
>>>>> University of Cape Town
>>>>>
>>>>> /NumWaves       3
>>>>> /NumPoints      40
>>>>> /PPheights              1.000000e+00    1.000000e+00    1.000000e+00
>>>>>
>>>>> /Matrix
>>>>> 1.000000e+00    0.000000e+00    0.000000e+00
>>>>> 1.000000e+00    0.000000e+00    0.000000e+00
>>>>> 1.000000e+00    0.000000e+00    0.000000e+00
>>>>> 1.000000e+00    0.000000e+00    0.000000e+00
>>>>> 1.000000e+00    0.000000e+00    0.000000e+00
>>>>> 1.000000e+00    0.000000e+00    0.000000e+00
>>>>> 1.000000e+00    0.000000e+00    0.000000e+00
>>>>> 1.000000e+00    0.000000e+00    0.000000e+00
>>>>> 1.000000e+00    0.000000e+00    0.000000e+00
>>>>> 1.000000e+00    0.000000e+00    0.000000e+00
>>>>> 1.000000e+00    0.000000e+00    0.000000e+00
>>>>> 1.000000e+00    0.000000e+00    0.000000e+00
>>>>> 1.000000e+00    0.000000e+00    0.000000e+00
>>>>> 1.000000e+00    0.000000e+00    0.000000e+00
>>>>> 1.000000e+00    0.000000e+00    0.000000e+00
>>>>> 0.000000e+00    1.000000e+00    0.000000e+00
>>>>> 0.000000e+00    1.000000e+00    0.000000e+00
>>>>> 0.000000e+00    1.000000e+00    0.000000e+00
>>>>> 0.000000e+00    1.000000e+00    0.000000e+00
>>>>> 0.000000e+00    1.000000e+00    0.000000e+00
>>>>> 0.000000e+00    1.000000e+00    0.000000e+00
>>>>> 0.000000e+00    1.000000e+00    0.000000e+00
>>>>> 0.000000e+00    1.000000e+00    0.000000e+00
>>>>> 0.000000e+00    1.000000e+00    0.000000e+00
>>>>> 0.000000e+00    1.000000e+00    0.000000e+00
>>>>> 0.000000e+00    1.000000e+00    0.000000e+00
>>>>> 0.000000e+00    1.000000e+00    0.000000e+00
>>>>> 0.000000e+00    1.000000e+00    0.000000e+00
>>>>> 0.000000e+00    1.000000e+00    0.000000e+00
>>>>> 0.000000e+00    1.000000e+00    0.000000e+00
>>>>> 0.000000e+00    0.000000e+00    1.000000e+00
>>>>> 0.000000e+00    0.000000e+00    1.000000e+00
>>>>> 0.000000e+00    0.000000e+00    1.000000e+00
>>>>> 0.000000e+00    0.000000e+00    1.000000e+00
>>>>> 0.000000e+00    0.000000e+00    1.000000e+00
>>>>> 0.000000e+00    0.000000e+00    1.000000e+00
>>>>> 0.000000e+00    0.000000e+00    1.000000e+00
>>>>> 0.000000e+00    0.000000e+00    1.000000e+00
>>>>> 0.000000e+00    0.000000e+00    1.000000e+00
>>>>> 0.000000e+00    0.000000e+00    1.000000e+00
>>>>>
>>>>>
>>>>>                                BPD 1   BPD 2   HC
>>>>> /ContrastName1  BPD 1 > BPD 2     1      -1     0
>>>>> /ContrastName2  BPD 1 < BPD 2    -1       1     0
>>>>> /ContrastName3  BPD 1 > HC        1       0    -1
>>>>> /ContrastName4  BPD 1 < HC       -1       0     1
>>>>> /ContrastName5  BPD 2 > HC        0       1    -1
>>>>> /ContrastName6  BPD 2 < HC        0      -1     1
>>>>> /NumWaves       3
>>>>> /NumContrasts   6
>>>>> /PPheights              1.000000e+00    1.000000e+00    1.000000e+00
>>>>>   1.000000e+00    1.000000e+00    1.000000e+00
>>>>> /RequiredEffect         1.577   1.577   1.763   1.763   1.763   1.763
>>>>>
>>>>
>>>>
>>>
>>
>>
>