Hi Jon,
Yes, include an intercept in the model, or demean the data and the design (which is what the -D option in randomise does).
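For illustration, the two equivalent calls might look like this (the output name here is a placeholder, not from this thread):

randomise -i all_FA_skeletonised -o tbss -m mean_FA_skeleton_mask -d design.mat -t design.con -n 5000 --T2
(design.mat carries an explicit intercept column of ones)

randomise -i all_FA_skeletonised -o tbss -m mean_FA_skeleton_mask -d design.mat -t design.con -n 5000 --T2 -D
(no intercept column in design.mat; -D demeans the data and the design)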
All the best,
Anderson


On 31 October 2014 18:24, [log in to unmask] <[log in to unmask]> wrote:
Hi Anderson,

Thanks for the reply. So does that mean that either approach will model the intercept: including an EV3 with everybody set to 1, or using the -D flag?

Thanks,
Jon


On October 31, 2014 5:10:37 AM CDT, "Anderson M. Winkler" <[log in to unmask]> wrote:
Hi Jon,

As long as there are no repeated measurements, this design is correct, i.e., EV1 for age (continuous), EV2 for memory performance (continuous). However, you also need to add a column full of ones to model the intercept (EV3), or, if you are using randomise, include the option -D.
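For example, with the made-up, mean-centered values from your earlier email, the design matrix would look like this (a sketch, with the intercept as EV3):

EV1     EV2     EV3
-10     1.2     1
-8      1.5     1
-2      .8      1
2       -.8     1
10      -1.2    1
8       -1.5    1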

The contrasts shown in your second email are also correct (but add the extra zeros for EV3).
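That is, the contrast table from that email would simply gain a third column of zeros:

Title                   EV1     EV2     EV3
PosAgeEffect            1       0       0
NegAgeEffect            -1      0       0
PosMemEffect            0       1       0
NegMemEffect            0       -1      0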

All the best,

Anderson


On 28 October 2014 21:58, Siegel, Jonathan Todd <[log in to unmask]> wrote:
Sorry, that contrast was obviously incorrect and would be as follows:

Title                   EV1     EV2
PosAgeEffect            1       0
NegAgeEffect            -1      0
PosMemEffect            0       1
NegMemEffect            0       -1

Apologies for having to send a second email,
Jon

-----Original Message-----
From: FSL - FMRIB's Software Library [mailto:[log in to unmask]] On Behalf Of Siegel, Jonathan Todd
Sent: Tuesday, October 28, 2014 4:57 PM
To: [log in to unmask]
Subject: Re: [FSL] fslstats vs fslmeants

Hi Mark,

I'm sorry it's taken so long to respond, but there is actually no discrepancy between the fslmeants and fslstats functions as I had described before; it was merely a difference due to the formatting in Gedit, and as soon as I copied and pasted the output into Excel/SPSS, everything matched up perfectly. I did have another question regarding the GLM setup, though.

I have three groups of participants: young, middle, and old. If I choose to look at age as a continuous variable instead of breaking them into groups, does this look correct for the GLM setup? In this case everybody would be in group 1, age is my EV1, memory performance is my EV2, and I want to look at both the age-related changes in WM tracts when controlling for performance and the performance-related changes in WM tracts when controlling for age. I have mean-centered all of the variables, and for ease of demonstration I'll just post 6 made-up values (2 for each 'group', even though I'm looking at age as a continuous variable and therefore removing 'group').
Group   EV1     EV2
1       -10     1.2
1       -8      1.5
1       -2      .8
1       2       -.8
1       10      -1.2
1       8       -1.5

Then at the bottom I click the orthogonalization option and select the appropriate boxes for EV1 and EV2, respectively. The contrasts would simply be as follows (looking at the positive/negative linear effect of age/memory on WM FA values, since it's DTI data):
Title                   EV1     EV2
PosAgeEffect            1       0
NegAgeEffect            0       -1
PosMemEffect            1       0
NegMemEffect            0       -1

Does this seem correct at all, or do I need to add 'group' as EV1 and give everybody a value of 1 since they're all in the same group? I should note that, given the ages of the participants we have collected and the lack of gaps, it is appropriate to look at age as a continuous variable. Thank you so much for your help.
Best,
Jon

-----Original Message-----
From: FSL - FMRIB's Software Library [mailto:[log in to unmask]] On Behalf Of Mark Jenkinson
Sent: Friday, October 03, 2014 3:58 AM
To: [log in to unmask]
Subject: Re: [FSL] fslstats vs fslmeants

Hi,

Are you sure that this couldn't be caused by zero values being ignored in one case and not the other?
If you are sure, then please email an example of the txt file for a case with a large difference, and also include the results of using -V and -v with fslstats.
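For example, the total and nonzero voxel counts within the mask can be compared directly (using the same files from your earlier emails):

fslstats -t all_FA_skeletonised.nii.gz -k roimask -v
(voxel count and volume for all voxels within the mask)

fslstats -t all_FA_skeletonised.nii.gz -k roimask -V
(voxel count and volume for nonzero voxels only)

If the two counts differ, zeros within the mask are the likely cause.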

All the best,
        Mark


On 2 Oct 2014, at 21:47, "Siegel, Jonathan Todd" <[log in to unmask]> wrote:

> Hi Mark,
>
> Thanks for your reply. I was looking through the source code of the two functions, but being fairly new to coding/programming, I found it pretty difficult to see exactly what the differences between them were. To answer your question, yes, all of the masks are binarized. The values are almost identical for some participants and differ by only a couple of hundredths for others, but for some the difference is as big as a couple of tenths (e.g., .57 vs .75). I can attach the .txt outputs if you'd like to see them.
>
> Thanks,
> Jon
>
>
> On Oct 2, 2014, at 2:36 PM, Mark Jenkinson <[log in to unmask]> wrote:
>
>> Hi,
>>
>> Are your masks binary masks?
>> There is some difference between how certain tools handle non-binary masks, and we recommend always using binary masks to avoid such problems.
>>
>> If that is not the problem, can you give us some more information on how big the differences are?
>>
>> All the best,
>>      Mark
>>
>>
>> On 30 Sep 2014, at 16:03, Jon Siegel <[log in to unmask]> wrote:
>>
>>> Hi All,
>>>
>>> I just had a quick question regarding the differences between the fslstats function and the fslmeants function and how the calculations are done. Per Mark's recommendation, I used the call fslmeants -i all_FA_skeletonised.nii.gz -m roimask -o meants_roi, but then, out of curiosity and to get the number of voxels included in the mask, I tried running fslstats -t all_FA_skeletonised.nii.gz -k roimask -M -V > meants2_roi. I know the -M and -V flags in the fslstats call are supposed to exclude zero-valued voxels (i.e., only consider nonzero voxels) while the fslmeants call includes all voxels, but my masks were made by finding the intersection of the JHU ROIs and the mean_FA_skeleton_mask, and thus I wouldn't expect to find massive differences between the two. Unfortunately, there is a major discrepancy between the two outputs, and I'm curious whether there's a different formula and/or calculation used (perhaps they round differently)? Just for sanity, I checked to see whether there were different numbers of voxels included in the calculation (since one includes all voxels and the other includes only nonzero voxels), but they all seem to match up. Any ideas on this?
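>>>
>>> For reference, a sketch of what each call computes (assuming the same roimask in both):
>>>
>>> fslmeants -i all_FA_skeletonised.nii.gz -m roimask -o meants_roi
>>> (per volume: the mean over all voxels within roimask, zeros included)
>>>
>>> fslstats -t all_FA_skeletonised.nii.gz -k roimask -M -V > meants2_roi
>>> (per volume: the mean of nonzero voxels within roimask, then the nonzero voxel count and volume)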
>>>
>>> Thanks for your help,
>>> Jon