Dear Guillaume,
Yes! Thanks a lot for the answer!
I once changed the defaults.stats.maxres value to investigate the
residual time course in a 1st-level analysis, but somehow the possible
effect of that setting on 2nd-level analyses didn't occur to me.
I have now rerun the estimation with the new maxres setting, and
confirmed that the results (as well as the resel counts) from the two
models with different subject orders are identical.
Thanks again for your great help!
Best,
Taka
2014-02-28 21:16 GMT+09:00 Guillaume Flandin <[log in to unmask]>:
> Dear Taka,
>
> thanks for sending me the files.
>
> The difference is due to the smoothness estimation: the resel count
> varies between 572.9 and 625.9, as displayed in the footnotes. The
> reason is that, for computational expediency, SPM uses at most 64
> residual images (by default), evenly spaced among your 108 input images:
> round(linspace(1,108,64))
> So depending on the order of your input images, different ones are
> going to be used when estimating smoothness. This was already observed
> by Donald McLaren.
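> As a minimal sketch of why the order matters (the swap of the first
> two scans below is just a hypothetical example):
> idx = round(linspace(1,108,64));   % fixed positions in the image list
> order1 = 1:108;                    % original scan order
> order2 = [2 1 3:108];              % first two scans swapped
> isequal(order1(idx), order2(idx))  % false: different images selected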
> To enforce obtaining the same results, you can increase the default
> value from 64 to 128 in spm_defaults.m:
> defaults.stats.maxres = 128;
> If SPM is already running, you can temporarily change this setting like
> this before model estimation:
> spm_get_defaults('stats.maxres',128)
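> (You can confirm that the change took effect by querying the value
> back: spm_get_defaults('stats.maxres') should then return 128.)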
>
> Best regards,
> Guillaume.
>
>
> On 28/02/14 11:56, Takayuki Nozawa wrote:
>> Dear Guillaume,
>>
>> Thank you very much for looking into the issue!
>> The attached ZIP file includes two SPM.mat files in which only the
>> order of subjects was changed, along with the printed results for
>> some contrasts on those two models.
>>
>> Please let me know if any further clarification is needed.
>>
>> Best,
>> Taka
>>
>>
>> 2014-02-28 20:20 GMT+09:00 Guillaume Flandin <[log in to unmask]>:
>>> Dear Taka,
>>>
>>> when you say that the results are a little different, could you give
>>> an example of what the difference is?
>>> It would be useful for me if you could send me the estimated SPM.mat
>>> where you changed the order of the subjects.
>>>
>>> Many thanks,
>>> Guillaume.
>>>
>>>
>>> On 28/02/14 08:56, Takayuki Nozawa wrote:
>>>> Dear Cyril,
>>>>
>>>> Thanks for your response.
>>>>
>>>> For the independence option of the factors, we set
>>>> "Yes" for the subject factor and "No" for the other two within-subjects factors.
>>>>
>>>> And for the variance option, we tried
>>>> - "Equal" for all the factors,
>>>> - "Equal" for the subject factor and "Unequal" for the other two
>>>> within-subjects factors,
>>>> - "Unequal" for all the factors,
>>>> and observed the subject-ordering effect for each choice.
>>>> (The effect seemed more conspicuous for the choices with more
>>>> "Unequal" settings, probably due to the greater complexity of
>>>> estimating the whitening matrix, as you suggested.)
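>>>>
>>>> For reference, here is a sketch of how our second setting would
>>>> look as the factor fields of spm.stats.factorial_design.des.fblock
>>>> in the batch (assuming dept = 1 encodes "Independence: No" and
>>>> variance = 1 encodes "Variance: Unequal"; 'FactorA' and 'FactorB'
>>>> stand in for our two within-subjects factors):
>>>> fac(1).name = 'Subject'; fac(1).dept = 0; fac(1).variance = 0;
>>>> fac(2).name = 'FactorA'; fac(2).dept = 1; fac(2).variance = 1;
>>>> fac(3).name = 'FactorB'; fac(3).dept = 1; fac(3).variance = 1;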
>>>>
>>>> Any comments are appreciated.
>>>>
>>>> Best,
>>>> Taka
>>>>
>>>> 2014-02-28 17:03 GMT+09:00 Dr Cyril Pernet <[log in to unmask]>:
>>>>> Hi Taka
>>>>>
>>>>> it shouldn't really matter, but maybe the whitening matrix differs a bit --
>>>>> which options for independence etc. have you set?
>>>>>
>>>>> Cyril
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>> Dear SPM experts,
>>>>>>
>>>>>> Conducting a 2nd-level factorial analysis, I found that a change
>>>>>> in the order of subjects (scans) specified in the design leads to
>>>>>> small differences in the statistical results.
>>>>>>
>>>>>> In more detail, I specified a 2x2 within-subjects design using
>>>>>> flexible factorial. When I exchange the order of subjects
>>>>>> (e.g. using the "Subjects" option for the specification of scans
>>>>>> & factors, and "Replicating" a subject and then deleting the
>>>>>> original), the results for a specific contrast are a little
>>>>>> different. I'm using SPM8 rev.5236.
>>>>>>
>>>>>> So my questions are:
>>>>>> (1) Is this generally expected?
>>>>>> I think such an effect of subject order is possible (e.g. when
>>>>>> the modeled data are not uniform in scaling), considering that
>>>>>> the estimation procedure involves some Gram-Schmidt-like matrix
>>>>>> operations, which would accumulate rounding error differently
>>>>>> depending on the order of the modeled data (a toy sketch of this
>>>>>> kind of order dependence follows below).
>>>>>> But I'd like to make sure that the differences in the results are
>>>>>> not due to some mistake in my design specification.
>>>>>> (2) If this kind of order effect is genuine, and it affects the
>>>>>> significance of a result, what is the recommended way to address
>>>>>> it? Just sticking to the order in which the scans were acquired?
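>>>>>>
>>>>>> % Toy sketch of order-dependent rounding in floating point
>>>>>> % (illustration only, not SPM's actual estimation code):
>>>>>> x  = randn(1,1e5);
>>>>>> s1 = sum(x);            % sum in the original order
>>>>>> s2 = sum(x(end:-1:1));  % same numbers, reversed order
>>>>>> s1 - s2                 % typically tiny but non-zero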
>>>>>>
>>>>>> Best,
>>>>>> Taka
>>>>>>
>>>>>> --
>>>>>> Takayuki Nozawa
>>>>>> Assistant Professor, Smart Ageing International Research Center
>>>>>> Institute of Development, Aging and Cancer (IDAC), Tohoku University
>>>>>> [log in to unmask]
>>>>>>
>>>>>>
>>>>>
>>>>> --
>>>>> Dr Cyril Pernet,
>>>>> Academic Fellow
>>>>> Brain Research Imaging Center
>>>>> Neuroimaging Sciences
>>>>> University of Edinburgh
>>>>>
>>>>> Western General Hospital
>>>>> Division of Clinical Neurosciences
>>>>> Crewe Road
>>>>> Edinburgh
>>>>> EH4 2XU
>>>>> Scotland, UK
>>>>>
>>>>> [log in to unmask]
>>>>> tel: +44(0)1315373661
>>>>> http://www.sinapse.ac.uk/
>>>>> http://www.sbirc.ed.ac.uk/cyril
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>
>>> --
>>> Guillaume Flandin, PhD
>>> Wellcome Trust Centre for Neuroimaging
>>> University College London
>>> 12 Queen Square
>>> London WC1N 3BG
>>
>>
>>
>
> --
> Guillaume Flandin, PhD
> Wellcome Trust Centre for Neuroimaging
> University College London
> 12 Queen Square
> London WC1N 3BG
--
Takayuki Nozawa
Assistant Professor, Smart Ageing International Research Center
Institute of Development, Aging and Cancer (IDAC), Tohoku University
[log in to unmask]