Concatenation comes with a number of issues: high-pass filtering, constant
terms, the AR(1) model, BOLD responses to regressors spilling over across
run boundaries, and so on.
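
As a minimal illustration of the constant-terms point (the 3 runs of 282
scans are taken from the question below; the variable names are mine), each
run needs its own constant regressor once the sessions are stitched
together:

    % Per-run constant regressors for a concatenated time series
    nScans = [282 282 282];                      % scans per run (assumed)
    runReg = zeros(sum(nScans), numel(nScans));
    offset = 0;
    for r = 1:numel(nScans)
        runReg(offset + (1:nScans(r)), r) = 1;   % 1 during run r, 0 elsewhere
        offset = offset + nScans(r);
    end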

If you really want to concatenate the data, then look at the gPPI toolbox
function spm_estimate_PPI.m and pull out the sections of code related to
concatenation. That code uses a single-subject model to create the correct
concatenated model, with the filtering and regressors built on a per-run
basis. You should be able to adapt it for your analysis. Getting the
parametric modulator (PM) built correctly may be the hardest part, as the
code just pulls the values from the prior analysis.
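
As a rough sketch of what "filtering built on a per-run basis" can look
like (this is not the gPPI code itself; the run lengths, TR, and cut-off
below are assumptions), SPM's spm_filter accepts one partition per run:

    % Per-run high-pass filtering of a concatenated series
    nScans  = [282 282 282];                       % assumed run lengths
    TR      = 2;                                   % assumed repetition time (s)
    offsets = [0 cumsum(nScans(1:end-1))];
    for r = 1:numel(nScans)
        K(r).RT     = TR;
        K(r).row    = offsets(r) + (1:nScans(r));  % rows belonging to run r
        K(r).HParam = 128;                         % cut-off period (s)
    end
    K = spm_filter(K);    % builds the per-run DCT high-pass set (K(r).X0)
    % Y = spm_filter(K, Y) would then filter each run of the data separately.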

If you compute the mean of each PM for each session and use that as a
scaling factor (call it S), then you can recover the non-demeaned PM by
adding S*taskregressor. Then, across all runs, you can compute a new
scaling factor that is the mean of all PM values (call it S2) and subtract
S2*taskregressor in each run to get the multi-session-adjusted values, as
sketched below.
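
A minimal sketch of that adjustment, assuming you have the raw PM values
and the convolved task and PM regressor columns for each run (the variable
names below are hypothetical, not from the toolbox):

    % pmVals{r}  - raw PM values for run r (column vector, one per event)
    % taskCol{r} - convolved, unmodulated task regressor column for run r
    % pmCol{r}   - convolved PM regressor column for run r, demeaned per run
    S  = cellfun(@mean, pmVals);        % per-run means (S)
    S2 = mean(vertcat(pmVals{:}));      % grand mean across all runs (S2)
    adjCol = cell(size(pmCol));
    for r = 1:numel(pmCol)
        % undo the per-run demeaning, then re-centre on the grand mean:
        % (pmCol + S(r)*taskCol) - S2*taskCol
        adjCol{r} = pmCol{r} + (S(r) - S2) * taskCol{r};
    end

Because the HRF convolution is linear, adding or subtracting a multiple of
the convolved task regressor is equivalent to shifting the PM values before
convolution, so the column arithmetic above recovers the globally centred
PM regressor.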

I'll work on adding this to the PPI code in the future, but it shouldn't be
too hard to code this by hand.

Best Regards, Donald McLaren
=================
D.G. McLaren, Ph.D.
Research Fellow, Department of Neurology, Massachusetts General Hospital and
Harvard Medical School
Postdoctoral Research Fellow, GRECC, Bedford VA
Website: http://www.martinos.org/~mclaren
Office: (773) 406-2464
=====================


On Tue, Sep 2, 2014 at 9:32 PM, Andy Yeung <[log in to unmask]> wrote:

> Dear Donald and all,
>
> I wish to do the parametric modulation as a whole; that's why I am trying
> to concatenate.
>
> Best,
> Andy
>
>
> On Wed, Sep 3, 2014 at 4:17 AM, MCLAREN, Donald <[log in to unmask]>
> wrote:
>
>> You should not concatenate the sessions together. Just put 3 sessions
>> into the model.
>>
>> Best Regards, Donald McLaren
>> =================
>> D.G. McLaren, Ph.D.
>> Research Fellow, Department of Neurology, Massachusetts General Hospital and
>> Harvard Medical School
>> Postdoctoral Research Fellow, GRECC, Bedford VA
>> Website: http://www.martinos.org/~mclaren
>> Office: (773) 406-2464
>> =====================
>>
>>
>> On Thu, Aug 21, 2014 at 9:16 PM, Andy Yeung <[log in to unmask]>
>> wrote:
>>
>>> Dear all,
>>>
>>> May I ask: when we concatenate 3 sessions together and specify the onsets
>>> of conditions (in scans), is the first scan of sessions 2 & 3 also counted
>>> as scan zero?
>>>
>>> I mean, I have 282 scans per session and the first onset in each session
>>> is at the 7th scan.
>>> Should I input the onsets as [6, 288, 570] or [6, 289, 572]?
>>>
>>> Please help! Thank you very much!
>>>
>>> Best,
>>> Andy
>>>
>>
>>
>