Microtime bins allow you to specify the duration of your events at a
temporal resolution finer than the TR. If you couldn't specify durations
shorter than a TR, the models wouldn't fit the data as well.

You then need to convert from the microtime resolution back to the TR. In
SPM, this is done by selecting the microtime bin at the microtime onset
(offset) specified when building the model. This offset should match the
reference slice used in your slice-timing correction.
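
As a rough illustration (a sketch only; T, T0, and dt mirror SPM's microtime
settings, and x stands for any signal sampled at microtime resolution):

    RT = 2;          % TR in seconds
    T  = 16;         % microtime resolution: bins per TR (SPM's default)
    T0 = 8;          % microtime onset, chosen to match the slice-timing reference
    dt = RT/T;       % width of one microtime bin, in seconds

    % x holds a signal at microtime resolution (nscan*T points); taking one
    % bin per TR, at the microtime onset, brings it back to TR resolution.
    x_TR = x(T0:T:end);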

Best Regards,
Donald McLaren, PhD


On Sat, Feb 13, 2016 at 5:01 AM, fMRI <[log in to unmask]> wrote:

> Thanks Donald,
>
> Why does sampling at the TR matter? Why is the microtime/bin sampling
> important, and what does it actually do? I tried to look around and could
> not find references about this.
>
> Regards,
>
> AS
>
> On 13 Feb 2016, at 01:56, "MCLAREN, Donald" <[log in to unmask]>
> wrote:
>
> Convolution and then sampling at each TR, as is done in the gPPI toolbox.
> If you only use the VOI signal, you end up with a smoothed version of the
> VOI timeseries.
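>
> As a rough sketch of that sampling step (a sketch only; xb is assumed to
> be the neural estimate after convolution with the canonical HRF at
> microtime resolution, as in the sketch further down this thread, with RT
> the TR, dt the microtime bin width (PPI.dt), and T0 the microtime onset):
>
>     NT     = RT/dt;           % microtime bins per TR
>     ppi_TR = xb(T0:NT:end);   % one value per scan, taken at the microtime onset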
>
> Best Regards,
> Donald McLaren, PhD
>
>
> On Fri, Feb 12, 2016 at 4:49 AM, Aser A <[log in to unmask]> wrote:
>
>> Thanks Donald.
>>
>> How can I go back from microtime resolution to scan units or time? In
>> other words, how do I index the microtime resolution back to the number
>> of volumes?
>>
>> Many thanks
>>
>> Aser
>>
>> On Thu, Feb 11, 2016 at 10:13 PM, MCLAREN, Donald <
>> [log in to unmask]> wrote:
>>
>>> So you want to use the neural activity in the VOI as the PM? This would
>>> be a gPPI analysis.
>>>
>>> What were you using before as your PM?
>>>
>>> For PM analysis, you need 1 value per trial.
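>>>
>>> As a rough sketch of one way to get that (illustrative only, not gPPI
>>> code; it assumes onsets and durations are in seconds and xn is the
>>> deconvolved neural estimate at microtime resolution dt, i.e. PPI.dt):
>>>
>>>     pm = zeros(numel(onsets),1);
>>>     for t = 1:numel(onsets)
>>>         % microtime bins covered by this trial
>>>         i1 = max(1, round(onsets(t)/dt));
>>>         i2 = min(numel(xn), max(i1, round((onsets(t)+durations(t))/dt)));
>>>         pm(t) = mean(xn(i1:i2));   % one parametric-modulator value per trial
>>>     end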
>>>
>>>
>>> Best Regards,
>>> Donald McLaren, PhD
>>>
>>>
>>> On Thu, Feb 11, 2016 at 4:55 PM, fMRI <[log in to unmask]> wrote:
>>>
>>>> Thanks Donald,
>>>>
>>>> The problem with xn (the neuronal activity) is that it is sampled at the
>>>> microtime resolution (bins), so instead of 200 values (one per scan) I
>>>> have 9200. The dt is 0.54. However, I am not sure how to resample the xn
>>>> values to the scans. Any clue regarding this? I would appreciate a way to
>>>> solve this, or to make the x axis be in units of scans or time.
>>>>
>>>> I am also wondering whether the PM results are the same with and without
>>>> convolution.
>>>>
>>>> Kind Regards,
>>>>
>>>> Aser
>>>>
>>>> On 11 Feb 2016, at 20:31, "MCLAREN, Donald" <[log in to unmask]>
>>>> wrote:
>>>>
>>>> The deconvolution provides an estimate of the neural activity. It does
>>>> not use any of the stimulus information in the process, as neural
>>>> activity is continuous.
>>>>
>>>> I would run the gPPI process without estimating the model. Then you
>>>> will get the deconvolved variables, I believe. You need to modify the
>>>> PPPI.m function to save the xn variable, which is the neural activity.
>>>>
>>>> The deconvolution process provides you with the neural signal. If you
>>>> want something as a function of scans, you need to convolve the neural
>>>> activity (with the HRF) and resample it at each TR. This basically gives
>>>> you a smoothed version of the ROI BOLD signal.
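>>>>
>>>> As a rough sketch of that convolution step (a sketch only, assuming xn
>>>> is the saved neural estimate at microtime resolution and dt is the bin
>>>> width, PPI.dt):
>>>>
>>>>     hrf = spm_hrf(dt);       % canonical HRF sampled at the microtime bin width
>>>>     xb  = conv(xn, hrf);     % reconvolve: back to a BOLD-like (smoothed) signal
>>>>     xb  = xb(1:numel(xn));   % trim the convolution tail to the original length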
>>>>
>>>> Why are you trying to do deconvolution?
>>>>
>>>>
>>>>
>>>> Best Regards,
>>>> Donald McLaren, PhD
>>>>
>>>>
>>>> On Thu, Feb 11, 2016 at 9:38 AM, Aser A <[log in to unmask]> wrote:
>>>>
>>>>> Hi all and Donald
>>>>>
>>>>> If I would like to deconvolve an ROI signal (say, one I extract using
>>>>> the VOI tool or from a single voxel), how can I do this? I know that
>>>>> deconvolution is implemented in the PPI process, but I cannot get at it.
>>>>> Also, does the deconvolution account for changes in the amplitude of the
>>>>> stimuli? In other words, I want to account for this: for example, if some
>>>>> stimuli are more difficult (to perform) than others, then those stimuli
>>>>> should have a greater height than the simpler ones. Can this be accounted
>>>>> for in the deconvolution step?
>>>>>
>>>>> I have also noticed that the PPI deconvolution resamples the data such
>>>>> that the signal is no longer a function of scans. How can I return to a
>>>>> function of scans using the PPI.dt value?
>>>>>
>>>>> Many thanks
>>>>>
>>>>> Aser
>>>>>
>>>>
>>>>
>>>
>>
>