Thanks very much for your feedback.


As Helmut's test suggests, setting a dummy regressor at the last volume does
not really work.


I have figured out a way to set up session-specific contrasts, but I am
unsure about the scaling issues that arise.


Assume my task has 3 sessions, and each session has 4 conditions of
interest, each coded with its own regressor: A, B, C and D.


Let's say I am interested in the B > C contrast.


Let's also say that there are no events for condition C in session 2.


At the 1st level, I don't replicate the contrast over sessions, and I
specify the B > C contrast in sessions 1 and 3 but not in session 2. So my
contrast vector looks like this:

[0 .5 -.5 0 0 0 0 0 0 .5 -.5 0]


Compare this with a subject who has events for all conditions in every
session, whose contrast vector would look like this:

[0 .33 -.33 0 0 .33 -.33 0 0 .33 -.33 0]
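
For what it's worth, a small MATLAB sketch of how such a vector can be
assembled (illustrative only; it assumes the columns within each session are
ordered A B C D, with no nuisance regressors):

  % Build a B > C contrast over the sessions in which C actually occurs,
  % normalised so the positive weights sum to +1 and the negative to -1.
  hasC  = [true false true];          % condition C present in sessions 1 and 3 only
  block = [0 1 -1 0] / sum(hasC);     % per-session weights for [A B C D]
  con   = zeros(1, 4*numel(hasC));
  for s = find(hasC)
      con((s-1)*4 + (1:4)) = block;
  end
  disp(con)                           % [0 .5 -.5 0  0 0 0 0  0 .5 -.5 0]
  sum(con(con > 0))                   % +1
  sum(con(con < 0))                   % -1

Setting hasC = [true true true] gives the [0 .33 -.33 0 ...] vector above.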



I'm a little concerned that, because a different number of sessions enters
each contrast, the two contrasts may be scaled differently and therefore not
be appropriate for comparison at the 2nd level.


I have read that this can be addressed by ensuring that the positive
contrast weights sum to 1 and the negative weights sum to -1, as I have done above.
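
A toy check of what each version of the contrast then estimates (the beta
values below are made up purely for illustration):

  % With weights summing to +1/-1, each contrast returns the average of the
  % per-session (B - C) effects, whether two or three sessions contribute.
  betaB = [1.2 0.9 1.5];              % hypothetical B betas, sessions 1-3
  betaC = [0.4 0.7 0.6];              % hypothetical C betas, sessions 1-3
  beta  = reshape([zeros(1,3); betaB; betaC; zeros(1,3)], 1, []);  % [A B C D] x 3

  con3 = repmat([0 1 -1 0]/3, 1, 3);            % all three sessions usable
  con2 = [0 1 -1 0,  0 0 0 0,  0 1 -1 0] / 2;   % session 2 dropped

  con3 * beta'                        % mean of (B - C) over sessions 1-3
  con2 * beta'                        % mean of (B - C) over sessions 1 and 3

Both values are an average per-session difference, which is the point of the
+1/-1 normalisation.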


I was wondering whether this is correct, or whether I need to perform some
additional scaling.


SPM8 only allows scaling of contrasts that are replicated over sessions, so I
have entered these weights by hand.
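
For reference, a sketch of how such a hand-specified contrast can be entered
through the batch rather than the GUI (field names as I understand the SPM8
contrast module; please check against your installation):

  % Hand-specified, non-replicated B > C contrast for sessions 1 and 3.
  matlabbatch{1}.spm.stats.con.spmmat = {fullfile(pwd, 'SPM.mat')};
  matlabbatch{1}.spm.stats.con.consess{1}.tcon.name    = 'B > C (sessions 1 and 3)';
  matlabbatch{1}.spm.stats.con.consess{1}.tcon.convec  = [0 .5 -.5 0  0 0 0 0  0 .5 -.5 0];
  matlabbatch{1}.spm.stats.con.consess{1}.tcon.sessrep = 'none';   % no replication or scaling
  matlabbatch{1}.spm.stats.con.delete = 0;                         % keep existing contrasts
  spm_jobman('run', matlabbatch);

(If the design also contains nuisance regressors and session constants, the
vector may need zero-padding to the full design width.)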


Thanks again,

Ari Pinar

PhD Candidate
Monash Clinical & Imaging Neuroscience
School of Psychology and Psychiatry &
Monash Biomedical Imaging
Monash University

A: 770 Blackburn Rd, Clayton, 3168, Vic, Australia
E: [log in to unmask]




On 19 October 2013 03:37, H. Nebl <[log in to unmask]> wrote:

> A few comments.
>
>
> Concerning contrasts: If there are no trials of a particular type then
> don't use this session for any related contrast. There's simply nothing to
> look at. Some dummy regressor might reflect anything but the assumed brain
> activation for this condition. When averaging across sessions you will also
> contaminate the estimates for these sessions. Leaving this aside, I would
> look at differences between error types only if you have at least a few
> trials for each of the categories. Otherwise it's probably comparing noise
> with noise. It might also be problematic because some subjects are prone to
> mistakes (many trials, better to estimate) and others aren't (few trials,
> noisy, possibly bad estimate).
>
>
> Concerning the proposed dummy regressor, does this work? I just tried with
> a data set consisting of a single run with 234 volumes, TR = 2 s. For the
> single-subject model I used the default microtime resolution of 16 and a
> microtime onset of 8.
>
> When the onset of the dummy regressor is identical to the end of the
> experiment = 468 s with a duration of 0 s, then the regressor can't be
> properly estimated (beta not uniquely specified) and one cannot define
> corresponding contrasts. However, the voxels of the beta image are not 0 or
> NaN but have very small values +/- 10^-13.
>
> The earliest onset that results in a unique beta for parameter
> estimability is 466.8124, whereas 466.8125 does not work - why? With
> 466.8124, the values in the design matrix range between ~ +/- 10^-9 after
> high-pass filtering (default 128 s cut-off; it seems to introduce some
> ripples), and the last volume has a value of ~10^-7 (where the BOLD
> response starts). The beta image is full of extreme values, with a range of
> approx. +/- 10^6, so definitely nothing meaningful. In comparison, the beta
> of the constant term has values of 100 - 250. The beta estimates of the
> other regressors are somewhat different compared to the model without the
> dummy regressor, but it's the same pattern.
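>
> For illustration, a minimal sketch of why such a regressor degenerates
> (it assumes the 234-volume, TR = 2 s run above, microtime resolution 16
> with onset 8, and SPM's canonical HRF via spm_hrf):
>
>   % A single event placed in the very last microtime bin convolves to a
>   % regressor that is zero at every sampled volume, so its beta cannot be
>   % uniquely estimated.
>   TR = 2;  nscan = 234;              % run parameters from the example above
>   T  = 16; T0 = 8;                   % microtime resolution and onset
>   dt = TR/T;
>   u  = zeros(nscan*T, 1);
>   u(end) = 1;                        % event in the last microtime bin
>   x  = conv(u, spm_hrf(dt));         % convolve with the canonical HRF
>   X  = x((0:nscan-1)*T + T0);        % sample at each volume's microtime onset
>   max(abs(X))                        % 0 -> a column of zeros in the design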
>
>
> Now this is a single observation of course. In this particular case it
> seems the dummy regressor is a nonsense regressor. Probably it's much
> better to have one df less/more instead of anything like that. But I would
> be interested in Karl's message quoted above.
>
>
>
> Best,
>
> Helmut
>