Dear Bill,
> Thanks very much for your advice. I couldn't get the code you sent to
> identify the lost onset to run, but I discovered that the problem is
> related to the nature of the onsets sequence I had been using.
>
> Essentially, the onset vectors for my single-condition analysis were
> non-contiguous, as I had ignored all onsets referencing baseline epochs.
> This meant there were 'gaps' in my time series and when I re-ran the
> analysis using a contiguous onsets vector I no longer had the problem of
> missing onset(s).
>
> In discussing this with others I now understand that whether or not
> non-contiguous vectors can be used in SPM, these may have been
> inappropriate anyway, independently of the problems I had with SPM, so I
> wonder if you could provide advice regarding these issues.
I don't quite understand what you mean by a 'contiguous' onset vector.
In SPM99, you model your measured response by convolving stick functions
placed at the onsets with a basis function. After discretization, these
regressors are placed into the design matrix. Usually, baseline is not
modelled by these condition-specific regressors but rather by the
session mean, so you don't need onsets for your baseline.
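To make this concrete, here is a rough numpy sketch of the idea (the scan
count and onsets are made up, and the gamma-shaped impulse response below
only stands in for SPM99's actual canonical HRF in spm_hrf.m):

```python
import numpy as np

# Assumed numbers for illustration: 100 scans at the TR mentioned
# below (12 s), with stimulus onsets given in scan units.
n_scans, TR = 100, 12.0
onsets = np.array([5, 20, 35, 50, 65, 80])  # assumed onsets (scans)

# Stick functions: a delta at each onset, zero elsewhere.
sticks = np.zeros(n_scans)
sticks[onsets] = 1.0

# A crude gamma-shaped stand-in for the canonical HRF, sampled at the
# TR (very coarse at TR = 12 s, but enough to show the mechanics).
t = np.arange(0.0, 32.0, TR)
hrf = t**5 * np.exp(-t)

# Convolve sticks with the HRF and truncate to the session length:
# this gives one condition-specific regressor.
regressor = np.convolve(sticks, hrf)[:n_scans]

# Design matrix: the condition regressor plus the constant column
# (session mean), which is what absorbs baseline -- hence no need
# for baseline onsets.
X = np.column_stack([regressor, np.ones(n_scans)])
print(X.shape)  # (100, 2)
```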
>
> Firstly, will not including all epochs in my onsets vector cause problems
> with the high-pass filtering/autocorrelation in SPM? (even though all scans
> were initially selected when setting up the analysis design).
Your model should always be a good description of what you expect to
measure. If you don't model effects induced by your experimental
paradigm, these will end up in the residuals and decrease your
sensitivity. The usual way is to select all scans of a session (a
contiguous time series) and model them appropriately. If you leave out
scans, you have gaps in your time series and should, theoretically,
model each of these mini time-series as its own session.
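For instance (scan counts assumed for illustration), two mini time-series
separated by a gap would each get their own mean column in the design
matrix, so the discontinuity is not absorbed into the condition
regressors:

```python
import numpy as np

# Assumed: 40 scans before the gap, 60 after, concatenated.
n1, n2 = 40, 60

# One session-mean column per mini time-series.
mean1 = np.concatenate([np.ones(n1), np.zeros(n2)])
mean2 = np.concatenate([np.zeros(n1), np.ones(n2)])
X_session_means = np.column_stack([mean1, mean2])
print(X_session_means.shape)  # (100, 2)
```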
>
> Also, does removing some epochs from the analysis effectively change the
> design of my experiment? Due to the long TR (12 s), clustered volume
> acquisition (3 s) and the nature of the stimulation (random), I have no
> reason to believe that there are any dependencies between successive
> epochs, so 'conceptually' I wouldn't have thought this would be a
> problem, but I guess I could model this using the AR(1) basis function
> to test for this effect.
I would certainly assume that removing some epochs from your data
changes the model you should use. In SPM99, the autocorrelation is
modelled by an AR(1) model that attempts to estimate the error
covariance matrix. This doesn't involve a basis function in the design
matrix. Also note that the AR(1) model does not describe correlations
within your signal; it models the autocorrelation of your noise
process.
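A small numpy sketch of the distinction (the AR coefficient and series
length here are assumed, and this simple lag-1 estimator only stands in
for SPM99's actual estimation of the error covariance):

```python
import numpy as np

# AR(1) describes the noise: e[t] = a * e[t-1] + w[t], with white
# innovations w. It says nothing about correlations between stimuli.
rng = np.random.default_rng(0)
a_true, n = 0.3, 5000  # assumed AR coefficient and series length
w = rng.standard_normal(n)
e = np.zeros(n)
for t in range(1, n):
    e[t] = a_true * e[t - 1] + w[t]

# Estimate the AR(1) coefficient from the lag-1 autocorrelation of
# the (mean-centred) residual series -- roughly what would be done
# with GLM residuals to build the error covariance matrix.
e0 = e - e.mean()
a_hat = np.dot(e0[1:], e0[:-1]) / np.dot(e0, e0)
print(round(a_hat, 2))  # should come out near 0.3
```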
>
> Finally, if I do need to include all onsets in the time series, regardless
> of whether they were baseline or not, in order to do a parametric analysis,
> should I have two conditions rather than one in my analysis design
> (baseline and stimulus) and just include the stimulus condition/trial in
> the parametric analysis? If so, what parameter values should I enter for
> the baseline condition? Zero is one of the parameter values I require in
> the stimulus condition... although I guess they are mean-corrected anyway,
> so maybe this doesn't matter?
I would model only your activation condition. The reason why this works
is that the baseline is estimated by your session mean (the constant
regressor). You don't need to worry about onsets of your baseline.
Stefan
--
Stefan Kiebel
Functional Imaging Laboratory
Wellcome Dept. of Cognitive Neurology
12 Queen Square
WC1N 3BG London, UK
Tel.: +44-(0)20-7833-7478
FAX : -7813-1420
email: [log in to unmask]