Dear Dr Clark,
Since no one else has piped up yet, I thought I would give this one a go.
>I'm trying to track down a problem with my experiment design where
>occasionally I get a grey bar (actually, 2 grey bars adjacent to one another,
>one corresponding to the hrf and one to the derivative) in the parameter
>estimability bar. I know this usually results from specifying the same onset
>for two conditions, but I have verified multiple times that this is not what
>is happening (and moreover, I am generating the design matrices automatically
>and most of these matrices have given me no grief at all).
So, it sounds as though you are doing an event-related analysis in
fMRI. Usually it's pretty hard to get two covariates collinear in an
event-related design matrix (unless they are identical), so to see
grey bars in the parameter estimability bar is indeed a bit of a
surprise. By contrast, in a block design it would be very easy to
accidentally over-specify the model, resulting in grey bars (there's
an example of this later in this message).
>So, I have one theory. In my experiment, bins are determined partly by a
>subsequent behavioral (memory) task. This unfortunately results in bins with
>very few (sometimes 0 or 1) members. The grey bars have only appeared twice,
>and each time they have corresponded to a condition with only one member
>(those with 0 are tossed out by the program generating the design matrix).
>
>Could small bins be preventing the SPM package from deconvolving the
>hemodynamics? And could this be why I'm seeing the grey bars?
SPM doesn't attempt to deconvolve the data (as one might do if one
wanted to get back to the original neural responses starting from the
BOLD signal). It simply uses multiple regression to try to fit the
data with a series of covariates, which often consist of the train of
expected neural responses convolved with a standard hrf to try to
model the expected haemodynamic response.
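To make that concrete, here is a rough sketch (not SPM's actual code) of what "multiple regression with hrf-convolved covariates" means. The double-gamma hrf parameters, TR, and onsets below are all illustrative values I have made up, not SPM's exact canonical ones:

```python
import math
import numpy as np

# Illustrative sketch of an event-related covariate: an event train
# convolved with an assumed double-gamma hrf, then fit by least squares.

TR = 2.0                  # repetition time in seconds (assumed)
n_scans = 100

def gamma_pdf(t, shape, scale=1.0):
    """Gamma density, used to approximate the hrf shape."""
    t = np.asarray(t, dtype=float)
    return t ** (shape - 1) * np.exp(-t / scale) / (math.gamma(shape) * scale ** shape)

t = np.arange(0, 30, TR)                       # 30 s of hrf sampled at the TR
hrf = gamma_pdf(t, 6) - gamma_pdf(t, 16) / 6   # peak ~5 s, small undershoot

# Train of expected neural responses: delta functions at the event onsets
events = np.zeros(n_scans)
events[[10, 40, 70]] = 1                       # onsets in scan units (made up)

# The covariate is the event train convolved with the hrf
regressor = np.convolve(events, hrf)[:n_scans]

# Ordinary least squares: fit the data with this covariate plus a constant
X = np.column_stack([regressor, np.ones(n_scans)])
y = 2.0 * regressor + 1.0                      # noiseless toy "data"
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)                                    # close to [2.0, 1.0]
```

No deconvolution anywhere: the expected response is built by forward convolution, and the only question is how much of each covariate best fits the data.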
However, if you have only one event, which occurs within a couple of
seconds of the end of the experiment, then this might generate a
covariate which in fact is just a column of zeros for the whole
experiment, and a 1st temporal derivative which is therefore also a
column of zeros (appearing in mid-grey). This is because although
the event occurs before the end of the experiment, the response lags
sufficiently that there is no response at all in the modelled
covariate, which assumes a delay of a couple of seconds before the
response starts to rise. A quick glance at your design matrix should
tell you if this is the problem.
Obviously the parameter for a column of zeros cannot be estimated -
any value from minus infinity to plus infinity would model the data
equally well (i.e. the column would contribute nothing to the model).
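You can see this happen in a toy example (again with an assumed hrf, not SPM's code): a single event on the very last scan produces a covariate of pure zeros, because the hrf is still at zero at the moment of onset and the whole response falls outside the acquired scans.

```python
import math
import numpy as np

# One event on the final scan -> the convolved covariate never rises
# within the scanning window, leaving a column of zeros.

TR = 2.0
n_scans = 100

def gamma_pdf(t, shape, scale=1.0):
    t = np.asarray(t, dtype=float)
    return t ** (shape - 1) * np.exp(-t / scale) / (math.gamma(shape) * scale ** shape)

t = np.arange(0, 30, TR)
hrf = gamma_pdf(t, 6) - gamma_pdf(t, 16) / 6   # hrf is zero at t = 0

events = np.zeros(n_scans)
events[-1] = 1                          # one event, on the final scan

regressor = np.convolve(events, hrf)[:n_scans]
print(np.allclose(regressor, 0))        # True: nothing left to estimate

# A zero column makes the design matrix rank deficient, which is what
# the grey in the parameter estimability bar is flagging.
X = np.column_stack([regressor, np.ones(n_scans)])
print(np.linalg.matrix_rank(X))         # 1, not 2
```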
>I'm usually just going to throw out these conditions for subjects that have so
>few trials anyway, but I want to make sure that I'm not letting something
>slide by.
If you are throwing out columns of zeros, then clearly you lose
nothing - they contribute nothing to the model anyway.
>Also, there are some cases where out of 4 scans, only one has this problem
>with parameter estimability, and the rest of the scans have sizable bins. So,
>although scan 1 might only have 1 exemplar, the rest might have a total of 20.
> Should I be averaging in that 1 exemplar from the first scan, or should I
>toss it?
What you decide to do probably won't make much difference. However,
one possibility would be to assume that the size of the response in a
given subject doesn't change from one session to the next. If you
were prepared to do this, then you could model the events from all
four sessions in one column of the design matrix. This way you
needn't lose the contribution made by sessions in which very few
events occurred. You certainly wouldn't expect to run into any
problems with parameter estimability if you did this. Also you may
make your analysis more sensitive (if this assumption turns out to be
a reasonable one), since fitting fewer parameters leaves more degrees
of freedom for the error term.
However, if you want to do this, you probably won't want to assume
that there is no change in baseline between sessions, since this is
unlikely to be true. It would therefore be sensible to add, for
each subject, three user-specified covariates consisting of columns
looking like this (for an example in which there are only five scans
in each of the four sessions):

 3  3  3  3  3 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1
-1 -1 -1 -1 -1  3  3  3  3  3 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1 -1  3  3  3  3  3 -1 -1 -1 -1 -1

(In fact the 3s should be 1s, and the -1s should be minus 1/3, but it
would have taken too long to write it out like that!)
Note that you don't want to add a 4th column like this: -1 -1 -1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 3 3 3 3 3, because if you did, then
these 4 columns would sum to zero, and you would end up with even
more unwanted grey in your parameter estimability bar! Note also
that if you ask SPM to high-pass filter the data, it will also
high-pass filter the design matrix, so your user-specified columns
won't come out looking quite like the ones that you entered.
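A quick numpy check, sticking with the toy five-scans-per-session example above, confirms why the 4th column must be left out: the four columns sum to the zero vector, so the fourth adds no new information.

```python
import numpy as np

# Build the session-mean covariates (4 sessions of 5 scans each):
# 3 within the session, -1 everywhere else, so each column sums to zero.
n_sessions, scans_per = 4, 5
cols = []
for s in range(n_sessions):
    c = -np.ones(n_sessions * scans_per)
    c[s * scans_per:(s + 1) * scans_per] = 3
    cols.append(c)

X3 = np.column_stack(cols[:3])          # the three columns you want
X4 = np.column_stack(cols)              # ...plus the unwanted 4th

print(np.linalg.matrix_rank(X3))        # 3: all estimable
print(np.linalg.matrix_rank(X4))        # still 3: the 4th adds nothing
print(np.allclose(X4.sum(axis=1), 0))   # True: the 4 columns sum to zero
```

With only three columns the matrix is full rank; adding the fourth leaves the rank unchanged, which is exactly the linear dependence that would show up as more grey in the estimability bar.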
> Thanks!
>Dav Clark
No problem. Hope it has at least given you some food for thought,
Best wishes,
Richard.
--
from: Dr Richard Perry,
Clinical Lecturer, Wellcome Department of Cognitive Neurology,
Institute of Neurology, Darwin Building, University College London,
Gower Street, London WC1E 6BT.
Tel: 0207 679 2187; e mail: [log in to unmask]