Dear Dominic,
I hope that all is well with you.
>May I ask your advice about designing a new fMRI experiment?
>I envisage 4 trial types (a1, a2, b1, b2), each spaced by an ISI of 3 seconds.
>Two of the trial types (a1, a2) would be performed in one response domain
>(say auditory) and would be pseudorandomised in order in a block (A), and
>similarly b1 and b2 would be in another domain (say visual) and
>pseudorandomised in a block (B). Blocks A and B would be ordered with an
>appropriate control block C in a pseudorandom or palindromic order.
>My principal interest would be comparing blocks A and B against each other
>and against block C, and this I feel would constitute a valid 'cognitive
>subtraction' for an epoch-based analysis.
Sure, provided you are happy in this analysis to treat a1 and a2 as
being equivalent; obviously, this analysis won't differentiate
between them.
>However, since I hope to be able to guarantee timing exactly (ie. known
>event SOAs) I would also be interested in trying to apply an event-related
>analysis to the within-block trials.
...which you would have to do to get at differential activation
between a1 and a2? Is this your motivation? In what follows I have
kind of assumed that it is, so it may not make all that much sense if
you are not interested in the a1 vs a2 comparison.
>For this I suspect I am at risk of
>low statistical power, having fallen between the simplicity of a block
>design and the more 'stochastic' ordering of events. Obviously by
>pseudorandomising the trial order within blocks I hope to have derived some
>statistical power.
Surely pseudorandomizing the events can only LOSE you statistical
power (compared with blocking them into little sub-blocks within the
main blocks, for example)? But that's OK. If you did
pseudorandomize the events, even with an SOA as short as 3 seconds,
during a reasonable length of experiment you might well still be able
to pick up differential activity.
But you have to ask yourself 'why pseudorandomize?'. With a 3 second
SOA and a1 on half the trials, the average time between one onset of
a1 and the next onset of a1 will be (during block A) only 6 seconds.
This is a bit quick compared to the
time-course of the hrf, which only reaches its peak at around 6
seconds. Consequently, most of the power in your experiment will
come from periods when there happen, by chance, to be runs of one
stimulus type (e.g. a1 a1 a1 a1....).
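In case it helps, here is a rough numpy sketch of that point (entirely
my own illustration, with an assumed 3 second SOA, 100 events and a
canonical-style double-gamma HRF): it compares the detectability of
the a1 minus a2 difference for a fully randomized ordering against a
mini-blocked ordering.

  import numpy as np
  from scipy.stats import gamma

  dt, soa, n_events = 0.1, 3.0, 100        # assumed values, 3 s SOA
  t_hrf = np.arange(0, 32, dt)
  hrf = gamma.pdf(t_hrf, 6) - gamma.pdf(t_hrf, 16) / 6.0  # double-gamma

  def detectability(labels):
      # build the a1-minus-a2 stick function, convolve with the HRF,
      # and return the sum of squares of the mean-centred regressor
      # (a crude stand-in for design efficiency: bigger = better)
      n = int(n_events * soa / dt) + len(t_hrf)
      x = np.zeros(n)
      for i, lab in enumerate(labels):
          x[int(i * soa / dt)] += 1.0 if lab == 'a1' else -1.0
      reg = np.convolve(x, hrf)[:n]
      reg -= reg.mean()
      return float(reg @ reg)

  rng = np.random.default_rng(0)
  randomized = rng.permutation(['a1', 'a2'] * (n_events // 2))
  mini_blocked = (['a1'] * 5 + ['a2'] * 5) * (n_events // 10)
  print('randomized  :', detectability(randomized))
  print('mini-blocked:', detectability(mini_blocked))

The mini-blocked ordering should come out well ahead here, which is
the sense in which the randomized sequence 'loses' you power.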
If your reason for wishing to pseudorandomize is because it is
important that subjects don't know whether it is a1 or a2 that is
coming up next, there is another option. During 'A' blocks, you
could change the probability of a1 occurring across the block, so
that there is an increased likelihood of runs of a1 a1 a1... etc.
occurring, but that this is sufficiently subtle that the subject
doesn't really notice.
So, you might imagine that you have an 'A' block of 30 sec duration
(so that the onset of these occurs, on average, about once every 90
sec), and within this, for the first five events the probability of
a1 is 70% and for the second five events the probability of a1 is
30%. This adds a component to the 'a1 vs a2' contrast with a cycle
length of about 30 sec. This would be a rather crude way of doing
it; a more fancy method might involve changing the probability
continuously between 30 and 70% with a sinusoidal profile.
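A throw-away sketch of how such a sequence might be generated (purely
illustrative; the numbers are the ones used above):

  import numpy as np

  rng = np.random.default_rng(1)
  n_events = 10                        # 30 s 'A' block at a 3 s SOA

  # crude two-step version: P(a1) = 70% for the first five events,
  # 30% for the second five
  p_step = np.array([0.7] * 5 + [0.3] * 5)

  # smoother version: P(a1) drifts sinusoidally between 70% and 30%
  # over the block, giving the ~30 sec cycle mentioned above
  p_sine = 0.5 + 0.2 * np.cos(2 * np.pi * np.arange(n_events) / n_events)

  def draw_block(p):
      return ['a1' if rng.random() < q else 'a2' for q in p]

  print(draw_block(p_step))
  print(draw_block(p_sine))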
This is obviously a compromise, though. You are extending your block
length to be a little bit longer than you would normally, in order to
be able to pack some relatively low-frequency a1 vs a2 differential
signal within it.
>My questions are then -
>1. The relatively poor power notwithstanding, is this sort of within-block
>event-related analysis feasible?
Yes. It slightly complicates the analysis, in that there is likely
to be a great deal of shared variance between the regressor for the
block and the sum of the regressors for the events within the block.
How you deal with this will depend on your exact question. You
might, for example, just not model the whole block with a box-car,
but in comparing block A with block B, use a 1 1 -1 -1 contrast
applied to the event trains a1 a2 b1 and b2. After all, the sum of
a1 and a2 will look just like the convolved box-car.
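As a sketch of what that design matrix might look like (my own toy
example: the onset times, scan grid and HRF here are all made up, not
anything from your design):

  import numpy as np
  from scipy.stats import gamma

  dt, n_scans = 1.0, 300
  t_hrf = np.arange(0, 32, dt)
  hrf = gamma.pdf(t_hrf, 6) - gamma.pdf(t_hrf, 16) / 6.0

  onsets = {                           # hypothetical onsets in seconds
      'a1': [0, 6, 9, 15], 'a2': [3, 12, 18, 21],
      'b1': [90, 96, 99, 105], 'b2': [93, 102, 108, 111],
  }

  def regressor(ons):
      stick = np.zeros(n_scans)
      for o in ons:
          stick[int(o / dt)] = 1.0
      return np.convolve(stick, hrf)[:n_scans]

  X = np.column_stack(
      [regressor(onsets[k]) for k in ('a1', 'a2', 'b1', 'b2')])
  c = np.array([1, 1, -1, -1])         # 'block A vs block B' contrast
  # note that X[:, 0] + X[:, 1] closely resembles the convolved box-car
  # for block A, which is why a separate box-car regressor is redundant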
>2. Should I be weaving 'null' events pseudorandomly into my trial sequence
>over and above the control blocks C as a baseline?
If your interest is in the comparison of a1 with a2, then no. The
more occurrences of each of these you have the more power you can
potentially gain, even with short SOAs. If you are interested in the
'simple main effect' of a1 against 'block A baseline', then yes, you
can use 'null events'. If you wanted to use the 'Dale and Buckner'
approach of event-related analysis, essentially using the
post-stimulus time histogram to do the analysis, then you would also
need null events.
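For what it is worth, the post-stimulus time histogram idea reduces
to something like the following (a toy sketch; 'bold' and the onset
lists are whatever you have extracted, and the window length is
arbitrary):

  import numpy as np

  window = 5                           # peristimulus window in scans

  def pst_average(bold, onset_scans):
      # average the time series over a fixed window after each onset
      segs = [bold[o:o + window] for o in onset_scans
              if o + window <= len(bold)]
      return np.mean(segs, axis=0)

  # with null events you can estimate the 'baseline' the same way and
  # subtract it, e.g.
  # pst_average(bold, a1_scans) - pst_average(bold, null_scans)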
If, however, you actually want to be able to distinguish between
brain areas which respond to a continuous mental 'set' during block
A, and other brain areas which respond to the specific events, then
this is more difficult. You would need the events to be quite
sparse, so that the sum of a1 and a2 DOESN'T look much like a box-car
for block A. Then you could orthogonalize the 'event' regressors
with respect to the box-car, and you might still have some
statistical power (I think that there is a Chawla & Friston paper in
which they did something a bit like this).
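The orthogonalization step itself is simple enough; one way of doing
it (an illustrative sketch, not the SPM routine):

  import numpy as np

  def orthogonalize(event_reg, boxcar_reg):
      # remove from the event regressor whatever it shares with the
      # block box-car, leaving only event-specific variance
      b = boxcar_reg.reshape(-1, 1)
      beta = np.linalg.lstsq(b, event_reg, rcond=None)[0]
      return event_reg - b @ beta

Whatever survives this step is variance the box-car cannot explain,
so event-specific regions can in principle still be detected, albeit
with reduced sensitivity.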
>Thank you for your time and thoughts on these questions.
You're welcome. I may not have entirely understood what you are
trying to do, in which case maybe get back to me (or the helpline) by
e-mail or give me a ring.
Best wishes,
Richard.
--
from: Dr Richard Perry,
Clinical Lecturer, Wellcome Department of Cognitive Neurology,
Institute of Neurology, Darwin Building, University College London,
Gower Street, London WC1E 6BT.
Tel: 0207 679 2187; e-mail: [log in to unmask]