The critical issue is the power spectrum of the CONTRAST of
conditions, relative to the power spectrum of the haemodynamic
response (in order to maximise the area under the product).
For a simple [activation - rest] contrast in an on-off design,
the optimal blocked design is one with a fundamental frequency
close to the dominant frequency of the haemodynamic response
(from the signal perspective; not necessarily from the
psychological perspective), ie a block length of 10-12s, with
a fundamental period of 20-24s. With blocks longer than this,
contrasts that concentrate their power at higher frequencies will
yield greater measurable signal.
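This area-under-the-product argument can be sketched numerically. The
sketch below is illustrative only: it assumes a double-gamma HRF with
a peak near 5s and an undershoot near 15s (roughly the canonical
shape, not the exact SPM parameters), and uses a hard high-pass at
0.01 Hz as a crude stand-in for discarding the low-frequency noise
band. It sweeps the cycle length of an on-off design and reports how
much of the contrast's power survives both the HRF and the high-pass;
very short and very long cycles both lose power, while cycles of a few
tens of seconds do best.

```python
import numpy as np
from math import factorial

dt = 0.1                                 # time resolution (s)
t = np.arange(0, 960, dt)                # a 16-minute 'run'

def hrf(tt):
    # Double-gamma haemodynamic response: peak ~5 s, undershoot ~15 s
    # (illustrative parameters, not the exact SPM defaults).
    return (tt**5 * np.exp(-tt) / factorial(5)
            - tt**15 * np.exp(-tt) / (6 * factorial(15)))

def passed_power(period, hp_cutoff=0.01):
    # Power of an on-off [activation - rest] contrast that survives
    # HRF convolution plus removal of frequencies below hp_cutoff
    # (a crude stand-in for the scanner's low-frequency noise band).
    box = np.where((t % period) < period / 2, 0.5, -0.5)
    pred = np.convolve(box, hrf(np.arange(0, 32, dt)))[:t.size] * dt
    spec = np.fft.rfft(pred)
    spec[np.fft.rfftfreq(t.size, dt) < hp_cutoff] = 0
    return np.sum(np.abs(spec) ** 2) / t.size**2

for period in (8, 16, 24, 48, 96, 480):
    print(f"{period:3d} s cycle: {passed_power(period):.4f}")
```

The exact optimum depends on the assumed HRF and cutoff, but the broad
shape (a peak at intermediate cycle lengths, falling off steeply for
short cycles and for cycles long enough to sink into the noise band)
is robust.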
In Kalina's A1 B A2 B... design for example, even with equal
block lengths, the [(A1+A2)-B] contrast will have more power at
higher frequencies, to which fMRI is more sensitive (given the
low frequency noise), than an [A1-A2] contrast. Note that the
direction of the contrast is not relevant to the optimal
block length; the different results from Kalina's [(A1+A2)-B]
and [B-(A1+A2)] contrasts I assume reflect real effects - ie greater
activity in the B condition than in the average of the A1 and A2
conditions.
If one desires similar sensitivities to both (A1 - B) and
(A1 - A2) contrasts, then the permuted design suggested by
Kalina is good (though not as good as a fixed order B A1 A2
B A1 A2, setting aside the associated problems of counterbalancing).
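The point about the two contrasts living at different frequencies can
also be checked with a small sketch. Assuming equal 20s blocks in a
repeating A1 B A2 B cycle (and the same illustrative double-gamma HRF
and 0.01 Hz high-pass as before, both assumptions rather than anything
canonical), the [(A1+A2)-B] contrast alternates every block (a 40s
cycle), while [A1-A2] alternates only every other A block (an 80s
cycle) and is zero during B:

```python
import numpy as np
from math import factorial

dt = 0.1
t = np.arange(0, 960, dt)

def hrf(tt):
    # Double-gamma HRF (illustrative parameters).
    return (tt**5 * np.exp(-tt) / factorial(5)
            - tt**15 * np.exp(-tt) / (6 * factorial(15)))

def passed_power(weights, block=20.0, hp_cutoff=0.01):
    # 'weights' gives the contrast value during each block of the
    # repeating cycle; returns the power surviving HRF convolution
    # and the high-pass filter.
    idx = ((t % (block * len(weights))) // block).astype(int)
    box = np.asarray(weights, float)[idx]
    box -= box.mean()
    pred = np.convolve(box, hrf(np.arange(0, 32, dt)))[:t.size] * dt
    spec = np.fft.rfft(pred)
    spec[np.fft.rfftfreq(t.size, dt) < hp_cutoff] = 0
    return np.sum(np.abs(spec) ** 2) / t.size**2

# Repeating cycle A1 B A2 B, 20 s blocks:
sum_vs_B = passed_power([+1, -1, +1, -1])   # [(A1+A2)-B], 40 s cycle
a1_vs_a2 = passed_power([+1, 0, -1, 0])     # [A1-A2], 80 s cycle
print("(A1+A2)-B:", round(sum_vs_B, 3))
print("A1-A2:    ", round(a1_vs_a2, 3))
```

Even with identical block lengths, the versus-baseline contrast keeps
more of its power at the higher, better-placed frequency, and the
[A1-A2] regressor additionally has less total variance because it is
zero half the time.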
As for the question of different block lengths, the
answer depends on the average block length. For shortish
block lengths (eg 20s), equal block lengths are probably
better because they produce more power at the fundamental
modulation frequency close to the dominant haemodynamic
response frequency. For long activation block lengths (eg 60s),
then making rest (null) conditions shorther than activation
conditions is desirable, in order to move more experimental
power into frequencies closer to the haemodynamic response
and away from the low frequency noise.
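The same kind of sketch illustrates the unequal-block point. Under the
same assumptions as above (illustrative double-gamma HRF, 0.01 Hz
high-pass standing in for the low-frequency noise band), shortening
the rest blocks of a 60s activation design shortens the overall cycle
and so pushes the fundamental to a higher, less noisy frequency:

```python
import numpy as np
from math import factorial

dt = 0.1
t = np.arange(0, 960, dt)

def hrf(tt):
    # Double-gamma HRF (illustrative parameters).
    return (tt**5 * np.exp(-tt) / factorial(5)
            - tt**15 * np.exp(-tt) / (6 * factorial(15)))

def passed_power(on, off, hp_cutoff=0.01):
    # Power of the [activation - rest] regressor surviving HRF
    # convolution and removal of frequencies below hp_cutoff (Hz),
    # for 'on' seconds of activation followed by 'off' seconds of rest.
    box = np.where((t % (on + off)) < on, 1.0, 0.0)
    box -= box.mean()
    pred = np.convolve(box, hrf(np.arange(0, 32, dt)))[:t.size] * dt
    spec = np.fft.rfft(pred)
    spec[np.fft.rfftfreq(t.size, dt) < hp_cutoff] = 0
    return np.sum(np.abs(spec) ** 2) / t.size**2

print("60 s on / 60 s off:", round(passed_power(60, 60), 4))
print("60 s on / 20 s off:", round(passed_power(60, 20), 4))
```

With 60s rest blocks the 120s cycle puts the fundamental right in the
discarded low-frequency band, leaving only the weaker harmonics; with
20s rest blocks the 80s cycle keeps the fundamental above the cutoff.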
Kalina Christoff wrote:
> We ran an almost identically designed study, with two semantic task
> conditions (say A1 and A2) and a baseline (say B). There were
> 14-second blocks of baseline task interspersed in between 28-second blocks
> of semantic task, where every other semantic block was one of the two
> semantic task conditions. The order was basically: A1 B A2 B A1 B A2 B....
> (counterbalanced, as one might expect, across subjects).
> The problem I found with this though (which I wish I had thought of
> earlier) is that the frequency structure of the baseline compared to
> that of each of the experimental conditions is very different, giving a
> strong advantage to the baseline (which has twice the fundamental
> frequency of the exp. task). I would be really interested to hear people's
> comments on the theoretical side of this, and whether theoretically one
> would expect significant differences between power in A1 and A2 compared
> to B. The practical side of it was that we did observe much stronger
> activations in the B vs. A(1&2) comparison than in the A(1&2) vs. B
> comparison, and in fact the typical activations that one would expect to
> see in the A(1&2) vs. B comparison showed up only very weakly.
> As a suggestion, it seems to me that a possible solution would be to have
> a mini-block, pseudorandomized design, with blocks A1, A2, and B having
> equal durations, and appearing in a randomized fashion, e.g.:
> A1 B A1 A2 B A2 B etc...
> I would be curious to see what others have to say about this issue.
> Best wishes,
> On Tue, 25 Apr 2000, Russ Poldrack wrote:
> > hi - I wanted to run a question by the group about an issue related to
> > blocked designs. for a study I'm designing I have two conditions that I
> > would like to compare to each other (different types of auditory
> > stimuli) and to a silent baseline (stimuli are presented during gaps in
> > the scanner noise - TR=3s, acquisition time = 2s with 1s for stimulus
> > presentation). The question that I have regards how much null time I
> > need in order to be able to faithfully model the response for each
> > condition compared to the null baseline. In building an event-related
> > study I would usually want to have an amount of null-event time roughly
> > equal to the time spent on each condition of interest - this rationale
> > would lead me to use 12-second blocks of silence in between my 24-second
> > blocks of stimulation (where every other block is from one of the two
> > conditions). However, I worry that this may not be enough time to get a
> > good baseline estimate. Any advice on this issue would be greatly
> > appreciated.
> > Thanks,
> > russ
> > --
> > Russell A. Poldrack, Ph. D.
> > Assistant Professor of Radiology, Harvard Medical School
> > MGH-NMR Center
> > Building 149, 13th St.
> > Charlestown, MA 02129
> > Phone: 617-726-4060
> > FAX: 617-726-7422
> > Email: [log in to unmask]
> > Web Page: http://www.nmr.mgh.harvard.edu/~poldrack
> Kalina Christoff Email: [log in to unmask]
> Office: Rm.478; (650) 725-0797
> Department of Psychology Home: (650) 497-7170
> Jordan Hall, Main Quad Fax: (650) 725-5699
> Stanford, CA 94305-2130 http://www-psych.stanford.edu/~kalina/
DR R HENSON EMAIL [log in to unmask]
Wellcome Department of
Cognitive Neurology TEL (work1) +44 (0)20 7833 7483
12 Queen Square TEL (work2) +44 (0)20 7833 7472
London, WC1N 3BG FAX +44 (0)20 7813 1420