We ran an almost identically designed study, with two semantic task
conditions (say, A1 and A2) and a baseline (say, B). There were
14-second blocks of baseline task interspersed between 28-second blocks
of semantic task, where every other semantic block was one of the two
semantic task conditions. The order was basically A1 B A2 B A1 B A2 B...
(counterbalanced, as one might expect, across subjects).
The problem I found with this, though (which I wish I had thought of
earlier), is that the frequency structure of the baseline is very
different from that of each of the experimental conditions, giving a
strong advantage to the baseline (which has twice the fundamental
frequency of the experimental task). I would be really interested to
hear people's comments on the theoretical side of this, and whether one
would theoretically expect significant differences between the power in
A1 and A2 compared to B. On the practical side, we did observe much
stronger activations in the B vs. A(1&2) comparison than in the
A(1&2) vs. B comparison, and in fact the typical activations that one
would expect to see in the A(1&2) vs. B comparison showed up only very
weakly.
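To make the frequency point concrete, here is a quick numpy sketch (not
our actual analysis code; the timings are taken from the design above,
sampled at 1 s just for illustration). It builds one boxcar regressor
per condition over ten 84-second cycles and locates each regressor's
fundamental frequency from its power spectrum:

```python
import numpy as np

TR = 1.0                                                 # 1 s sampling, for illustration
cycle = [("A1", 28), ("B", 14), ("A2", 28), ("B", 14)]   # one 84 s design cycle
n_cycles = 10

# Build one boxcar regressor per condition over the full run.
t_total = int(sum(dur for _, dur in cycle) * n_cycles / TR)
regs = {name: np.zeros(t_total) for name in ("A1", "A2", "B")}
t = 0
for _ in range(n_cycles):
    for name, dur in cycle:
        n = int(dur / TR)
        regs[name][t:t + n] = 1.0
        t += n

# Fundamental frequency = location of the largest spectral peak
# (mean is removed so the DC component does not dominate).
freqs = np.fft.rfftfreq(t_total, d=TR)
for name, reg in regs.items():
    power = np.abs(np.fft.rfft(reg - reg.mean())) ** 2
    f0 = freqs[np.argmax(power)]
    print(f"{name}: fundamental ~ {f0:.4f} Hz (period ~ {1 / f0:.0f} s)")
```

A1 and A2 each repeat every 84 s (fundamental 1/84 Hz), while B repeats
every 42 s (1/42 Hz), which is the factor-of-two advantage described
above: B's fundamental sits further from the low-frequency noise.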
As a suggestion, it seems to me that a possible solution would be a
mini-block, pseudorandomized design, with blocks A1, A2, and B having
equal durations and appearing in a randomized fashion, e.g.:
A1 B A1 A2 B A2 B etc...
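For what it's worth, such an order is easy to generate. The sketch below
is one possible way to do it (the no-immediate-repeat constraint and the
block counts are my own assumptions, not part of the proposal above;
adjust to taste):

```python
import random

def mini_block_order(n_each=8, block_s=14, seed=0):
    """Return a pseudorandomized list of (condition, duration) mini-blocks
    with equal counts of A1, A2, and B, rejecting orders where the same
    condition appears twice in a row (an assumed extra constraint)."""
    rng = random.Random(seed)
    blocks = ["A1"] * n_each + ["A2"] * n_each + ["B"] * n_each
    while True:
        rng.shuffle(blocks)
        if all(a != b for a, b in zip(blocks, blocks[1:])):
            return [(name, block_s) for name in blocks]

print(mini_block_order())
```

With equal durations, all three conditions share the same frequency
structure, so no condition gets the spectral advantage described above.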
I would be curious to see what others have to say about this issue.
On Tue, 25 Apr 2000, Russ Poldrack wrote:
> hi - I wanted to run a question by the group about an issue related to
> blocked designs. for a study I'm designing I have two conditions that I
> would like to compare to each other (different types of auditory
> stimuli) and to a silent baseline (stimuli are presented during gaps in
> the scanner noise - TR=3s, acquisition time = 2s with 1s for stimulus
> presentation). The question that I have regards how much null time I
> need in order to be able to faithfully model the response for each
> condition compared to the null baseline. In building an event-related
> study I would usually want to have an amount of null-event time roughly
> equal to the time spent on each condition of interest - this rationale
> would lead me to use 12-second blocks of silence in between my 24-second
> blocks of stimulation (where every other block is from one of the two
> conditions). However, I worry that this may not be enough time to get a
> good baseline estimate. Any advice on this issue would be greatly
> appreciated.
> Russell A. Poldrack, Ph. D.
> Assistant Professor of Radiology, Harvard Medical School
> MGH-NMR Center
> Building 149, 13th St.
> Charlestown, MA 02129
> Phone: 617-726-4060
> FAX: 617-726-7422
> Email: [log in to unmask]
> Web Page: http://www.nmr.mgh.harvard.edu/~poldrack
Kalina Christoff
Department of Psychology, Jordan Hall, Main Quad
Stanford, CA 94305-2130
Email: [log in to unmask]
Office: Rm. 478; (650) 725-0797
Home: (650) 497-7170
Fax: (650) 725-5699
http://www-psych.stanford.edu/~kalina/