Dear David,
I assume the same contrasts were being examined across the analyses? For
instance, if you are comparing Go to NoGo trials, in the initial analyses
the contrasts would simply be either [1] or [-1], depending on the
direction you're interested in. Assuming you are only really interested in
correct trials, that contrast would change to either [-1 0 1 0] or
[1 0 -1 0], given an EV order of correct, incorrect, correct, incorrect
(i.e. Go-correct, Go-incorrect, NoGo-correct, NoGo-incorrect). Is that
right? I realise there may be group comparisons as well, but for the
moment I'm only asking about the simple main effects within group. By
modeling all of these trial types, your design presumably became rank
deficient, whereas it wouldn't have been previously. So in the second
analyses it is important to use contrasts that sum to zero rather than
things like [1] or [-1] -- those could certainly lead to strange z-values.
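If it helps to check this concretely, here is a small numpy sketch (the
design matrix, trial order, and the estimability test are purely
illustrative, not taken from your actual design) showing why a lone
[1]-style contrast stops being estimable once all trial types are
modeled, while a zero-sum contrast stays fine:

import numpy as np

# Toy design: intercept plus four EVs in the order Go-correct,
# Go-incorrect, NoGo-correct, NoGo-incorrect, one row per trial.
# Every trial falls into exactly one EV, so the four EV columns
# sum to the intercept column -- hence the rank deficiency.
labels = [0, 1, 2, 3, 0, 1, 2, 3]        # illustrative trial order
X = np.zeros((len(labels), 5))
X[:, 0] = 1.0                            # intercept
for row, ev in enumerate(labels):
    X[row, 1 + ev] = 1.0

print(np.linalg.matrix_rank(X))          # 4, not 5: rank deficient

def estimable(c):
    # A contrast is estimable iff it lies in the row space of X;
    # this projects c onto that space and checks it is unchanged.
    return np.allclose(X.T @ np.linalg.pinv(X.T) @ c, c)

print(estimable(np.array([0., 1, 0, 0, 0])))   # False: a lone [1]
print(estimable(np.array([0., 1, 0, -1, 0])))  # True: [1 0 -1 0]

The projection test is just a generic way of asking whether a contrast is
estimable in a rank-deficient design; any contrast that fails it will give
you meaningless parameter estimates and strange z-values.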
There is another issue in the design you mentioned -- namely, if the ISI
matches the TR then every trial's haemodynamic response is sampled at the
same few post-stimulus time points, an intrinsic (and strong) bias in how
the HRFs are sampled that can dramatically affect the results. Once you
change the contrast to an actual comparison of EVs, this bias can either
over- or under-estimate effects. In other words, the new contrasts may not
reflect the true BOLD differences between conditions.
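To see the sampling problem concretely, here is a toy sketch (the TR, ISI,
number of trials, and the SPM-style double-gamma HRF parameters are all
illustrative defaults, not your protocol) of which post-stimulus lags the
scanner actually hits when the ISI is locked to the TR versus when it is
not:

import numpy as np
from scipy.stats import gamma

# SPM-style double-gamma HRF on a fine time grid (parameters are
# common defaults, used here purely for illustration).
dt = 0.1
t = np.arange(0.0, 32.0, dt)
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
hrf /= hrf.max()

def sampled_lags(isi, tr, n_trials=20):
    # Post-stimulus lags at which scans actually sample each trial's HRF.
    onsets = np.arange(n_trials) * isi
    scans = np.arange(0.0, onsets[-1] + 32.0, tr)
    lags = scans[None, :] - onsets[:, None]
    return np.unique(np.round(lags[(lags >= 0) & (lags < 32)], 3))

locked = sampled_lags(isi=3.0, tr=3.0)   # ISI locked to TR
jitter = sampled_lags(isi=3.5, tr=3.0)   # ISI not a multiple of TR
print(len(locked), len(jitter))          # few fixed lags vs dense coverage
print(np.interp(locked, t, hrf))         # the only HRF values ever sampled

When ISI == TR, every trial is sampled at the same handful of lags, so if
those lags systematically miss (or hit) the HRF peak, the estimated
effects for every trial are biased in the same direction.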
Finally, depending on the number of errors subjects made, the modeling
could be telling you that most of the results in the original analysis were
driven by error trials (presumably this is what your reviewers are worried
about). Were there significant error rates, and did these differ across
groups?
Joe
--------------------
Joseph T. Devlin, Ph. D.
FMRIB Centre, Dept. of Clinical Neurology
University of Oxford
John Radcliffe Hospital
Headley Way, Headington
Oxford OX3 9DU
Phone: 01865 222 738
Email: [log in to unmask]