Dear Marco,
> I would like an opinion on the following issue: we have submitted a
> paper on a PET experiment, where we collected data from 11 volunteers
> and performed different comparisons between 3 experimental conditions
> and a baseline with SPM96. During the experiment, we collected on-line
> reaction times and accuracy scores. In the paper we report that the
> behavioral performances (both reaction times and accuracy) of the 11
> subjects were significantly different from each other (p=0.0001).
> However, we didn't include the behavioral scores in any way in the SPM
> analysis (one reason was that we had no hypothesis about any possible
> physiological impact of the behavioral scores). We now have been asked
> from a referee to include the behavioral data for each single subject
> in the SPM analysis as a confound. Are you aware of any publications,
> where a similar approach was adopted? And, more generally, do you
> think such an approach would be correct?
This is my opinion:
This is a difficult question to answer in a general way. There are an
enormous number of instances where behavioural or psychophysical data
are entered into the design matrix as explanatory variables. The
objective here is generally to find the neurophysiological correlates
of the data in question. On the other hand it would be ridiculous to
include a behavioural response variable as a confound if its variance
was caused by an experimental factor already in the design matrix (for
example including visual analogue scores of pain as a confound in a
pain study would be silly).
The question you have to address is whether the RT and accuracy data
contain information that is independent of the effect you are
interested in and, if so, would the analysis be better if you used the
RT and accuracy data as surrogate markers for this confounding effect.
You then have to think carefully about the orthogonalization scheme you
would adopt in the case of collinearity.
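As a minimal sketch of what that orthogonalization amounts to (in Python with NumPy, using made-up scan counts and simulated reaction times purely for illustration; SPM performs this internally), the confound is reduced to the part of its variance that the experimental regressors do not already explain:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical design: 4 scans in each of 4 conditions (3 experimental + baseline)
n_scans = 16
conditions = np.kron(np.eye(4), np.ones((4, 1)))  # 16 x 4 indicator matrix
rt = rng.normal(600.0, 50.0, n_scans)             # simulated reaction times (ms)

# Orthogonalize the RT covariate with respect to the condition regressors,
# so it only models variance the experimental factors do not already explain.
X = conditions
proj = X @ np.linalg.pinv(X)   # projector onto the column space of X
rt_orth = rt - proj @ rt       # residual RT, orthogonal to every column of X

# Full design matrix: conditions of interest plus the orthogonalized confound
design = np.column_stack([X, rt_orth])
```

The choice of what to orthogonalize against what is exactly the modelling decision at issue: here the conditions keep all shared variance and RT only mops up what is left over.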
If the RT and accuracy data represent measures of the process you are
interested in, then you could include the RT and/or accuracy data as
regressors of interest and report these effects.
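A similarly hypothetical sketch of the alternative, treating a centred RT covariate as the regressor of interest in an ordinary least-squares fit at a single voxel (simulated data; in SPM the RT column would then be tested with a t-contrast):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated voxel time series with a small dependence on reaction time
n = 16
rt = rng.normal(600.0, 50.0, n)
y = 0.02 * (rt - rt.mean()) + rng.normal(0.0, 1.0, n)

# Design: centred RT as the regressor of interest, plus a constant term
X = np.column_stack([rt - rt.mean(), np.ones(n)])
beta, res, rank, sv = np.linalg.lstsq(X, y, rcond=None)
# beta[0] estimates the RT effect at this voxel; reporting such effects
# answers the question of what the behavioural measure correlates with.
```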
In general reviewers who specify that a particular statistical model
should be adopted before the report will be considered for publication
are, in my opinion, in danger of over-stepping their brief. On the
other hand, reviewers are, generally, only trying to help you present
your ideas in a valid and clear fashion. It may be that the reviewer
thinks you are mis-attributing activations to one experimental cause
when there is another more parsimonious explanation that is reflected
in the RT and accuracy data.
I hope this helps - Karl