This question has some relevance to the one posed by Andrew Lawrence.
I have conducted a learning experiment in a blocked fMRI design.
Learning proceeds across six blocks alternating with baseline, and
reaction time (RT) data were collected.
I want to explore brain regions whose activity changes as a function
of time, and am faced with a number of possibilities. I could model a
linear or exponential change a priori. Alternatively, I could use the
RT data as a behavioural measure of learning. The problem with the
former approach is that the model is arbitrary: who is to say what is
the most appropriate model of learning in this study? The problem with
the latter is that the RT data are noisy with so few numbers (one
average RT per epoch (x6) per subject (x12)), and I want to model my
BOLD responses with as little noise as possible.
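To make the two options concrete, something like the following is what
I mean (a rough Python sketch, not SPM code; the time constant tau is
invented purely for illustration, which is exactly the arbitrariness
that worries me):

    import numpy as np

    n_blocks = 6
    block = np.arange(1, n_blocks + 1)

    # Option 1: a priori parametric regressors (arbitrary by construction).
    linear = block - block.mean()      # linear change over blocks, mean-centred
    tau = 2.0                          # illustrative time constant -- chosen, not known
    exponential = np.exp(-block / tau)
    exponential = exponential - exponential.mean()

    # Option 2: behavioural regressor -- the observed mean RT per block.
    # rt would be a (12 subjects x 6 blocks) array of per-epoch mean RTs;
    # with one value per epoch it is inevitably noisy.
    # behavioural = rt[subj, :] - rt[subj, :].mean()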
In summary, I'm faced with a choice between a clean but artificial
(and possibly incorrect) model and a noisy but real one. Currently I
plan to take up a suggestion made recently by Rik Henson to combine
the two. The hope is that subjects' actual performance would
contribute to the modelling, while the contribution of the noise
inherent in such data would be minimised. Does anyone have suggestions
about the best way to do this? At present I suppose I will fit the RT
data to an exponential function (sketched below) and hope for the
best. Of course, rather than getting the best of both worlds I may be
getting the worst of both. If anyone has a better idea they are
willing to share, I would be ever grateful.
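For what it's worth, this is the kind of thing I have in mind (again a
rough Python sketch; the RT values, functional form, and starting
estimates are all invented for illustration):

    import numpy as np
    from scipy.optimize import curve_fit

    def exp_learning(block, a, b, tau):
        # Exponential learning curve: starts near b + a, decays towards
        # an asymptote b with time constant tau.
        return b + a * np.exp(-block / tau)

    block = np.arange(1, 7)                              # the six learning blocks
    rt = np.array([812., 745., 690., 671., 660., 655.])  # illustrative mean RTs (ms)

    # Fit the curve to one subject's noisy per-epoch mean RTs...
    params, _ = curve_fit(exp_learning, block, rt, p0=(200., 650., 2.))

    # ...then enter the fitted (smoothed) values, rather than the raw
    # RTs, into the design matrix as the time-modulation regressor.
    regressor = exp_learning(block, *params)
    regressor = regressor - regressor.mean()             # mean-centre

The idea being that the fitted values still reflect each subject's own
performance, but the fitting constrains the epoch-to-epoch noise.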
Thank you for considering this problem.
Paul Fletcher
-----------------------------------------------------------------
Paul Fletcher,
Research Department of Psychiatry,
University of Cambridge,
Addenbrooke's Hospital,
Hills Road,
Cambridge,
UK
CB2 2QQ
Tel 01223 336 988
Fax 01223 336 581