Hi All,
I've been collecting some questions. Any help would be greatly appreciated:
1. I have a high-res 2D scan. When I scroll through the slices I clearly see movement between
slices. This is a bit disturbing because the subject (me!) is well practiced and was using a bite bar.
This suggests to me that when I acquire 3D scans, this same motion is present, but is invisible
because it gets averaged across voxels. In other words the brain gets smoothed and distorted.
So, my questions are: First, is it possible to do within-volume (slice-to-slice) motion correction with mcflirt?
Adjacent slices are very similar, so it seems to me that if you limit the correction to 2 dof,
this should work. Second, if this type of motion correction works, then wouldn't a 2D scan
produce less blurring/distortion than a 3D scan?
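For what it's worth, the 2-dof idea is easy to prototype outside mcflirt. Below is a minimal numpy sketch (my own illustration, not anything from FSL) that estimates the in-plane translation between two adjacent slices by phase correlation -- pure translation only, integer-voxel precision, no rotation or interpolation:

```python
import numpy as np

def inplane_shift(ref, mov):
    """Estimate the 2-dof (row/col translation) shift taking ref to mov,
    via FFT phase correlation; integer-pixel precision only."""
    # Cross-power spectrum: keep the phase difference, drop the magnitude
    cross = np.fft.fft2(mov) * np.conj(np.fft.fft2(ref))
    cross /= np.abs(cross) + 1e-12
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint wrap around to negative shifts
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

# Toy check with a synthetic "slice" shifted by (3, -2) voxels
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
mov = np.roll(ref, shift=(3, -2), axis=(0, 1))
print(inplane_shift(ref, mov))  # → (3, -2)
```

This is only the estimation step, of course; a real correction would also resample each slice and would need a smarter cost function for real (noisy, non-circularly-shifted) data.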
2. How can I change the defaults in the FEAT GUI? Specifically, I want the registration to use
mutual information rather than the default correlation ratio.
3. I use a set of basis functions for modeling my data. By default, I get three activation maps per
contrast. The three models that generate these maps are orthogonal to each other. The F-test is
a linear combination of the three models. What do you get if you switch to Real EV and set the
contrast to (1 1 1) for EV1, EV2, and EV3? Is this also a linear combination or is it a model that
weights the three functions equally?
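My understanding (illustrated below on synthetic data, not on FEAT itself) is that these two readings coincide: with orthonormal EVs, a (1 1 1) t-contrast tests b1+b2+b3, which is proportional to the single beta you would get from regressing on the equally weighted sum EV1+EV2+EV3 -- so it is both "a linear combination" and "the equal-weight model", whereas the F-test asks whether anything in the three-EV subspace is nonzero:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
# Three orthonormal regressors, standing in for the three basis-function EVs
X, _ = np.linalg.qr(rng.standard_normal((n, 3)))
y = X @ np.array([1.0, 0.5, -0.2]) + 0.1 * rng.standard_normal(n)

# Full three-EV GLM, then the (1 1 1) contrast of the betas
beta = np.linalg.lstsq(X, y, rcond=None)[0]
c = np.ones(3)
cope = c @ beta

# Single-regressor GLM on the equally weighted sum of the EVs
summed = (X @ c).reshape(-1, 1)
beta_sum = np.linalg.lstsq(summed, y, rcond=None)[0][0]

# cope equals (c.c) * beta_sum = 3 * beta_sum for orthonormal columns
print(cope, 3 * beta_sum)
```

The scale factor of 3 comes from the summed regressor having squared norm 3; the t-statistics of the two tests are identical.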
4. Last but not least: One of the biggest problems that I'm having with my data analysis is that
subjects get drowsy during the scan. I'm doing an event-related design in which reaction time is
very important. When subjects get drowsy, their RTs become really long (e.g. 2-10 sec). When
you model this, you get small activations for short RTs but huge activations for the long, drowsy
RTs. The first problem this creates is that I frequently get a "rank deficient" model. It also
appears to deweight the effect of the shorter trials and more heavily weight the effects of the
longer trials.
I can solve this in two ways. One is simply to delete all trials longer than some upper limit. But
then I worry that I'm weakening my model, because there are model-relevant events that I am
ignoring. The second is to model those trials as if they were short trials that end when the
subject pressed the button. For example, if the trial started at t=0 and the subject responded at
t=10, then I would model this as an event that started at t=9 and ended at t=10. But, this makes
me afraid that I am modeling this event just like all the normal, short events even though it is
clearly not a normal event and therefore may be screwing up my model.
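To make the two workarounds concrete, here is how I would implement them on an FSL-style three-column EV (onset, duration, weight); the trial values below are made up for illustration, with duration standing in for RT:

```python
# Each trial: (onset in s, RT in s used as duration, weight)
trials = [(0.0, 0.8, 1.0), (12.0, 1.1, 1.0), (30.0, 10.0, 1.0)]
RT_LIMIT = 3.0  # arbitrary cutoff for a "drowsy" trial

# Option 1: drop every trial whose RT exceeds the limit
kept = [t for t in trials if t[1] <= RT_LIMIT]

# Option 2: recode long trials as 1-s events ending at the button press,
# i.e. the (30, RT=10) trial becomes an event from t=39 to t=40
recoded = [t if t[1] <= RT_LIMIT else (t[0] + t[1] - 1.0, 1.0, t[2])
           for t in trials]

print(kept)     # the long trial is gone
print(recoded)  # the long trial is now (39.0, 1.0, 1.0)
```

Either list can then be written back out as a three-column custom timing file for FEAT.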
Does anyone have any opinions on how to deal with bad trials?
thanks a lot,
jack