Hi Jack,
Sorry I haven't answered this before; I've been out of the lab quite a
lot.
As for motion correction:
1 - we've only written the motion correction to handle a minimum of
6 dof properly for 3D data, though for 2D data it should use only 3 dof.
2 - one slice will be picked as a reference and, in your case, all
other slices will be aligned to that slice (the reference can be
selected with -refvol).
This isn't really what you want. I think you'd be better off scripting
up flirt calls rather than trying to do this with mcflirt (a rough
sketch of what such a script could look like is below), but this
problem is still very difficult and I don't know that you'll have a
lot of success.
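Something like the following might be a starting point - untested, the
filenames and slice count are placeholders, and you should check that
your version of flirt supports the -2D flag:

    #!/bin/sh
    # Chain the registration upwards from the reference: each slice is
    # registered to the previously corrected slice, not to the raw one.
    nslices=100                  # placeholder - set to your slice count
    fslroi slices reg0000 0 1    # slice 0 (timepoint 0) is the reference
    prev=reg0000
    list="reg0000"
    i=1
    while [ $i -lt $nslices ]; do
        cur=`printf "raw%04d" $i`
        out=`printf "reg%04d" $i`
        fslroi slices $cur $i 1  # extract slice i from the 4D file
        # -2D restricts flirt to in-plane (3 dof) transforms
        flirt -in $cur -ref $prev -out $out -omat ${out}.mat -2D
        prev=$out
        list="$list $out"
        i=`expr $i + 1`
    done
    fslmerge -t slices_mc $list  # reassemble the corrected series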
For modelling, the second option does make the most sense to me.
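In case it helps, in FEAT's custom 3-column timing format (onset,
duration, height) that would amount to something like the following.
The numbers are invented purely for illustration, with one slow trial
starting at t = 60 sec whose button press comes 29 sec in:

    fast.txt (all trials as ~1 sec events, including the re-timed one):

        12.0   1.0   1
        47.5   1.0   1
        89.0   1.0   1

    corrected.txt (the re-timed trials only; give this EV zero weight
    in your contrasts):

        89.0   1.0   1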
All the best,
Mark
On Wednesday, April 28, 2004, at 12:13 am, Jack Grinband wrote:
> Hi Mark,
> Thanks for answering all my questions!!!
>
>> How much is the motion anyway? It seems surprising/disturbing that
>> there is a visible amount.
>
> There appears to be a 1-3 voxel difference between adjacent slices
> (1 vox = ~1 mm), and it gets worse toward the top of the head. The
> scan is 18 min long with 100 slices; in other words, it takes about
> 11 sec per slice. Maybe that's too long. I'm still trying to
> determine whether this is an isolated problem (another subject and I
> both had it) or whether long scans are always corrupted by motion.
>
> Anyway, I'm trying to use motion correction to fix the problem. So I
> concatenated all the slices into a time series and ran mcflirt, but
> it didn't work quite right.
> 1. Is there any way to restrict the motion correction to two
> dimensions? Mcflirt doesn't seem to like anything less than 6 dof.
> 2. I just want to confirm that the motion correction is done between
> adjacent slices. So if I make slice 0 the reference volume, then
> slice 1 will be registered to slice 0, and then slice 2 will be
> registered to (the now registered) slice 1. Is this correct?
>
>> For truly "bad" trials, I would model them as a separate EV and then
>> ignore them in the contrasts, as this is the best way to "throw
>> away" trials. However, it doesn't sound like your events are truly
>> "bad", just very long and hence containing more signal power than
>> the short ones. In this case I'd still try the full model without
>> modifications.
>
> The problem is that I expect the subject's decision takes only
> 500-1000 ms, but if the subject falls asleep, the trial can take,
> say, 30 sec. In other words, the subject has his eyes closed for
> 29 sec and does no processing related to the trial. At t = 29 sec,
> the subject opens his eyes and takes 1 sec to press a button. I
> think it would be problematic to model this as a 30 sec event.
>
> I like the idea of modeling them as a separate EV and then ignoring
> them in the contrasts. I think there are two options for doing this.
> One is to model slow and fast trials as independent events, as you
> mentioned above. The other is to model the 30 sec trial as a 1 sec
> "fast" trial that starts at t = 29 sec and ends at t = 30 sec,
> keeping it part of the "fast" EV, and then to add a second EV of
> only these special "corrected" trials and ignore those in the
> contrasts. Does this second method make sense?
> thanks a lot,
>
> jack