On Fri, 24 Jul 2009 14:20:05 +0200, Rozemarijn Houtkamp
<[log in to unmask]> wrote:
>Two very naive questions:
>In my experiment subjects only rarely made a mistake (2-5% errors)
>and therefore I'm not interested in analyzing error-related activity.
>But how should I treat these trials when specifying the first level
>design? At the moment I just don't specify any events in case of an
>error but perhaps it's more sound to have an error regressor?
If you don't have an error regressor, those trials simply become part of the
implicit baseline. The advantage of having an error regressor is that it may
explain some variance that would otherwise go into the error term. The "cost"
is one degree of freedom, which for fMRI isn't much.
>Furthermore I know that people tend to discard "the first few scans"
>to allow for saturation effects. Am I correct in assuming that this
>means the first few scans of each run (of about 10-12 minutes)? And
>does it make a difference whether the discarding is done before or after
>preprocessing?
To add to Marko's good advice:
Yes, though sometimes the scanner can be set up to do the discarding for you.
To find out whether that happens on your scanner, and how many scans you
should discard, ask your local MRI physicist.
You can also get a feel for how many scans are needed by plotting the global
signal (I mean, in the entire volume, not just in the brain) versus time.
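A minimal sketch of that plot, using nibabel and matplotlib (the file name is
just a placeholder):

import nibabel as nib
import matplotlib.pyplot as plt

img = nib.load('run1_4d.nii')             # 4D EPI time series (placeholder)
data = img.get_fdata()                    # x, y, z, time
global_signal = data.reshape(-1, data.shape[-1]).mean(axis=0)

plt.plot(global_signal, marker='o')
plt.xlabel('scan number')
plt.ylabel('mean intensity over the whole volume')
plt.show()

The first few points typically sit well above the plateau while the signal is
still saturating; those are the scans to drop.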
The most important reason I can think of for doing the discarding before
preprocessing is that, if you do slice timing correction, the intensity in the
to-be-discarded scans will "leak" into the non-discarded scans, because the
correction uses sinc interpolation.
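A toy demonstration of that leakage (synthetic data, not real slice-timing
code): sinc interpolation mixes every time point into every resampled value,
so a saturated first scan shifts the interpolated values of later scans.

import numpy as np

n = 40
clean = np.ones(n)
spiky = clean.copy()
spiky[0] = 5.0            # "saturated" first scan, much brighter

def sinc_shift(x, shift):
    # Whittaker-Shannon interpolation: resample x at times i + shift
    k = np.arange(len(x))
    return np.array([np.sum(x * np.sinc(i + shift - k)) for i in range(len(x))])

diff = sinc_shift(spiky, 0.4) - sinc_shift(clean, 0.4)
print(diff[:6])
# nonzero values at scans 1, 2, ... show the first scan bleeding into
# the later, non-discarded scans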
>Any advice is much appreciated!