That's a tricky problem with no good answer. IMHO the best way of
dealing with all these types of 'structured noise' is to remove their
effect during pre-processing (motion correction etc.).
If you actually want to remove certain TRs from your measurements then
I'd recommend not doing it on the filtered functional data - if the
data is so bad that you decide to throw it away, it certainly shouldn't
be used during filtering; e.g. if there is a large intensity dropout at
some TR, this will have a negative impact on temporal filtering etc. In
such cases you might even be better off replacing the data with the
average of the neighbouring TRs prior to pre-processing.
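To make the neighbour-averaging idea concrete, here is a minimal numpy sketch (the function name, the synthetic data, and the edge-TR handling are my own illustrative choices; with real data you would of course load the 4D functional volume first, e.g. with nibabel, which is not shown):

```python
import numpy as np

def replace_tr_with_neighbour_average(data, bad_tr):
    """Replace volume `bad_tr` (0-based, along the last/time axis) of a
    4D array with the mean of its two temporal neighbours. Edge TRs
    fall back to the single existing neighbour."""
    n = data.shape[-1]
    if bad_tr == 0:
        repl = data[..., 1]
    elif bad_tr == n - 1:
        repl = data[..., n - 2]
    else:
        repl = 0.5 * (data[..., bad_tr - 1] + data[..., bad_tr + 1])
    out = data.copy()          # leave the original time series untouched
    out[..., bad_tr] = repl
    return out

# toy example: a 2x2x2 "brain" with 5 time points, TR 2 corrupted
rng = np.random.default_rng(0)
ts = rng.normal(size=(2, 2, 2, 5))
fixed = replace_tr_with_neighbour_average(ts, 2)
```

This is only interpolation, so the replaced TR carries no independent information; the point is simply to stop an intensity spike from ringing through the temporal filter.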
On 13 Apr 2004, at 17:23, Stephen Wilson wrote:
> Thanks for your replies, Jen and Christian. But I want to exclude some
> volumes from the middle of the run. This means they still need to be
> there when FEAT calculates the convolution with the response, whereas
> if I manipulate the input prior to running FEAT, then FEAT will have
> no way of knowing that volumes were removed, so no way of knowing how
> to predict the response.
> There are several reasons I can think of why one might want to do this:
> (1) to exclude volumes in which face/mouth motor tasks are performed
> (which is my present goal); (2) to exclude volumes with scanner
> artifacts; (3) to exclude volumes where there are saccades, or
> swallowing, or such, if we can monitor these.
> It would be nice if there was an option to input a file of time points
> to be excluded...
Christian F. Beckmann
Oxford University Centre for Functional
Magnetic Resonance Imaging of the Brain,
John Radcliffe Hospital, Headington, Oxford OX3 9DU, UK
Email: [log in to unmask] - http://www.fmrib.ox.ac.uk/~beckmann/
Phone: +44(0)1865 222782 Fax: +44(0)1865 222717