Dear FSL list,
I have a simple question about analyzing a block-design fMRI experiment, specifically about using Vince Calhoun's method for bias correction:
"Calhoun et al (2005), fMRI analysis with the general linear model: removal of latency-
induced amplitude bias by incorporation of hemodynamic derivative terms. NeuroImage 22
(2004) 252– 257"
It seems to me that this approach (the signed F-test) makes it possible to correct the amplitude bias for delays in the fMRI response of up to 3 s (~1 timepoint for a 3 s TR).
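For reference, my understanding is that the correction amounts to the "derivative boost": the corrected amplitude is the sign of the canonical-HRF beta times the magnitude of the canonical and temporal-derivative betas, assuming both regressors are scaled to the same norm. A minimal sketch in Python (the variable names are just mine, not FSL's):

import numpy as np

def boosted_amplitude(beta_hrf, beta_deriv):
    # Latency-bias-corrected amplitude estimate:
    # sign of the canonical-HRF beta times the combined magnitude of the
    # canonical-HRF and temporal-derivative betas.
    return np.sign(beta_hrf) * np.sqrt(beta_hrf**2 + beta_deriv**2)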
Is it safe to use this technique if the delay is around 2 timepoints (up to a 6 s delay for a TR of 3 s)? If not, what is the best way to proceed?
Thanks so much!
Hans.