Hi SPMers,

I'm working on my own GLM/t-test code, and I was wondering if you could help
me understand how SPM does it.

So it looks like SPM splits every TR into 16 sub-points and interpolates
in between*. This allows for some jitter in the stimulus onsets. So, my first
question is: how is this done? Is the voxel time series interpolated
linearly, with sinc functions, or in some other way?
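For reference, here is how I imagine the 16-bin construction might work for
the regressors; this is just my reading of the approach, assuming a 2 s TR,
made-up onsets, and spm_hrf on the path:

---------
TR    = 2;                   % repetition time in seconds (assumed)
T     = 16;                  % bins per TR, i.e. the 16 sub-points
nScan = 100;                 % number of volumes
dt    = TR / T;              % one bin = 0.125 s

u   = zeros(nScan * T, 1);   % stimulus function on the fine time grid
ons = [10.3 50.7 120.2];     % jittered onsets in seconds (made up)
u(round(ons / dt) + 1) = 1;  % onsets land on fine bins, not whole TRs

hrf = spm_hrf(dt);           % canonical HRF sampled every dt seconds
xf  = conv(u, hrf);          % convolve on the fine grid
X   = xf(1 : T : nScan*T);   % keep one value per TR for the design matrix
---------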

Also, once we do that, do we have to adjust the sensitivity of the t-test?
For example, say we have a voxel with a constant intensity of 42 (not
realistic, but...) plus some white noise. We scanned for 100 TRs, but after
interpolation we have 1600 sample points. The original t-test operates on
100 samples, while the other operates on 1600, and it looks like we get
different p-values (see the Matlab code attached). Am I wrong? If not, we
need to correct for that, right? (I put a guess at a correction after the
snippet below.)

Thanks,
Tony

---------
x       = 42  * ones(100, 1);   % constant signal: 100 TRs at intensity 42
z       = 2e2 * randn(size(x)); % white Gaussian noise

y1      = x + z;                % simulated voxel time series
y2      = resample(y1, 16, 1);  % upsample by 16 (this might not be SPM's way)

[h, p]  = ttest(y1, 0)          % t-test on the 100 original samples
[h, p]  = ttest(y2, 0)          % t-test on the 1600 interpolated samples
---------
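
If a correction is needed, here is my naive guess: keep the degrees of
freedom tied to the 100 real scans, since the 1600 interpolated points can't
carry more information than the originals. (This uses y1/y2 from the snippet
above; maybe SPM handles it differently, e.g. via the error autocorrelation.)

---------
n = numel(y1);                      % 100 independent scans
t = mean(y2) / (std(y2) / sqrt(n)); % t-statistic with effective n, not 1600
p = 2 * tcdf(-abs(t), n - 1)        % two-sided p-value with corrected df
---------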

* Thanks to John O'Doherty

--
Contact info:
http://antoine.caltech.edu