Dear Helmut
I think polynomial regressors could work equally well as, or better than, the DCT highpass filter. The reason I personally stick to the DCT highpass filter is that if you know at which frequencies the scanner drift is more pronounced than the white-noise level, it is easy to pick the cutoff, and from that you also know what the minimal frequency of your paradigm should be. The order of the DCT filter is calculated from the specified cutoff, taking the duration of the experiment into account; I think the order of the polynomial regressors should be found in a similar way. The edge effects of the DCT filter will depend on the white-noise-to-drift ratio and on how the cutoff compares to the total duration of the scan. If the white noise is pronounced, the edge effects will be hard to observe in real data; similarly, if the total duration of your experiment is long compared to the chosen cutoff period, the effects will also be harder to observe. Have a look at the code below to verify this. In our 2006 NeuroImage paper we demonstrated, using the SPMd toolbox, that the highpass filter does what it is supposed to do: it leaves the frequencies in the passband white and normally distributed.
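To make the order calculation concrete, here is a small sketch; the formula below is, to my knowledge, the rule spm_filter uses to derive the number of DCT basis functions from the cutoff, but please double-check against your SPM version:

```matlab
% Number of DCT highpass regressors from the cutoff (HParam, in s),
% the TR (RT, in s) and the number of scans (n). To my knowledge this
% mirrors the rule used inside spm_filter:
n      = 500;                  % number of scans
RT     = 1;                    % TR in seconds
HParam = 128;                  % cutoff period in seconds
k = fix(2*(n*RT)/HParam + 1);  % number of DCT basis functions; here 8
```

An analogous rule, mapping the chosen cutoff and run length to a regressor count, could be used to set the polynomial order.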
Best
Torben
Torben Ellegaard Lund
Associate Professor, PhD
Center of Functionally Integrative Neuroscience (CFIN)
Aarhus University
Aarhus University Hospital
Building 10G, 5th floor, room 31
Noerrebrogade 44
8000 Aarhus C
Denmark
Phone: +45 7846 4380
Fax: +45 7846 4400
http://www.cfin.au.dk
[log in to unmask]
[log in to unmask]
clear K
% Set the level of white noise. Choose WhitenoiseLevel=0.01 to observe a clear edge
% effect of the DCT highpass filter
WhitenoiseLevel=0.05;
% Set the length of the time course. Decrease this value to see a more pronounced edge effect, increase it to see a reduced effect:
l=500;
% Create a timecourse with a linear drift:
y=(1:l)'/l;
y=y-mean(y);
% Add some noise to the time series
yn=y+WhitenoiseLevel*randn(l,1);
% Put the two time series in one matrix
yTot=[y yn];
% Define the standard 128 s DCT highpass filter
K.row=1:l;
K.RT=1;
K.HParam=128;
% Filter our time series
yTotFiltered=spm_filter(K,yTot);
% Create the matrix X0 in the K structure:
K=spm_filter(K);
% Find filter betas
beta=K.X0\y;
figure(1)
% plot the fitted time course (without noise)
subplot(1,2,1),plot(0:l-1,K.X0*beta,'r.',0:l-1,y,'b-')
title('128s DCT fit of linear drift')
% plot the filtered timecourse with noise
subplot(1,2,2),plot(0:l-1,yTotFiltered(:,2),'r-',0:l-1,yTotFiltered(:,1),'b-')
title('Linear drift with noise filtered with 128s DCT filter')
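For comparison (my own sketch, not part of the demo above): one could fit the same linear drift with orthogonalised polynomial regressors of matching order, using plain MATLAB without SPM:

```matlab
% Orthogonalised polynomial drift model with the same number of
% regressors as the 128 s DCT set (sketch; parameter choices match
% the demo above)
l = 500; RT = 1; HParam = 128;
k = fix(2*(l*RT)/HParam + 1);    % matching number of regressors
t = linspace(-1,1,l)';
X0poly = ones(l,1)/sqrt(l);      % constant term, unit norm
for m = 1:k-1
    p = t.^m;
    p = p - X0poly*(X0poly'*p);  % orthogonalise against lower orders
    X0poly = [X0poly p/norm(p)];
end
y = (1:l)'/l; y = y - mean(y);   % the linear drift from above
betaPoly = X0poly\y;
figure(2)
plot(0:l-1, X0poly*betaPoly, 'r.', 0:l-1, y, 'b-')
title('Polynomial fit of linear drift (matching order)')
```

Because the polynomial basis is not forced to have zero slope at the ends of the run, its edge behaviour differs from the DCT fit in figure 1, which is the point Worsley et al. raise below.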
> On 16/06/2015 at 16.56, H. Nebl <[log in to unmask]> wrote:
>
> Dear everyone,
>
> I've recently looked at the literature on accounting for low-frequency noise/scanner drifts based on polynomial regressors added to the design matrix. Several papers suggest going with one or another adaptive procedure that estimates the drift from the data, with one or another method concluded to be superior, which is difficult to interpret without extensive replications. Turning to the default approaches, there is actually not much on the combination of cosine functions / the DCT in SPM vs. polynomial regressors (Skudlarski et al., 1999, NeuroImage; Tanabe et al., 2002, NeuroImage, 10.1006/nimg.2002.1053; Friman et al., 2004, 10.1016/j.neuroimage.2004.01.033). For example, Worsley et al. (2002, NeuroImage, 10.1006/nimg.2001.0933) conclude:
>
> The cosine transform basis functions used in SPM'99 have zero slope at the ends of the sequence, which is not realistic, so we use a polynomial drift.
>
> (as e.g. illustrated in the book chapter "The general linear model" by Kiebel & Holmes), but also see https://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=spm;97150730.03 by Bas Neggers and Will Penny's reply. Now, assuming one wants to turn to polynomial regressors, there are different ways to implement them, e.g. Legendre polynomials with symmetry properties (e.g. https://en.wikipedia.org/wiki/Legendre_polynomials#/media/File:Legendrepolynomials6.svg ), or one could just go with y = x^m, with x = 1, ..., n reflecting the number of the TR and m reflecting the order. Sometimes people seem to use orthogonalization, sometimes they do not. This will result in different sets of regressors, accounting for different effects. Now, are there any (good) reasons to prefer one approach over the other?
>
> Best
>
> Helmut