I've recently looked at the literature on accounting for low-frequency noise/scanner drift with polynomial regressors added to the design matrix. Several papers propose one or another adaptive procedure that estimates the drift from the data, each concluding that its own method is superior, which is difficult to interpret without extensive replication. Turning to the default approaches, there is actually not much comparing SPM's DCT (cosine) basis set with polynomial regressors (Skudlarski et al., 1999, NeuroImage; Tanabe et al., 2002, NeuroImage, 10.1006/nimg.2002.1053; Friman et al., 2004, NeuroImage, 10.1016/j.neuroimage.2004.01.033). For example, Worsley et al. (2002, NeuroImage, 10.1006/nimg.2001.0933) conclude:
The cosine transform basis functions used in SPM'99 have zero slope at the ends of the sequence, which is not realistic, so we use a polynomial drift.
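To make the SPM side concrete, here is a minimal numpy sketch of an SPM'99-style DCT drift basis, following my reading of SPM's spm_dctmtx / spm_fMRI_design (the helper name and the cutoff keyword are my own, and the details of SPM's rule for the number of basis functions may differ). The underlying continuous cosines cos(pi*k*t/T) have zero derivative at t = 0 and t = T, which is exactly the end behaviour Worsley et al. object to:

```python
import numpy as np

def dct_drift_basis(n_scans, tr, cutoff=128.0):
    """SPM'99-style discrete cosine drift basis (illustrative helper).

    Column k is sqrt(2/N) * cos(pi * (2n + 1) * k / (2N)) for n = 0..N-1;
    the number of functions follows (my reading of) SPM's rule
    K = fix(2 * N * TR / cutoff + 1). The constant term (k = 0) is
    dropped, since the design matrix usually carries its own intercept.
    """
    n = np.arange(n_scans)
    k_max = int(2 * n_scans * tr / cutoff + 1)  # fix() for positive values
    cols = [np.sqrt(2.0 / n_scans) * np.cos(np.pi * (2 * n + 1) * k / (2 * n_scans))
            for k in range(1, k_max)]
    return np.column_stack(cols)

X = dct_drift_basis(n_scans=200, tr=2.0)  # 200 scans, TR = 2 s, 128 s cutoff
print(X.shape)                            # -> (200, 6): six low-frequency regressors
```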
The shape of these basis functions is illustrated, e.g., in the book chapter "The general linear model" by Kiebel & Holmes; but also see https://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=spm;97150730.03 (Bas Neggers' post and Will Penny's reply).

Now, assuming one wants to turn to polynomial regressors, there are different ways to implement them: for example, Legendre polynomials with their symmetry properties (e.g. https://en.wikipedia.org/wiki/Legendre_polynomials#/media/File:Legendrepolynomials6.svg), or simply raw powers y = x^m, with x = 1, ..., n indexing the scans (TRs) and m the polynomial order. Sometimes people seem to orthogonalize these regressors, sometimes they do not. The choices yield different individual regressors, each capturing different components of the drift (a sketch of the variants I have in mind follows below). Now, are there any (good) reasons to prefer one approach over the other?
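To make the polynomial variants concrete as well, here is a sketch of the three implementations mentioned above; the function name, the choice of QR for the orthogonalization, and the demo settings are my own, not taken from any package:

```python
import numpy as np
from numpy.polynomial import Legendre

def poly_drift_regressors(n_scans, order, kind="legendre"):
    """Polynomial drift regressors up to `order`, constant term excluded.

    kind='legendre': Legendre polynomials P_1..P_order evaluated on [-1, 1]
                     (symmetric/antisymmetric about the session midpoint);
    kind='raw':      plain powers x**m with x = 1..n_scans;
    kind='ortho':    the raw powers orthonormalized against each other and
                     against the constant via QR (same column space as 'raw').
    """
    if kind == "legendre":
        x = np.linspace(-1.0, 1.0, n_scans)
        return np.column_stack([Legendre.basis(m)(x) for m in range(1, order + 1)])
    x = np.arange(1, n_scans + 1, dtype=float)
    raw = np.column_stack([x ** m for m in range(1, order + 1)])
    if kind == "raw":
        return raw
    # QR on [constant | raw powers], then drop the constant column:
    q, _ = np.linalg.qr(np.column_stack([np.ones(n_scans), raw]))
    return q[:, 1:]

# Condition number of [intercept | drift regressors] for each variant:
for kind in ("legendre", "raw", "ortho"):
    X = poly_drift_regressors(n_scans=200, order=3, kind=kind)
    D = np.column_stack([np.ones(200), X])
    print(kind, X.shape, f"cond = {np.linalg.cond(D):.3g}")
```

One thing I notice when playing with this: with the intercept in the model and the same maximal order, all three variants span the same column space, so the fitted drift and the residuals are identical; what differs is the conditioning of the design matrix (raw powers get bad quickly, as the condition numbers above show) and what each individual regressor, and hence each parameter estimate, represents. Unless I am overlooking something, that would make the choice mostly one of numerical stability and interpretability.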