Dear George,
First of all, contrast vectors like [-3 -1 1 3] do not test for linear changes per se. You simply combine the weighted beta images in a certain way and then test whether the combination -3*A - 1*B + 1*C + 3*D differs from 0. On its own, that is probably not very meaningful.
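To make that concrete, here is a toy numerical sketch (the beta values are made up for illustration): the "linear" contrast can come out clearly positive even though the condition means are not monotonically increasing.

```python
import numpy as np

# Hypothetical condition betas for one subject (illustrative numbers only).
betas = np.array([1.0, 5.0, 2.0, 6.0])  # A, B, C, D -- note B > C, not monotone
c = np.array([-3, -1, 1, 3])            # the "linear" contrast vector

value = c @ betas  # -3*1 - 1*5 + 1*2 + 3*6
print(value)       # 12.0 -> clearly positive despite the dip from B to C
```

So a significant linear contrast alone does not establish a step-by-step increase.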
Concerning your hypothesis, it depends on whether you want to know whether the increase is gradual or linear, and how you define "gradual" in the first place.
1) For the former, you would indeed have to contrast conditions A and B, B and C, and C and D. If A is sig. smaller than B, B sig. smaller than C, and C sig. smaller than D, then activation increases significantly from A to D (sort of "strictly increasing", to borrow the term from calculus). If some of the comparisons produce no sig. difference (e.g. A sig. different from B, and B from C, but not C from D), it's up to you how to interpret that. The pairwise comparisons could be conducted within the ANOVA model. Adjusting for multiple comparisons as suggested by Andy is preferable: if you're only interested in increases, e.g. .05/3 FWE or .001/3 uncorrected for three tests, otherwise divide by 6. Note, though, that we hardly ever control for the number of statistical parametric maps constructed from different contrast vectors (as opposed to correcting at the voxel or cluster level, which controls the number of tests within a single statistical parametric map). With two conditions A and B we usually just run two one-sided t-tests A > B and B > A thresholded at e.g. .001 uncorrected plus cluster correction, rather than 2x .001/2 uncorrected plus cluster correction.
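As a rough illustration of point 1), here is a minimal Python/SciPy sketch of the three one-sided pairwise tests with a Bonferroni-adjusted threshold, using simulated subject-level betas (all numbers are invented; in practice you would extract these from your beta/contrast images):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sub = 20
# Simulated betas for conditions A-D, one row per subject (illustrative only).
data = rng.normal(loc=[0.0, 0.5, 1.0, 1.5], scale=1.0, size=(n_sub, 4))

alpha = 0.05 / 3  # Bonferroni adjustment for the three one-sided increase tests
pairs = [(0, 1), (1, 2), (2, 3)]  # (A,B), (B,C), (C,D)
for i, j in pairs:
    # One-sided paired t-test: is condition j larger than condition i?
    t, p = stats.ttest_rel(data[:, j], data[:, i], alternative='greater')
    print(f"cond{j} > cond{i}: t = {t:.2f}, one-sided p = {p:.4f}, "
          f"sig. at {alpha:.4f}: {p < alpha}")
```

If all three survive the adjusted threshold, you have the "strictly increasing" pattern described above.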
2) Testing whether a change is linear means testing whether the differences between adjacent levels are equal. The corresponding contrast images [-1 1 0 0], [0 -1 1 0], [0 0 -1 1] can easily be constructed at the single-subject level and then tested with e.g. separate paired t-tests or within a one-way within-subject ANOVA with three levels. Note, however, that the absence of a significant difference between pairs does NOT mean they are equal; you merely fail to reject the null hypothesis (= no difference), i.e. you have no support for them being sig. different. You might want to search for "equivalence tests"; they are quite common when it comes to e.g. comparing pharmaceutical drugs. Usually they deal with two different groups, but the principle can be applied to repeated measurements as well, and they could be calculated on a voxel-by-voxel basis or for certain a-priori regions of interest by averaging across a number of voxels. However, you typically have to pre-specify a margin that defines the range of equivalence, which might be very difficult for fMRI (leaving aside whether equivalence tests like the TOST = "two one-sided tests" are that useful at all).
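For point 2), here is a minimal sketch of the TOST idea applied to paired samples of per-subject difference values (e.g. B-A vs. C-B contrast values extracted from a region of interest). The data and the margin are invented for illustration; choosing the margin is exactly the hard part mentioned above.

```python
import numpy as np
from scipy import stats

def paired_tost(x, y, margin):
    """Two one-sided tests (TOST) for equivalence of paired samples.

    Declares x and y "equivalent" if the mean paired difference lies
    within +/- margin; returns the larger of the two one-sided p-values.
    """
    d = np.asarray(x) - np.asarray(y)
    n = d.size
    se = d.std(ddof=1) / np.sqrt(n)
    df = n - 1
    t_lower = (d.mean() + margin) / se   # test against H0: diff <= -margin
    t_upper = (d.mean() - margin) / se   # test against H0: diff >= +margin
    p_lower = stats.t.sf(t_lower, df)
    p_upper = stats.t.cdf(t_upper, df)
    return max(p_lower, p_upper)

# Simulated per-subject contrast values (illustrative numbers only).
rng = np.random.default_rng(1)
diff_ab = rng.normal(1.0, 0.2, 25)  # pretend B - A
diff_bc = rng.normal(1.0, 0.2, 25)  # pretend C - B
p = paired_tost(diff_ab, diff_bc, margin=0.3)
print(f"TOST p = {p:.4f}  (equivalent at .05: {p < 0.05})")
```

Rejecting both one-sided tests supports equivalence within the pre-specified margin; it does not prove exact equality.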
3) A mixture of 1) and 2): instead of testing B > A (corresponding to B - A > 0), you could test whether B is larger than A by at least a certain amount d, corresponding to B > A + d, followed by C > B + d and D > C + d. While this is not directly possible within SPM, you can simply shift the beta images by adding a constant. This way you wouldn't test for linear increases as in 2), but you would be able to conclude whether the changes exceed a certain size, which might be more interesting than "sig. different" as in 1). Again, however, it will be very difficult to define such a d.
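Point 3) amounts to a one-sample t-test on B - A - d against zero. A small sketch with simulated betas and an arbitrary d (all values made up):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
beta_a = rng.normal(0.0, 1.0, 20)           # simulated per-subject betas, condition A
beta_b = beta_a + rng.normal(1.2, 0.5, 20)  # condition B, exceeding A by ~1.2 on average
d = 0.5                                     # illustrative minimal increase of interest

# Test B > A + d by shifting: one-sided one-sample t-test on (B - A - d).
t, p = stats.ttest_1samp(beta_b - beta_a - d, 0.0, alternative='greater')
print(f"t = {t:.2f}, one-sided p = {p:.4f}")
```

A significant result here says the increase from A to B exceeds d, not merely that it differs from zero.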
Now, having said that it is probably rather difficult to test for linear changes, maybe you should turn to Bayesian inference. Since it seems you want to find support for a null hypothesis (linear change = identical differences between pairs of conditions = no sig. differences between the differences), the paper "Bayesian t tests for accepting and rejecting the null hypothesis" by Rouder et al. (2009, Psychon Bull Rev) might be a good first overview.
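If you go that route, the JZS Bayes factor from that paper can be computed directly from a one-sample/paired t statistic. Here is a sketch following the integral in Rouder et al. (2009); this is my own implementation, so treat it as a starting point rather than a validated tool.

```python
import numpy as np
from scipy import integrate

def jzs_bf01(t, n, r=1.0):
    """JZS Bayes factor BF01 (null over alternative) for a one-sample or
    paired t statistic with sample size n, following Rouder et al. (2009).
    r is the Cauchy prior scale on effect size (the paper's default r = 1).
    BF01 > 1 favours the null, BF01 < 1 favours the alternative."""
    nu = n - 1
    null = (1 + t**2 / nu) ** (-(nu + 1) / 2)

    def integrand(g):
        # Marginal likelihood under the alternative, integrating over g
        # (inverse-chi-square prior on g, Cauchy prior on effect size).
        return ((1 + n * g * r**2) ** -0.5
                * (1 + t**2 / ((1 + n * g * r**2) * nu)) ** (-(nu + 1) / 2)
                * (2 * np.pi) ** -0.5 * g ** -1.5 * np.exp(-1 / (2 * g)))

    alt, _ = integrate.quad(integrand, 0, np.inf)
    return null / alt

print(jzs_bf01(t=0.5, n=20))  # small t: BF01 > 1, some evidence for the null
print(jzs_bf01(t=4.0, n=20))  # large t: BF01 < 1, evidence against the null
```

Applied to the three difference-of-differences contrasts from 2), a consistently large BF01 would give you positive evidence for "no change in step size", which the classical non-significant t-test cannot provide.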
Hope this helps a little,
Helmut