Thanks for the clarification. Things are finally starting to sink in. A
couple of issues still are not clear, however.
1) Why was the transformation of the covariate needed in SPM96/97, but not
in SPM99? More precisely, since there is only a single covariate
condition, there is only a single contrast weight (e.g., +1). How does SPM99
end up subtracting the baseline condition from the active condition as
stated in your model, given the absence of contrast weights of +1
and -1, as is done for condition effects?
In other words, a difference between conditions could be coded as a single
covariate of -1 (baseline) and +1 (active) in order to have an active -
baseline subtraction. However, if both covariates are positive, then how is
the subtraction achieved? Wouldn't you have to multiply the baseline
covariate by -1 and the active covariate by +1? Or is this done
automatically in SPM99, but not performed in SPM96/97?
2) If the baseline covariate is 0 for all subjects, does this impact the
model? What if the baseline covariate is non-zero, but the same for all
subjects?
3) If one used the covariate difference transformation with a
covariates-only design, would this give the same answer as the models you
have now described?
sg
-----Original Message-----
From: Andrew Holmes [mailto:[log in to unmask]]
Sent: Monday, June 05, 2000 5:00 AM
To: Grant, Steven (NIDA)
Cc: SPM discussion list
Subject: RE: please help..
Dear Steven,
Thanks for your note - reminded me that I'd forgotten to discuss the
interpretation of the main difference effect in David's model:
At 18:44 04/06/2000 -0400, Grant, Steven (NIDA) wrote:
| In previous postings, you stated that when you wish to do a
| regression on a covariate associated with a baseline and an active
| condition, one must transform the covariate so that SPM will
| properly correlate the difference in the covariate with the
| difference in the images. This transformation consisted of
| 1) computing the differences in the covariate
| 2) mean centering the difference
| 3) halving the mean centered difference
| 4) multiplying the baseline covariate by -1 and the active covariate by +1
|
| This procedure was required in SPM99 so that the +1 contrast on the
| covariate would produce subtraction of model [1] for each scan and
| thereby result in model [2] :
|
| [1] Y_iq = A_q + C * s_iq + B_i + error
|
| [2] (Y_i2 - Y_i1) = D + C(s_i2-s_i1) + error
|
| You do not mention doing this covariate transformation in your reply
| to David Keator. Is this because SPM99 does not require such a
| transformation?
The transformation is not necessary if you're only interested in the
covariate effect. Computing each subject's difference, mean centering it
(across subjects), and applying minus half and plus half of the mean
centered difference to the baseline and active scans respectively gives
these models:
[1'] Y_iq = A_q + C * (+/-0.5)*(s_i2-s_i1 - mean(s_i2-s_i1)) + B_i + error
     (minus half for the baseline scan q=1, plus half for the active q=2)
[2'] (Y_i2 - Y_i1) = D' + C(s_i2-s_i1 - mean(s_i2-s_i1)) + error
                   = D + C(s_i2-s_i1) + error
...where
D' = D + C*mean(s_i2-s_i1)
So, the models differ only in the constant (intercept) term, through mean
correction of the difference covariate in model [2']. D is the difference
at a covariate difference of zero, D' the difference at the mean covariate
difference across subjects.
The covariate slope C is the same, as is its estimate, which is unique,
and both models give the same inference for the effect of covariate, which
is what David was asking for.
| Also, one would expect that the baseline/active effect would highly
| correlated with the covariate effect since presumably the covariate
| would be affected by the drug administration.
True, but we're interested in the difference of the covariate.
| Therefore, wouldn't model [2] greatly underestimate the correlation
| between the difference in the covariate and the difference in the
| scan?
No. Remember that the Pearson correlation coefficient discounts overall
differences between the two variates (by mean correcting the two variates).
Recall that a test of non-zero correlation is equivalent to a test of
non-zero slope in a simple linear regression (which is what [2] and [2']
are).
| Since the covariate and the scan condition are not orthogonal (i.e.
| the drug induces a change in both the scan and the covariate), it
| is not appropriate to try to partition these two effects
| independently of each other.
Lack of orthogonality does not necessarily imply that two effects cannot be
looked at separately.
We *want* to look at the covariate effect after removing the condition
effect common to all subjects, since this gives us a test equivalent to a
test of non-zero correlation of difference of covariate with difference of
scan scores. Mean correction (or not) of the covariate (i.e. the difference
covariate of models [2] & [2']) makes no difference to this.
Mean correction of the covariate does affect the test of the difference
effect, since this is testing the difference after removing effects due to
the covariate. (Contrasts [-1 +1 0] & [+1 -1 0] for models [1] & [1'].)
Different centering schemes will give different interpretations. With no
mean centering we're looking at D in model [2], i.e. we're looking at the
difference at a covariate difference of zero. With mean centering (of the
difference) we're looking at D' in model [2'], i.e. the difference at the
mean covariate difference.
----------------
Hope this clarifies the situation, and thanks for raising it.
-andrew
PS: See also comments by Jon Raz, who originally pointed out to me that the
transformation of the covariate into +/- half centered differences was not
required:
http://www.mailbase.ac.uk/lists/spm/1999-01/0139.html
in response to:
http://www.mailbase.ac.uk/lists/spm/1998-09/0102.html
| -----Original Message-----
| From: Andrew Holmes [mailto:[log in to unmask]]
| Sent: Thursday, June 01, 2000 12:47 PM
| To: Keator, David
| Cc: [log in to unmask]
| Subject: Re: please help..
|
|
| Dear David,
|
| At 17:25 16/05/2000 -0700, Keator, David wrote:
| | I'm trying to do simple correlations with SPM99..will someone please
| | help me, this should be very simple.
| |
| | I have 2 PET scans per subject, one at baseline and one on drug. I
| | have 2 clinical rating scores, one at baseline and one after drug.
| | I want to look at increases in GMR after drug correlated with
| | increases in the clinical rating. I also want to look at negative
| | correlations. What model should I use and how do I define the
| | contrasts??
|
| PET/SPECT models: Multi-subject, conditions and covariates. For each
| subject, enter the two scans as baseline and then drug. One covariate,
| values are the clinical rating scores in the order you selected the scans,
| i.e. baseline score for subject 1, drug score for subject 1, baseline
| score for subject 2, drug score for subject 2, &c. No interactions
| for the covariate. No covariate centering. No nuisance variables. I'd
| use proportional scaling global normalisation, if any. (You could use
| "straight" AnCova (with grand mean scaling by subject), but SPM99 only
| offers you AnCova by subject, which here would leave you with more
| parameters than images, and a completely unestimable model).
|
| Your model (at the voxel level) is:
|
| [1] Y_iq = A_q + C * s_iq + B_i + error
|
| ...where:
| Y_iq is the baseline (q=1) / drug (q=2) scan on subject i
| (i=1,...,n)
| A_q is the baseline / drug effect
| s_iq is the clinical rating score
| C is the slope parameter for the clinical rating score
| B_i is the subject effect
|
| ...so the design matrix has:
| 2 columns indicating baseline / drug
| 1 column of the covariate
| n columns indicating the subject
|
| You will have n-2 degrees of freedom.
|
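As an illustration (plain Python, hypothetical rating scores; this sketches only the column layout, not SPM's actual code), the design matrix for model [1] can be assembled like so:

```python
# Build the model [1] design matrix: rows ordered (subject 1 baseline,
# subject 1 drug, subject 2 baseline, ...); columns [A_1, A_2, covariate,
# B_1, ..., B_n].
scores = [(10.0, 15.0), (12.0, 13.0), (9.0, 14.0)]  # hypothetical (baseline, drug)
n = len(scores)

X = []
for i, subject_scores in enumerate(scores):
    for q, s in enumerate(subject_scores):            # q = 0 baseline, 1 drug
        cond = [1.0 if q == c else 0.0 for c in range(2)]  # condition columns
        subj = [1.0 if i == j else 0.0 for j in range(n)]  # subject columns
        X.append(cond + [s] + subj)

print(len(X), len(X[0]))  # 2n rows, 2 + 1 + n columns
```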
| Taking model [1] and subtracting the q=1 equation from the q=2
| equation, you get the equivalent model:
|
| [2] (Y_i2 - Y_i1) = D + C(s_i2-s_i1) + error
|
| ...where D = (A_2 - A_1), the difference in the baseline & drug main
| effects.
|
| (Note that this only works when there are only two conditions and one
| scan per condition per subject!)
|
| I.e., a simple regression of the difference in voxel value, baseline to
| drug, on the difference in clinical scores: exactly what you want.
|
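The cancellation is easy to verify numerically. The following sketch (plain Python, invented parameter values, zero noise) generates both scans from model [1] and confirms that their difference follows model [2], with the subject effect B_i dropping out:

```python
# Simulate model [1] without noise and check that differencing the two
# scans yields model [2] exactly, with D = A_2 - A_1 and B_i cancelling.
A1, A2 = 2.0, 5.0                 # hypothetical condition effects
C = 0.8                           # hypothetical covariate slope
B = [1.0, -0.5, 0.3, 2.1]         # hypothetical subject effects B_i
scores = [(10.0, 15.0), (12.0, 13.0), (9.0, 14.0), (14.0, 20.0)]

D = A2 - A1
ok = True
for Bi, (s1, s2) in zip(B, scores):
    Y1 = A1 + C * s1 + Bi         # baseline scan, model [1]
    Y2 = A2 + C * s2 + Bi         # drug scan, model [1]
    ok = ok and abs((Y2 - Y1) - (D + C * (s2 - s1))) < 1e-12
print(ok)
```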
| ----------------
|
| Entering [0 0 1] (or [0 0 -1]) as an F-contrast will test the null
| hypothesis that there is no covariate effect (after accounting for
| common effects across subjects), against the alternative that there is
| an effect (either positive *or* negative). I.e., the SPM{F} will pick
| out areas where the difference baseline to drug is correlated with the
| difference in clinical scores.
|
| [0 0 +1] and [0 0 -1] as t-contrasts will test against one sided
| alternatives, being a positive & negative correlation (respectively) of
| baseline to drug scan differences with difference in clinical scores.
| Since you're interested in both, you should interpret each at a halved
| significance level (double the p-values). This will give you the same
| inference as the SPM{F} (which is the square of the SPM{t}'s), but with
| the advantage of separating +ve & -ve correlations in the glass brain
| for you.
|
| ----------------
|
| Incidentally, the variance term here incorporates both within- and
| between-subject variability, and inference extends to the
| (hypothetical) population from which you (randomly!) sampled your
| subjects.
|
| ----------------
|
| Hope this helps,
|
| -andrew
|
|
|
| + - Dr Andrew Holmes mailto:[log in to unmask]
| | Robertson Centre for Biostatistics ( ,) / _)( ,)
| | Boyd Orr Building, University Ave., ) \( (_ ) ,\
| | Glasgow. G12 8QQ Scotland, UK. (_)\_)\__)(___/
| + - http://www.rcb.gla.ac.uk/