Dear James,
I understand that you find this result counter-intuitive, but I would
like to reassure you that the maths in spm_dcm_average is
correct. For brevity, I won't go into the details of the equations here
- these are just the standard equations for computing the mean and
precision (inverse variance) of a Gaussian posterior when combining a
Gaussian prior with a Gaussian likelihood. You will find these
equations in spm_dcm_average (lines 92-100) and in any textbook on
Bayesian statistics (e.g. sections 2.2 and 2.3 in Lee (1997), "Bayesian
Statistics"). The basic idea is that the posterior mean is a
weighted combination of your prior mean and the likelihood (datum),
where the weighting is given by the relative precisions (inverse
variances) of the prior and the data. When you deal with
multivariate Gaussians (as in DCM), the covariances between different
parameters are critical. I attach an example using 2D Gaussians that
is computed with the following code:
% prior
m_prior = [1, 1]';              % mean
sigma_prior_inv = [10 3; 3 1];  % precision matrix (inverse of covariance matrix)
sigma_prior = inv(sigma_prior_inv);

% likelihood
m_data = [6, 1]';               % mean
sigma_data_inv = [10 -3; -3 1]; % precision matrix
sigma_data = inv(sigma_data_inv);

% compute posterior
sigma_post = inv(sigma_data_inv + sigma_prior_inv);
m_post = sigma_post * (sigma_data_inv*m_data + sigma_prior_inv*m_prior)
If you run this code, you will see that even though both the prior and
the likelihood have positive means in both dimensions, the posterior
mean in the second dimension (which would correspond to the MAP
estimate of your second parameter in a DCM) is strongly
negative. The attached figure (where I have plotted the contours of the
two 2D Gaussians) shows that the particular covariance structure of
the two Gaussians in this example is responsible. This result may appear
counter-intuitive because we are used to "arithmetic" averages in one
dimension, but it is perfectly correct.
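As a cross-check, the same standard computation can be reproduced outside MATLAB; the following is a sketch in Python/NumPy (not part of SPM; variable names simply mirror the code above):

```python
import numpy as np

# prior: mean and precision (inverse covariance)
m_prior = np.array([1.0, 1.0])
P_prior = np.array([[10.0, 3.0], [3.0, 1.0]])

# likelihood: mean and precision
m_data = np.array([6.0, 1.0])
P_data = np.array([[10.0, -3.0], [-3.0, 1.0]])

# posterior precision is the sum of the two precisions;
# the posterior mean is the precision-weighted combination of the means
P_post = P_prior + P_data
m_post = np.linalg.solve(P_post, P_prior @ m_prior + P_data @ m_data)

print(m_post)  # [ 3.5 -6.5]
```

Note that the second component of m_post is -6.5, despite both input means being positive in that dimension: the off-diagonal precision terms couple the two parameters, so the update in one dimension drags the other.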
I hope this helps. With very best wishes,
Klaas
At 15:57 14/09/2006, James Rowe wrote:
>Dear Karl, Klaas and colleagues,
>
>I am puzzled by the output from spm_dcm_average. I understand from
>your message 024483 that it does not give the average of models in
>the sense of the arithmetic means of each of the
>connections/modulations in matrices A, B and C (although this
>approach has been advocated on the list by yourself and Will in the
>past, suggesting that one perform one-sample t-tests on the non-zero
>values of interest in matrices A,B,C).
>
>In contrast, spm_dcm_average uses a Bayesian FFX analysis across the
>group, to estimate the overall posterior mean for each
>connection/modulation. But how does this explain the following
>discrepancy in a very simple model (two regions, X and Y, connected
>reciprocally and with intrinsic self-connections; area X receives
>input from an external visual stimulus; no modulatory variables)?
>
> From the C matrices of 18 subjects, we estimated the strength of the
> influence of the visual input on area X. For all subjects, pC was
> 1.000. The actual value in C (for the input onto area X) ranged
> from -0.06 to +0.40, mean +0.07, and was positive in 15 of the 18
> subjects.
>
>But the spm_dcm_average value for this connection was -0.03 (pC 1.000).
>I do not understand how the model average could suggest a
>negative value for this connection when nearly all of the individual
>subjects showed positive effects (and the nature of the task and the
>simple model would also lead one to expect a positive value).
>
>Any ideas?
>
>thanks in advance,
>
>James Rowe
>
>(PS: this question arose with a more realistic, complex model with
>bilinear inputs etc., but those are more tedious to describe in text
>- the issue is still seen with the most basic model outlined above)