Hi Klaas, Will & other experts
A question about Bayesian model comparison (in SPM2). Over a number of
datasets, we have observed the following phenomenon: in attempting to
compare forward and reciprocal modulatory connectivity between two
regions (for simplicity, say V1 and V4, with reciprocal intrinsic
connections specified, and driving input to V1 only), we observe a
significant forward (V1 -> V4) connection for the forward model, and
significant bidirectional connections for the bidirectional model - but
Bayesian model comparison suggests that the forward model is much better
than the bidirectional model. How do we interpret this finding?
By analogy with the attention dataset example described in the 2004 paper
on Bayesian model comparison, one explanation is that the addition of an
extra connection - albeit a 'significant' one - increases the regional
cost associated with one of our ROIs in the bidirectional model, so that
the data don't fit as well. However, when I look at the regional costs
across subjects, they are remarkably similar for each model at each
region/subject, and the difference between models is driven almost
entirely by the penalty (for example, the raw regional costs in the
structure 'evidence' differ between models by values < 0.1; in bits,
regional costs seem to be very low - all < 1, and for many subjects,
< 0.1). Looking at the residuals, they are remarkably similar for both
models; values in DCM.R differ on the order of 1-2% between the two
models. What can we conclude from the fact that both models -
conceptually rather different - fit the data so well? Would one propose -
as in structural equation modelling - that the additional, backward
connection is unnecessary, adds nothing to the fit, and should thus be
discarded?
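Since the log-evidence in DCM decomposes (approximately) into an accuracy term minus a complexity penalty, the situation described above - near-identical fits, with the comparison decided by the penalty - can be sketched numerically. The values below are hypothetical, chosen only to mirror the reported pattern (accuracy difference < 0.1, penalty difference of one AIC unit), not taken from the actual data:

```python
import math

# Hypothetical log-evidence components (NOT the actual values above):
# log-evidence ~= accuracy - complexity, as in SPM's DCM comparison.
acc_fwd, acc_bid = -10.00, -9.95   # bidirectional fits marginally better
pen_fwd, pen_bid = 15.0, 16.0      # one extra parameter under AIC

log_ev_fwd = acc_fwd - pen_fwd
log_ev_bid = acc_bid - pen_bid

# Bayes factor in favour of the forward model: because the accuracies
# nearly cancel, the penalty difference decides the comparison.
bf = math.exp(log_ev_fwd - log_ev_bid)
print(round(bf, 2))  # -> 2.59
```

With a fit advantage of only 0.05 against a penalty cost of a full unit, the simpler model wins even though the extra connection is individually 'significant' - which matches the pattern you describe.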
So the results of BMC are largely driven by the penalty. I know that AIC
and BIC are supposed to penalise complexity to opposing degrees (AIC
favouring the more complex model, BIC the simpler one), but in the simple
DCMs described above, the reciprocal model carries both a larger BIC
penalty (54.8 > 51.3) and a larger AIC penalty (16 > 15). Why is this the
case? (Although I have to say that the penalty clearly seems to be doing
the right thing, because all subjects show positive evidence for the
forward over the reciprocal model by the BIC criterion, but by the AIC,
only 4 do.)
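On why both penalties grow: in the Penny et al. (2004) formulation, the AIC penalty is simply the number of free parameters p, and the BIC penalty is (p/2)*ln(N) for N data points, so any extra connection increases both penalties; AIC and BIC "oppose" each other only in how heavily each extra parameter is charged, never in sign. A minimal sketch (the parameter count and N are assumed for illustration, not taken from the models above):

```python
import math

def aic_penalty(p):
    """AIC complexity term: one (natural-log) unit per free parameter."""
    return float(p)

def bic_penalty(p, n):
    """BIC complexity term: (p/2) * ln(n), with n data points."""
    return 0.5 * p * math.log(n)

p_fwd, p_bid = 15, 16   # assumed parameter counts (one extra connection)
n = 900                 # assumed number of data points (scans x regions)

# The extra parameter raises BOTH penalties; BIC simply charges more
# per parameter whenever ln(n) > 2:
print(aic_penalty(p_bid) - aic_penalty(p_fwd))        # -> 1.0
print(round(bic_penalty(p_bid, n) - bic_penalty(p_fwd, n), 2))  # -> 3.4
```

This is also consistent with your observation that BIC decides the comparison for every subject while AIC does so for only a few: the larger per-parameter charge pushes more per-subject Bayes factors past the positive-evidence threshold.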
Finally, a different case: now let's imagine a model with 2 experimental
conditions. Again, 2 models: in both, V1 -> V4 is modulated by condition
1, e.g. all visual input (photic); but in model 1 the backward connection
is also modulated by photic, whereas in model 2 the backward connection
is modulated by the condition of interest (test). Model 1 seems to be a
nice control for testing the hypothesis that backward feedback is
specific to our condition of interest. When I compare these models,
however, the positive evidence ratio is zero:zero; all Bayes factors,
whether by AIC or BIC, barely differ from one; regional costs are
consistently low. The penalties are identical for both models, so BMC is
utterly unable to tell them apart. Any ideas about how to interpret this?
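A zero:zero positive evidence ratio follows mechanically once every per-subject Bayes factor sits near 1: neither direction clears the positive-evidence threshold for any subject. A sketch of that counting step (the threshold of e and the per-subject log Bayes factors here are assumed for illustration):

```python
import math

def positive_evidence_counts(log_bfs, threshold=math.e):
    """Count subjects showing 'positive' evidence for model 1 over
    model 2, and vice versa, from per-subject log Bayes factors
    (model 1 relative to model 2)."""
    for_m1 = sum(1 for lb in log_bfs if math.exp(lb) > threshold)
    for_m2 = sum(1 for lb in log_bfs if math.exp(-lb) > threshold)
    return for_m1, for_m2

# With identical penalties and near-identical fits, every log-BF is
# close to 0, so both counts come out zero:
log_bfs = [0.02, -0.01, 0.05, -0.03]   # hypothetical near-zero values
print(positive_evidence_counts(log_bfs))  # -> (0, 0)
```

In that regime the comparison is uninformative by construction: the two models make predictions the data cannot distinguish, rather than one model being positively favoured.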
Many thanks,
Chris
Christopher Summerfield
[log in to unmask]