Hi Anthony,
1. Given that it is unlikely that a single model will emerge as a clear 'winner' from the Bayesian model selection (BMS) performed at the end of the post-hoc/Bayesian model reduction procedure, would it be valid to apply Bayesian model averaging over a large set (perhaps even all 256) of the surviving/reduced models for all subjects? I've done this with my data, and it seems to reveal a sensible and theoretically meaningful model.
Yes. Glad you got a nice result!
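For reference, the core weighting step in fixed-effects Bayesian model averaging can be sketched as follows. This is an illustrative Python sketch, not SPM code; the free energies and parameter values are hypothetical, and flat model priors are assumed:

```python
import numpy as np

# Hypothetical free energies (log-evidence approximations) for 4 reduced models
F = np.array([-100.0, -98.0, -97.5, -105.0])

# Posterior model probabilities: softmax of the free energies
# (subtracting the max first for numerical stability), assuming flat priors
p = np.exp(F - F.max())
p /= p.sum()

# Hypothetical posterior means of one connection strength under each model
theta = np.array([0.0, 0.4, 0.5, 0.1])

# Bayesian model average of that parameter: probability-weighted mean
bma = p @ theta
```

In SPM this weighting (and the sampling over subjects for random-effects BMA) is handled for you, but the principle is the same: models with higher free energy dominate the average, so implausible models contribute little.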
2. The DCM that I ran through the model reduction procedure has 13 VOIs. I've recently got reviews back on the paper, and one of the main criticisms was this large number of VOIs and thus the complexity of the model, which in the view of one reviewer "leads to lots of problems". That reviewer also said "DCM is a better technique when applied to a fairly small number of regions (3-5 perhaps)." Is this criticism warranted, do you think? If so, should I really be trying to limit my model to 8 or fewer VOIs, as is implied by what is allowable via the GUI and the code that produces the graphs?
No, I don't think there are inherent problems in having large DCMs. Thoughts on this:
- You could ask your reviewer to clarify the specific problems that concern them and go from there - this will make it easier to provide reassurance.
- In recent versions of DCM, a dimensionality reduction will automatically be performed if you have more than DCM.options.nmax regions. This is specifically designed to enable large models to be estimated, and was implemented for this paper: http://www.sciencedirect.com/science/article/pii/S1053811912011780
- An important consideration is the number of parameters - i.e. the number of connections between your regions. You need your full model estimation to converge before you can do the post-hoc reduction.
- Post-hoc reduction will remove parameters that are too highly correlated with one another, and thus don't contribute to the free energy.
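To illustrate the parameter-count point above: in a fully connected DCM, the endogenous (A-matrix) connections alone grow quadratically with the number of regions. A rough sketch (real models also have modulatory B and driving C parameters, and need not be fully connected):

```python
def n_intrinsic(n_regions):
    """Endogenous (A-matrix) connections in a fully connected DCM:
    every region connects to every region, including self-connections."""
    return n_regions * n_regions

# Reviewer's suggested scale vs. the 13-region model in question
small = n_intrinsic(4)   # 16 A-matrix parameters
large = n_intrinsic(13)  # 169 A-matrix parameters
```

So a 13-region model carries roughly an order of magnitude more endogenous parameters than a 4-region one, which is why convergence of the full model (rather than region count per se) is the thing to check.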
Best,
Peter