Dear Shamil,

The variability from run to run arises because SPM uses a sampling method (Gibbs sampling, implemented in the function spm_BMS_gibbs and described in [1]).
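A practical consequence is that you can stabilise the results by fixing MATLAB's random seed before running the comparison, and/or by increasing the number of samples. A minimal sketch, assuming the spm_BMS_gibbs interface from recent SPM releases (lme is a subjects-by-models matrix of log model evidences; the value of Nsamp is my own choice; check your version's help text):

  rng(0);                          % fix the seed so repeated runs agree
                                   % (older MATLAB: rand('state',0))
  % lme: [N subjects x M models] log model evidences (the DCM.F values)
  alpha0 = ones(1, size(lme,2));   % flat Dirichlet prior over models
  Nsamp  = 1e5;                    % more samples -> less run-to-run variability
  [exp_r, xp] = spm_BMS_gibbs(lme, alpha0, Nsamp);
  % exp_r: expected posterior model probabilities; xp: exceedance probabilities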

I don't think it's a good idea to look for a single best model in large model spaces.
This is because of the problem of dilution: probability mass that would be attributed to model X in a small model space becomes shared, or diluted, among similar models in a larger one.

Perhaps the best way forward is Bayesian model averaging (BMA) within subject, followed by t-tests at the group level, to look for connections that are significantly non-zero over the group, or different between groups (e.g. patients versus controls, or one condition versus another).
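For example, after running BMA in each subject you could collect the averaged estimate of one connection of interest across subjects and test it with standard t-tests. A minimal sketch (a_all, a_patients and a_controls are hypothetical variable names for per-subject BMA estimates you have already extracted; ttest/ttest2 need the Statistics Toolbox):

  % a_all: [N x 1] per-subject BMA estimates of one connection, e.g. A(2,1)
  [h, p] = ttest(a_all);                       % non-zero over the group?

  % a_patients, a_controls: the same estimate in each group
  [h2, p2] = ttest2(a_patients, a_controls);   % different between groups?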

Alternatively, one can place models into families (see e.g. [1]) and make inferences about model families (e.g. does the input go to region X or region Y? Are there modulatory connections between hemispheres?).
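In SPM this is implemented in spm_compare_families. A minimal sketch for your 64 models, under the hypothetical assumption that models 1-32 have input to region X and models 33-64 to region Y; the fields of the partition structure are given here from memory and may differ between SPM versions, so check the function's help text:

  % lme: [N subjects x 64 models] log model evidences
  partition.partition = [ones(1,32) 2*ones(1,32)];  % family index per model
  partition.names     = {'input to X','input to Y'};
  partition.infer     = 'RFX';                      % random effects inference
  [family, model] = spm_compare_families(lme, partition);
  family.xp                                         % family exceedance probabilities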

Best, Will.

[1] W. Penny, K. Stephan, J. Daunizeau, M. Rosa, K. Friston, T. Schofield and A. Leff. Comparing Families of Dynamic Causal Models. PLoS Computational Biology, 6(3):e1000709, March 2010.

> -----Original Message-----
> From: SPM (Statistical Parametric Mapping) [mailto:[log in to unmask]]
> On Behalf Of Shamil Hadi
> Sent: 05 March 2012 15:21
> To: [log in to unmask]
> Subject: [SPM] BMS:DCM:rfx
> 
> Hello
> 
> I have 64 models and I am trying to find the best one among the
> alternatives. Every time I run BMS.DCM.rfx I get a different best
> model: in the first run, model 50 is the best; in the second run,
> model 6; in the third run, model 3; in the fourth run, model 59; and
> so on. Why am I getting different models?
> 
> 
> Thanks,
> Shamil