Dear Csaba,
The software and algorithms are being improved all the time, so getting numerically identical results across versions is unlikely. (Just a quick correction - the priors were indeed changed a while ago, but that doesn't make the parameters "scaled".)
- Regarding why you're not getting significant results now. Does this conflict with your prior beliefs about the data? Were you expecting certain parameters to be significantly different to zero?
- You may wish to run some diagnostics on your models. To do this, load your DCM into the workspace and run spm_dcm_fmri_check(DCM). Check out the documentation in the source code for this function to see what each plot means, and feel free to ask if you need clarification.
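For concreteness, the diagnostic step above can be sketched as follows (a minimal sketch, assuming SPM is on the MATLAB path; the filename 'DCM_model.mat' is just a placeholder for your estimated DCM file):

```matlab
% Load the estimated DCM into the workspace; the .mat file contains a
% struct named DCM.
load('DCM_model.mat');

% Run the diagnostics described above. See the help text in the source
% code of this function for what each plot means.
spm_dcm_fmri_check(DCM);
```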
Best,
Peter
-----Original Message-----
From: SPM (Statistical Parametric Mapping) [mailto:[log in to unmask]] On Behalf Of Csaba Aranyi
Sent: 09 March 2015 16:11
To: [log in to unmask]
Subject: [SPM] DCM: Validity of inference from different DCM versions
Dear DCM Experts,
Recently I compared the DCM12 algorithm with the previous one, DCM10. I just wanted to see, how changes in DCM changed the fitted parameters. In short, I used 4 VOIs, extracted from the same 4D data, containing the same time-series. I made a fully connected model in DCM10, and DCM12, with the same settings, and here are my results:
DCM10.Ep.A
-0.436731942083027, 0.0710886195216859, 0.334361802727328, 0.161125121831073
0.152978456469344, -0.447070285549794, 0.168819667763104, 0.0938044792889746
0.306400216965812, 0.0729095938924621, -0.441435264748550, 0.151498784100942
0.207215883507947, 0.0660651733853810, 0.217689850913018, -0.446527067408264
DCM12.Ep.A
-0.0511296431393717, 0.0564420904156900, 0.0653811825478968, 0.0595386173114529
0.0495651796231309, -0.0358345091568968, 0.0491832692894838, 0.0441107117936550
0.0620335522214321, 0.0533878132316380, -0.0466023086145177, 0.0556331218091404
0.0543853634808567, 0.0459387385105940, 0.0531135344536163, -0.0395593347900290
DCM10.F
-11308
DCM12.F
-33020
DCM10.Pp.A
0.998253453937451, 0.590532059012988, 0.929832526214948, 0.671405353495882
0.728849093740951, 0.998673864590656, 0.702881412394965, 0.666327221106568
0.967613981853047, 0.592613787135726, 0.997353266484143, 0.668069996315705
0.775508096297969, 0.639002439147583, 0.744212407090645, 0.997657161103303
DCM12.Pp.A
0.711857366232859, 0.708014270572940, 0.804431685399367, 0.756805446336315
0.779523320336037, 0.624020371338705, 0.744033723857407, 0.690782318278164
0.835684349723654, 0.702260676151567, 0.678361199333290, 0.740984340107922
0.804163055804240, 0.668320395415702, 0.754443741340329, 0.645811566975427
Some of the results from this test are not unexpected to me, but they make me think. I know that the parameter priors on connection strength have changed, to reduce the chance of flat-lining the estimated signal, so the values in DCM12.Ep are actually scaled parameters. It is also clear that, with different priors, the estimation procedure yields a different free energy, computed as an approximation to the log evidence of the model, so these models cannot be compared with each other. All of this also changes the exceedance probabilities of the connections.
But while some connections exceed the 0.9 threshold in DCM10, none of the connections qualify in DCM12. My question is: how can I decide whether I can rely on the fitted model or not, and how should I interpret the changes in model parameters from one software version to another?
Greetings, and have a nice day,
Csaba