Dear Brian,
This is quite subjective (and certainly not specific to DCM). I am sure that Karl is right to suggest 10% as a rule of thumb. Personally, if someone showed me a model (be it a simple linear regression, an SPM or a DCM) that explained only 5% of a signal, I might think that model isn't tremendously interesting - especially given that with DCM, we do a lot of cleaning of the timeseries first. Of course, even 10% explained variance shows that there is an awful lot about that signal which isn't understood. And, as I mentioned, explained variance doesn't control for model complexity.
So I'm afraid I don't have a good answer as to what % variance explained gives a "good" model - perhaps others have an opinion?
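For concreteness, the percentage can be computed from the predicted timeseries and the residuals. A minimal sketch in Python (the SPM code itself is MATLAB; the PSS/(PSS+RSS) form below is my assumption of what spm_dcm_fmri_check computes, not a transcription of it):

```python
import numpy as np

def percent_variance_explained(predicted, residual):
    """Percent variance explained by a model's predicted timeseries.

    Assumed form: 100 * PSS / (PSS + RSS), where PSS and RSS are the
    predicted and residual sums of squares. This mirrors, but is not
    copied from, the diagnostic in spm_dcm_fmri_check.
    """
    pss = float(np.sum(np.asarray(predicted) ** 2))  # predicted sum of squares
    rss = float(np.sum(np.asarray(residual) ** 2))   # residual sum of squares
    return 100.0 * pss / (pss + rss)

# Toy check: a perfect fit explains 100%; equal-power noise explains 50%.
y_hat = np.sin(np.linspace(0.0, 10.0, 200))
print(percent_variance_explained(y_hat, np.zeros_like(y_hat)))  # 100.0
print(percent_variance_explained(y_hat, y_hat.copy()))          # 50.0
```

Note that this number says nothing about how many parameters were used to achieve the fit, which is why it cannot substitute for the free energy in model comparison.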
Best,
Peter.
-----Original Message-----
From: briannh [mailto:[log in to unmask]]
Sent: 24 June 2014 09:01
To: Zeidman, Peter
Cc: [log in to unmask]
Subject: Re: [SPM] Variance explained by DCM
Dear Peter,
when you use the % variance explained as a sanity check, do you have a rule of thumb for how much this "should" be? Friston writes 10% in the spm_dcm_fmri_check script, but in our experience it is rather 5-10% (with a few above). It is also quite dependent on the region, but that makes sense, given that noise levels vary across regions. Is it primarily to show that the models didn't flatline?
Best regards, Brian
On 2014-06-24 08:33, Zeidman, Peter wrote:
> Dear Tali,
> Regarding your first question - how, in general, additional parameters
> could reduce the variance explained by a model. It's easy to think of
> a case where adding a parameter could reduce explained variance. Say I
> add an A-matrix connection parameter to a DCM which is not there in
> the biological system, and this additional connection disrupts the
> dynamics of the whole network. Call this the 'full'
> model. It will give lower explained variance than a nested model
> without this erroneous connection.
>
> Keep in mind that explained variance is just the correlation between
> your predicted timeseries and the observed timeseries. It is not a
> robust measure of model fit as it doesn't take the complexity of the
> model (effective number of parameters) into account. I like to use
> explained variance as a sanity check - to confirm that my predicted
> timeseries "look like" my observed data rather than, say, having
> flat-lined. But we don't use it for model comparison for the reason I
> explained (free energy deals with this by reflecting the accuracy
> minus complexity).
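That limitation is easy to demonstrate outside DCM: for nested linear models fit by least squares, in-sample explained variance can only increase as parameters are added, so it cannot penalize complexity on its own. A minimal Python sketch (illustrative only, not SPM code - polynomial regression standing in for a family of nested models):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 50)
y = 2.0 * x + rng.normal(0.0, 0.5, x.size)  # data from a simple linear model

def r_squared(y, y_hat):
    """Proportion of variance explained (in-sample)."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

# In-sample explained variance never decreases as polynomial terms are
# added, even though the extra parameters only fit noise.
scores = []
for degree in (1, 3, 9):
    coeffs = np.polyfit(x, y, degree)
    scores.append(r_squared(y, np.polyval(coeffs, x)))

assert scores[0] <= scores[1] <= scores[2]
```

The degree-9 fit "explains" the most variance yet is the worst model of the three, which is exactly the gap that the free energy's complexity term closes.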
>
> As for post-hoc DCM: it's not clear whether the 16 models on which you
> performed full estimation were the same models as those on which you
> used post-hoc. Post-hoc DCM disables parameters (by setting their
> prior variance to 0) where doing so won't make much difference to the
> model evidence. That is, where the evidence of the full and reduced
> model is approximately the same. If you're finding a difference
> between full estimation and post-hoc on the same models, it could be
> that the model estimation in one approach is falling into a local
> minimum and not finding the optimal solution.
>
> Best,
> Peter.
>
> Peter Zeidman, PhD
> Methods Group
> Wellcome Trust Centre for Neuroimaging
> 12 Queen Square
> London WC1N 3BG
> [log in to unmask]
>
>
>
> -----Original Message-----
> From: SPM (Statistical Parametric Mapping) [mailto:[log in to unmask]]
> On Behalf Of Tali Bitan
> Sent: 23 June 2014 20:35
> To: [log in to unmask]
> Subject: [SPM] Variance explained by DCM
>
> Dear DCM experts
>
> Previous posts suggested looking at the variance explained by models
> in DCM (DCM.R - using DCM Diagnostics or spm_dcm_fmri_check) as an
> indication of model convergence in DCM_post_hoc. As far as I
> understand, this assumes that the variance explained by the full model
> is the upper limit for the variance explained by any reduced
> model.
>
> We have now specified and inverted 16 DCM models to use in a standard
> BMS (not post-hoc). When looking at the variance explained by these
> models we find that in some individuals, there are models that explain
> more variance than the full model.
>
> My question is:
> 1) How can that be possible? How can the inclusion of additional
> parameters reduce the variance explained by the model?
> 2) If this is possible - how can this criterion (variance explained by
> the full model) be used as an exclusion criterion for participants in
> DCM_post_hoc?
>
> Thanks
> Tali Bitan