Dear Chris
The main issue with large numbers of nodes in a DCM is that it generally also involves a large number of parameters. If you only have a limited number of measurements, then you run out of degrees of freedom, so it is challenging to get confident estimates of those parameters. One solution is to place additional constraints on the connectivity parameters - e.g. using the functional connectivity parameters as priors. This is done automatically in DCM if the number of nodes exceeds DCM.options.maxnodes (I think the default is 8).
With PEB, the model is a General Linear Model, where the number of parameters is the number of included DCM connections multiplied by the number of covariates. So if your DCMs have 10 nodes, and you include all connections from the A-matrix in the PEB, then adding one more covariate will add 100 more parameters. This may be a lot if you haven't got many observations (subjects). Also, you can get dilution of evidence effects when performing model comparisons. So we recommend only putting the minimal set of DCM parameters needed into the PEB.
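The arithmetic Peter describes can be sketched as follows. This is a minimal illustration, not part of SPM; the function name and signature are hypothetical, and it simply assumes a full A-matrix (every node connected to every node, including self-connections).

```python
def peb_param_count(n_nodes, n_covariates, n_connections=None):
    """Rough count of second-level (PEB) parameters.

    n_connections: number of DCM connections included in the PEB.
    If None, assume the full A-matrix (n_nodes squared), as in the
    example above. Hypothetical helper, not an SPM function.
    """
    if n_connections is None:
        n_connections = n_nodes ** 2  # full A-matrix
    # One GLM parameter per included connection per covariate
    return n_connections * n_covariates

# 10 nodes, full A-matrix: each added covariate adds 100 parameters
print(peb_param_count(10, 2) - peb_param_count(10, 1))  # prints 100
```

This makes it easy to see why restricting the PEB to a minimal set of connections matters: with 30 nodes and a full A-matrix, each covariate adds 900 parameters.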
Best
Peter
-----Original Message-----
From: SPM (Statistical Parametric Mapping) [mailto:[log in to unmask]] On Behalf Of Chris Pae
Sent: 08 September 2018 07:25
To: [log in to unmask]
Subject: [SPM] A question on PEB with large number of nodes
Dear DCM experts,
I’m working on a large-scale DCM.
I have been able to get several DCM models with 20-30 (or more) nodes.
According to previous DCM studies, it is known that larger numbers of nodes make accurate estimation of DCM parameters more difficult.
I wonder if PEB also has this problem.
Any comments and advice would be really appreciated.
Thank you very much for your time, and I wish you a great day! :)
Chris