
Dear Haoran,



Many thanks for your e-mail. In general, one can see using classical statistics that a difference in evidence is not evidence for a difference. For example, to compare the means of two groups you would use a two-sample t-test, which produces one t-statistic; this provides evidence for a difference. Contrast this with performing a one-sample t-test in each group and comparing the two t-statistics (a difference in evidence). In this case, one does not know whether the difference in the t-statistics is due to a difference in the means or a difference in the standard errors.
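This can be made concrete with a toy numerical example (the numbers below are invented for illustration, not from any real data set). Both groups have exactly the same mean effect, but one is four times noisier, so the one-sample t-statistics differ markedly while the two-sample test correctly finds nothing:

```python
import numpy as np
from scipy import stats

# Toy example: both groups have exactly the same mean effect (0.5),
# but group_b is four times noisier than group_a.
base = np.linspace(-1.0, 1.0, 9)   # zero-mean "noise" pattern
group_a = 0.5 + 0.5 * base         # mean 0.5, small spread
group_b = 0.5 + 2.0 * base         # mean 0.5, large spread

# Two one-sample t-tests give very different t-statistics (a difference
# in evidence), purely because the standard errors differ...
t_a, _ = stats.ttest_1samp(group_a, 0.0)
t_b, _ = stats.ttest_1samp(group_b, 0.0)

# ...whereas the two-sample t-test finds no evidence for a difference
# in the means (here the sample means are identical by construction).
t_diff, p_diff = stats.ttest_ind(group_a, group_b)
```

Here t_a is exactly four times t_b, yet the two-sample t-statistic is zero: the "difference in evidence" reflects only the standard errors.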



This means that to compare the connectivity of two groups one has to test the null hypothesis that there are no differences. Usually, people use the summary statistic approach to random effects inference. In DCM, this simply means performing classical t-tests or multivariate ANOVAs on the parameter estimates encoding connection strengths for each subject. The null hypothesis is that all subjects have the same architecture (or model) but that the groups differ quantitatively in terms of the connection strengths; the inference is then in relation to random effects between subjects on the strength of the connections.
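A minimal sketch of this summary statistic approach, using made-up per-subject connection-strength estimates (the numbers and variable names are illustrative stand-ins for values extracted from each subject's fitted DCM, not SPM output):

```python
import numpy as np
from scipy import stats

# Hypothetical subject-level estimates of one connection strength,
# e.g. the same coupling parameter taken from each subject's DCM.
controls = np.array([0.42, 0.35, 0.51, 0.38, 0.47, 0.40])
patients = np.array([0.21, 0.28, 0.15, 0.30, 0.22, 0.25])

# Summary statistic approach to random effects inference: a classical
# two-sample t-test on the subject-wise parameter estimates, testing
# the null hypothesis of no group difference in connection strength.
t, p = stats.ttest_ind(controls, patients)
```

With several connections one would assemble a subjects-by-connections matrix per group and use a multivariate ANOVA in the same spirit.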



One could also use the posterior distribution over connection strengths based on the Bayesian parameter average from each group; however, this is not used so frequently.



The alternative approach would be to pool all the data from each group to form grand averages (if this is possible). In other words, reduce the data from both groups into two time series. One then treats the group effect as a condition effect in a standard DCM analysis and examines the effect of group on the parameter estimates (as encoded by B parameters). This is a fairly common approach in DCM for EEG or MEG using ERPs. Here, the null hypothesis is that the (B) parameters encoding a group effect on connection strengths are zero.  One can then use Bayesian model selection in the usual way to test this hypothesis.
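As a sketch of the final inference step, illustrative log-evidences for the two grand-average models (B parameters fixed at zero versus free) can be converted into a log Bayes factor and posterior model probabilities; the numbers below are invented:

```python
import numpy as np

# Invented log-evidences for the two grand-average models:
# index 0 - group (B) effects fixed at zero; index 1 - B parameters free.
log_ev = np.array([-1203.4, -1198.1])

# Log Bayes factor in favour of a group effect, and posterior model
# probabilities under a uniform prior over the two models.
log_bf = log_ev[1] - log_ev[0]
post = np.exp(log_ev - log_ev.max())
post /= post.sum()
```

A log Bayes factor above about three is conventionally taken as strong evidence; here it would favour the model with nonzero B parameters, i.e. a group effect on connection strengths.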



Note that these approaches to inferences about group differences cast the differences in terms of quantitative changes in connection strengths, as opposed to qualitative changes in the connectivity architecture. To make inferences about different architectures or models it is, in principle, possible to add a hierarchical level to the random effects Bayesian model selection, to test the hypothesis that models were selected at random from two different distributions for the two groups. We have not actually implemented this, but I believe that Jean Daunizeau and colleagues are currently working along these lines.



With very best wishes - Karl





________________________________
From: Haoran Li [[log in to unmask]]
Sent: 29 May 2013 07:09
To: Friston, Karl
Cc: E-mail list
Subject: Re:Re: [SPM] A question of DCM for model selection

Dear Karl,
The issue you are talking about is interesting. You said "The difference in the evidences for each model in the two groups is not interpretable. This is because a difference in evidence is not evidence for a difference." However, it may be true that different people (such as patients with a mental disease vs. normal controls) have different network patterns. If two groups prefer different models, can I then make the plausible inference that their information processing patterns may be different? If not, I wonder whether DCM can contribute to this issue?

Thanks a lot!

--
Haoran LI (M.Sc)
Brain Imaging Lab,
Research Center for Learning Science,
Southeast University, Nanjing, P.R.China

At 2013-05-27 19:31:37, "Friston, Karl" <[log in to unmask]> wrote:

Dear Lingling Hua,



Do not worry about the different Bayesian model selection results from the control and patient groups. When comparing two groups, using the parameter estimates from DCM, you have to use the same model in both groups. This should be the model with the greatest evidence when pooling the data over both groups (or the Bayesian parameter average over the same models). This ensures that any subsequent group comparisons are unbiased (because the Bayesian model selection does not know about the two groups).
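This pooling step can be sketched as follows, with invented subject-wise log-evidences (fixed-effects pooling: summing log-evidences over all subjects from both groups, then taking the winning model):

```python
import numpy as np

# Invented subject-wise log-evidences: rows are subjects from BOTH
# groups pooled together, columns are the candidate models.
log_ev = np.array([
    [-512.3, -509.8],
    [-498.1, -500.2],
    [-505.6, -503.9],
    [-520.0, -517.4],
])

# Sum log-evidences over subjects for each model and pick the winner;
# this single model is then used for BOTH groups, so later group
# comparisons of its parameters are unbiased.
pooled = log_ev.sum(axis=0)
best_model = int(np.argmax(pooled))
```

Note the selection sees only the pooled sample, never the group labels, which is what keeps the subsequent comparison unbiased.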



The difference in the evidences for each model in the two groups is not interpretable. This is because a difference in evidence is not evidence for a difference.



I hope that this helps.



With best wishes - Karl





________________________________
From: Lingling Hua [[log in to unmask]]
Sent: 27 May 2013 08:04
To: Friston, Karl
Subject: A question of DCM for model selection


Dear expert:



I have some questions about using dynamic causal modelling (DCM) for MEG. I separated the subjects into two groups: one of patients (22 people) and one of controls (20 people). After constructing the models, I found the best model through Bayesian model selection (BMS). However, the results may be a little arguable. For example, over all subjects the best model was model 2, and for the patient group it was also model 2, while for the control group it was model 1. Under this circumstance, I still selected model 2 as the common model to compare parameters of effective connectivity between the two groups. Since the best model for the controls wasn't the same as for the patients, my first question is whether this will influence the results. Then, because the sizes of the two groups weren't the same, my second question is whether the best model over all subjects would become model 1 if I increased the number of controls.



Thank you in advance for your help.



Lingling Hua

5.27.2013