Dear Victoria
> First, thank you Peter for putting together your recent PEB tutorial and papers; they have been enormously helpful for me to better understand and actually implement DCM.
You're welcome! These are all good questions.
> Now that I have gone through the tutorials and am working on my own task and resting state fMRI data I have a number of remaining questions I would appreciate help from the community on.
>
>1. Low Explained Variance
> Does explained variance depend on the model? Would the full and reduced PEB models have different explained variance outcomes for subjects? For my event-related task design I specified a full DCM first using some of the conditions of interest from the task. Should I expect >10% variance for subjects even for the full model (i.e. A=ones(x,y), B=ones(x,y,z), C=ones(y,z))?
>
> Also, in the case of resting state DCM would I still expect 10% to be good threshold for explained variance?
The goodness of the model is the model evidence p(y|m) - the probability of observing the data y given the model m. The free energy is an approximation of the log of the model evidence, which can be decomposed into accuracy minus complexity. You can think of the accuracy part as the explained variance, and the complexity part is (effectively) how many parameters there were in the model. More specifically, it's how far the parameters had to move away from their prior values to explain the data. So yes, explained variance does depend on the model.
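To make that decomposition concrete, the free energy can be written (schematically) as accuracy minus complexity:

```latex
F = \underbrace{\mathbb{E}_{q(\theta)}\!\left[\ln p(y \mid \theta, m)\right]}_{\text{accuracy}}
  \;-\; \underbrace{D_{\mathrm{KL}}\!\left[\,q(\theta) \,\big\|\, p(\theta \mid m)\,\right]}_{\text{complexity}}
```

Here q(theta) is the approximate posterior over the parameters. The KL term grows as the posterior moves away from the prior, which is exactly the "how far the parameters had to move" intuition.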
You asked about explained variance of the PEB model. There are no tools for calculating that at the moment - so I think you mean explained variance of the individual subjects' DCMs? In which case, you should just be checking the explained variance of the full model from each subject. This is a useful diagnostic to make sure that something hasn't gone wrong with the model fit. How much explained variance is enough to be interesting in the task DCM? There's no right answer - so as a rule of thumb, we say less than 10% isn't interesting and might suggest a problem. (You can imagine times when that criterion wouldn't be fair - e.g. with a sparse design with occasional brief events, where most of the timeseries is expected to be noise.)
With resting state DCM, I would expect a high level of explained variance. A better diagnostic might be how much of the variance is ascribed to the neural part of the model - i.e. how big are the neural parameters. This is shown in the bottom left of spm_dcm_fmri_check(GCM) - the title goes red if all neural connections are very small.
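As a sketch of how you might check each subject's full model - assuming GCM is a cell array of estimated DCMs, and the usual SPM convention that after estimation DCM.y holds the predicted response and DCM.R the residuals:

```matlab
% Explained variance of each subject's (full) first-level DCM.
% Assumes: GCM is an N x 1 cell array of *estimated* DCMs (SPM on the path).
for s = 1:numel(GCM)
    DCM = GCM{s};
    PSS = sum(sum(DCM.y.^2));   % predicted sum of squares
    RSS = sum(sum(DCM.R.^2));   % residual sum of squares
    fprintf('Subject %d: %2.1f%% explained variance\n', s, 100*PSS/(PSS+RSS));
end

% Or use the built-in diagnostic, which also shows the largest connection
% strengths (bottom left panel):
spm_dcm_fmri_check(GCM);
```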
> 2. Probability of Model
> I'm a bit confused on the outputs being produced when I run spm_dcm_peb. How would I interpret example output below:
> VL Iteration 1 : F = -3877623.13 dF: 0.0000 [-3.75]
> VL Iteration 2 : F = -3876869.71 dF: 753.4163 [-3.50]
>
> Also is there a significance to the number of iterations? If it reaches 256 - max iterations - does that mean it didn't converge? And if it doesn't converge, does that mean I should treat the BMC results differently or that I need to increase the max iterations?
This is the model fitting process - the variational Laplace algorithm. It's a gradient ascent on the free energy. The key thing to look at is the difference in free energy between the current step and the previous one (dF: 753.4163). That change in free energy should be big to start with and then shrink towards zero as the scheme converges. If dF is still appreciable when it reaches iteration 256, the scheme hasn't converged, and you might want to increase the max number of iterations to give it more time. You can set this with M.maxit.
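For example, a minimal sketch of raising the iteration limit - assuming GCM is your cell array of estimated first-level DCMs and X is your between-subjects design matrix:

```matlab
% Give variational Laplace more iterations if dF is still appreciable
% when the iteration limit is reached.
M       = struct();
M.X     = X;            % between-subjects design matrix (first column ones)
M.maxit = 512;          % raise the maximum number of VL iterations
PEB     = spm_dcm_peb(GCM, M, {'A'});   % PEB over the A-matrix parameters
```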
> 3. Interpreting Significant Results
> I do have results showing a significant association between a covariate and the BMA of the PEB (using Free Energy - 95%).
> What do parameters larger than 1 mean? I see on the connectivity matrix the results are maxed at 1, but some of my parameters have values greater than abs(1), though I know from the tutorial the units are supposed to be Hz. Is there a different way I am supposed to interpret the correlation matrix?
This is covered in detail in the tutorial, but to summarise the key points, I have just added a section to the Wiki - please see https://en.wikibooks.org/w/index.php?title=SPM/Parametric_Empirical_Bayes_(PEB)&wteswitched=1#Interpreting_the_output . Note that in (one-state) DCM for fMRI, the self-connections are log scaling parameters that multiply up or down a default value of -0.5Hz, whereas the between-region connections are in units of Hz. Please see part 1 of the tutorial paper for details on this.
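To illustrate the self-connection convention described above - p here is just a hypothetical posterior estimate:

```matlab
% Converting a self-connection estimate to Hz (one-state DCM for fMRI).
% Self-connections are log scaling parameters that multiply a default
% value of -0.5 Hz; between-region connections are already in Hz.
p    = 0.3;               % example posterior estimate of a self-connection
rate = -0.5 * exp(p);     % effective self-inhibition in Hz (more negative
                          % than -0.5 means more inhibited than the prior)
```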
> How ought I factor in diagnostics (low explained variance/parameter correlation) and the info on model probabilities that outputs when I run spm_dcm_peb_bmc for accepting significant results?
Accuracy and complexity are accounted for automatically when performing Bayesian model comparison, because they are the two terms making up the free energy.
> In the case of parameters that are very small effect sizes (<.05) is that a sign these results are likely erroneous and should these be ignored?
No, there is no reason why a small parameter should be erroneous. It might be that you consider them too small to be interesting though :-)
> 4. A matrix in task vs RS DCM
> Using the same ROIs and mostly the same subjects, I did a stochastic RS DCM and a task-based DCM and got different results for the BMA of PEB_A, and am wondering how to think of the endogenous connectivity of a task? (My fast event-related task is an associative learning task of visual stimuli, with trials consisting of a prediction stage, feedback stage, and fixation.)
You may well get different results from these different models and estimation schemes. I recommend using DCM for CSD, as it has been shown to be more reliable than stochastic DCM (see the paper by Razi et al.).
> Also, in your PEB DCM tutorial part2 - it says average connectivity across experimental conditions (A-matrix) - does it mean across only included first level conditions or the whole fMRI task?
All the conditions included in the model.
> 5. Choosing which connections to include in the B matrix
> In the PEB tutorial only the self-modulations and a few connections were chosen. Is it inappropriate to estimate a full PEB on the fully connected B matrix for each condition and then pursue the BMR/BMA?
No, that's fine - you can switch on all connections in the B-matrix for each condition and then use BMR/BMA to prune them.
> 6. DCM-Specification: Centre Input
> Does selecting yes for this option center your regressors or do you still manually have to change the values?
It centres the regressors at the first level - i.e. the columns from your first-level GLMs. For the second level (i.e. PEB covariates, e.g. age, gender etc.), you need to mean-centre these manually at present.
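A minimal sketch of manual mean-centring at the second level - assuming age is an N x 1 vector of subjects' ages:

```matlab
% Mean-centre second-level covariates by hand before building the PEB
% design matrix. The first column (ones) models the group mean, so the
% first PEB parameter can be read as the average connectivity.
N        = numel(age);
M.X      = [ones(N,1), age - mean(age)];   % column 2: mean-centred age
M.Xnames = {'Mean', 'Age'};                % optional labels for the output
```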
> 7. Group Differences
> Are group differences able to be captured well in the PEB framework? In the tutorial I saw it says it assumes the same architecture - I study psychosis so wasn't sure how good that assumption is. Also I wish to confirm that categorical/dummy-coded covariates are the proper way to test group differences in PEB DCM?
Yes, the PEB framework is equally applicable to discrete and continuous between-subject effects. For examples of dummy coding, please see https://en.wikibooks.org/w/index.php?title=SPM/Parametric_Empirical_Bayes_(PEB)&wteswitched=1#Example_design_matrices
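As a sketch of a two-group comparison - assuming group is an N x 1 vector with 1 for patients and 0 for controls, and GCM is your cell array of estimated DCMs:

```matlab
% Dummy coding a two-group design for PEB. Mean-centring the group
% regressor lets column 1 be interpreted as the overall mean and
% column 2 as the patients-minus-controls difference.
N        = numel(group);
M.X      = [ones(N,1), group - mean(group)];
M.Xnames = {'Mean', 'Patients - Controls'};
PEB      = spm_dcm_peb(GCM, M, {'A'});
```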
Best
Peter