Dear DCM experts, dear Peter,

As mentioned in previous posts, I am trying to implement DCM analyses in our lab (mainly post-hoc DCM). I have read the previous posts on too little explained variance in DCM analyses, but I am still not sure what to do with my data. In order to run post-hoc DCM, I inverted a "fully connected" model and found that the explained variance is, well, extremely low.

My paradigm was scanned on a 1.5T scanner with a TR of 2.5 s and a TE of 45 ms. I performed slice-timing correction during preprocessing.

I have an event-related paradigm with two conditions: 1. Stimulus, 2. Stimulus with extra feature. Every subject performs 40 trials of each condition (value-based decisions). The contrast of condition 1 vs. 2 reveals broad and robust (FWE-corrected) activation in several reward- and attention-related brain areas (the extra feature is supposed to be more rewarding, but it also increases attention, and stimuli of this condition may be more salient). I have already run several PPI analyses and found interesting results. However, in order to analyse the whole network and assess directionality, I want to implement DCM.

I am interested in the intrinsic (endogenous) connectivity of this network and, more importantly, in its modulation by condition 2 (as I found several interesting task-related changes in functional connectivity using PPI).

I extracted the VOIs as described in previous posts (very carefully, and I also included the F-contrast) from the relevant contrast, and I get 40-70% explained variance per VOI per subject.

The GLM set up specifically for the DCM included (as suggested by Peter) the onsets of all pictures from conditions 1 AND 2 (as the driving input), as well as the onsets of the pictures from condition 2 (as the modulating input); I also included time derivatives and nuisance regressors. I am using a batch script, but as a check I also used the GUI for one subject and got the same results - I therefore expect my batch script to be OK.
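
For clarity, here is a minimal sketch of the two conditions in this DCM-GLM as they would look in the batch (variable names and onset vectors are placeholders, not my actual script):

    % Driving input: onsets of ALL pictures (conditions 1 and 2)
    matlabbatch{1}.spm.stats.fmri_spec.sess.cond(1).name     = 'AllPictures';
    matlabbatch{1}.spm.stats.fmri_spec.sess.cond(1).onset    = onsets_all;    % placeholder
    matlabbatch{1}.spm.stats.fmri_spec.sess.cond(1).duration = 0;
    % Modulatory input: onsets of condition-2 pictures only
    matlabbatch{1}.spm.stats.fmri_spec.sess.cond(2).name     = 'Condition2';
    matlabbatch{1}.spm.stats.fmri_spec.sess.cond(2).onset    = onsets_cond2;  % placeholder
    matlabbatch{1}.spm.stats.fmri_spec.sess.cond(2).duration = 0;
    % Time derivatives on
    matlabbatch{1}.spm.stats.fmri_spec.bases.hrf.derivs      = [1 0];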

DCM.a is a fully connected matrix (all 1's); the first matrix of DCM.b is all zeros, and the second is all 1's except for the self-modulations (the diagonal); see the sketch below. I tried one-state and two-state models, and found that two-state actually decreased the explained variance (contrary to some posts I found on the mailing list).

I use mean-centring, and I let the driving input enter at one, two, three, four, or all of the regions. However, no matter what I do, the explained variance is very low (ranging from 0-5%, mostly at the lower end). Maybe this is due to the design (I don't have a nice 2x2 design), but maybe I am also missing something. Stochastic DCM of course improves the explained variance (to 35-50%), but it then decreases the intrinsic/extrinsic connection strengths. Furthermore, Peter suggested first finding out why the deterministic DCM does not explain the variance.
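
To make the specification above concrete, this is essentially how I set up the DCM matrices (a sketch with illustrative values, not my full batch script):

    n = 4;                        % number of VOIs (example value)
    a = ones(n);                  % DCM.a: fully connected
    b = zeros(n,n,2);             % DCM.b: one matrix per input
    b(:,:,2) = ones(n) - eye(n);  % input 2 (condition 2) modulates all but the diagonal
    c = zeros(n,2);
    c(:,1) = 1;                   % input 1 (all pictures) drives the regions (here: all of them)
    d = zeros(n,n,0);             % no nonlinear (D) effects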

Another thing that is interesting to me: when using spm_dcm_fmri_check, the lower left panel shows the intrinsic and extrinsic connection strengths. The "largest connection strength" only uses the positive values - I am therefore wondering whether negative values here are less important, or not good.
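
For reference, I run the diagnostic like this, and (as I understand from the code of spm_dcm_fmri_check) the explained variance it reports is computed from the predicted response and the residuals - a sketch, assuming DCM.y holds the prediction and DCM.R the residuals (the filename is illustrative):

    load('DCM_fully_connected_sub01.mat');   % illustrative filename
    spm_dcm_fmri_check(DCM);                 % diagnostic plots
    PSS = sum(sum(DCM.y.^2));                % predicted sum of squares
    RSS = sum(sum(DCM.R.^2));                % residual sum of squares
    fprintf('Explained variance: %.1f%%\n', 100*PSS/(PSS + RSS));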

Please let me know if you have any suggestions; I would be grateful for any replies. If you need any further information, please let me know - I really appreciate your help.

Kind regards
Laura

PhD Student
University of Bonn







Von: "Zeidman, Peter" <[log in to unmask]>
An: Laura Enax <[log in to unmask]>
Gesendet: 15:34 Freitag, 26.Juni 2015
Betreff: RE: [SPM] DCM post-hoc Output

Dear Laura,
The explained variance is a sanity check, and I agree that 0-5% is not very good. I would recommend trying to work out why you get such low explained variance before switching to stochastic DCM. Would you like to share more details of your experimental paradigm and model to try to diagnose this? Please CC the SPM mailing list :)
 
Best,
P
 


From: Laura Enax [mailto:[log in to unmask]]
Sent: 26 June 2015 07:53
To: Zeidman, Peter
Subject: Re: [SPM] DCM post-hoc Output
 
Dear Peter,
 
thank you again for your very helpful response.
 
If I understood correctly, the variance explained by the fully connected model is very important - before actually running the post-hoc routine. My VOIs all look OK, I think (a mean of ~60% variance explained in the individual VOIs), and I was very careful with the F-contrast and with setting up the DCM-GLM (as I read in previous posts), but the DCM model fits very poorly (0-5% variance explained) - even if I switch between one-state/two-state and fewer/more regions. I then tried stochastic DCM, and the variance explained (as expected) increased a lot. I did not use a driving input for this model (I also read that in a different post).
 
Now I have read that I should do a "sanity check" with spm_dcm_fmri_check(DCM), and I have a question concerning the intrinsic and extrinsic connection strengths: sometimes it is written above the graph that the largest connection strength is about 0.08 or so. This is too low (as I understand from previous posts, below 1/8 Hz is not good, and it is printed in red) - but if I look at the graph, the bars on the negative axis clearly go to 0.2 or 0.3. Is this sufficient, or are negative connection strengths not good?
Thank you so much
 
best
Laura

Von: "Zeidman, Peter" <[log in to unmask]>
An: Laura Enax <[log in to unmask]>; "[log in to unmask]" <[log in to unmask]>
Gesendet: 17:06 Mittwoch, 24.Juni 2015
Betreff: RE: [SPM] DCM post-hoc Output
 
Dear Laura (CC’d mailing list),
All good questions. To be clear, here's how the spm_dcm_post_hoc script works. In each iteration of the algorithm, the 8 connections that contribute least to the model evidence are identified. All possible combinations of disabling these parameters are evaluated, forming 2^8 = 256 models. The script picks the model with the greatest evidence, then repeats. This continues until no more connections can be pruned. The process leads to three outputs:
 
1. Graphs showing the 256 models from the final iteration of this algorithm (in other words, the final ‘model space’).
2. An optimal model for each subject (DCM_opt.mat). This is formed by taking the best model from the process above and pruning the relevant connections from each subject's full model.
3. An average model over subjects (DCM_BPA.mat). This is an average of the optimal models over subjects (FFX). See below for more detail on this.
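
In case it helps, here is a minimal usage sketch (the filenames are illustrative):

    % Collect each subject's estimated full model and run the greedy search.
    % This writes an optimal model per subject plus the group average (BPA).
    P = {'DCM_full_sub01.mat'; 'DCM_full_sub02.mat'; 'DCM_full_sub03.mat'};
    spm_dcm_post_hoc(P);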
 
1. Hillebrand et al. reported: "The winning model was the full model that had all connections and all modulations (Fig. 2B). The next most probable model's probability was 0.27."

Where do I find this information? I can see it in the graph only. Are there cutoffs? The paper mentioned above said that if there is a big model space, these values may be low - but presumably there is still a cutoff?
 
The ‘model posterior’ graph compares the posterior probability of each model in the final model space. These values are not stored, but you can go to Tools -> Data Cursor then click on the relevant bar to get the value. Note that you would not generally look for a ‘winning’ model in this plot. Indeed, as the authors you quote mentioned, evidence tends to be diluted across the models, meaning that it’s hard to get a single ‘winning’ model. Rather, you should split the model space into families of models and compare the evidence for each family. There are two ways to define the families:
 
1. All models containing a given connection (family 1) versus all models not containing that connection (family 2). This is what’s stored in the BPA.mat in DCM.Pp.
 
2. You can define your own custom family function, which is a bit more involved and is described in the help text of the script.
 
(Note that in the BPA.mat file, DCM.Pp stores the results of a family comparison. Whereas, in the optimal model from each subject (DCM_opt.mat), DCM.Pp is the probability that each connection has deviated from its prior. A little confusing, I know.)
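
If it's useful, you can read these family-wise values out of the group file like this (a sketch - the filename is illustrative, and the Pp.A/Pp.B field layout is an assumption here):

    tmp = load('BPA.mat');   % group Bayesian Parameter Average file
    tmp.DCM.Pp.A             % family-wise Pp for endogenous (A) connections
    tmp.DCM.Pp.B             % family-wise Pp for modulatory (B) effects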
 
2. You said: "In BPA.mat, the variable DCM.Pp relates to the fraction of models which have this connection."
Not sure if I understood - does this mean that DCM.Pp (in the BPA.mat file) stores the posterior probabilities of each connection/modulation? Is there an accepted cutoff (e.g., >0.7 or >0.9)? If this value is low, can I "drop" the connection? And if the value is 1, does this mean that this specific connection or modulation has a high posterior probability?
 
It’s like any other probability – there’s no strict cutoff as to what to believe for the BPA’s DCM.Pp. You might want to take 0.95 as strong evidence and a lower value as a trend. For a value of 0.9 you would write “the posterior probability of models containing connection X over models not containing connection X was 0.9”. A value of 1 probably means that all models in the final model space contained the connection.
 
3. If the probabilities for the intrinsic connections are low, is it still valid to look at the modulations? For example, would it be possible to say that the intrinsic, stimulus-driven connections are low, but that the connections are modulated by the additional feature input?
 
Yes.
 
4. In the SPM8 version of the post-hoc routine, it was possible to see a graphical representation of the winning model (with different colors for each region, and with the thickness representing the connection strength) - is this still possible?
 
If you call spm_dcm_post_hoc with no output arguments, or use the GUI, you’ll get this.
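
For example (filenames illustrative):

    % Calling without capturing an output argument (or using the GUI)
    % triggers the graphical review of the optimal model:
    spm_dcm_post_hoc({'DCM_full_sub01.mat'; 'DCM_full_sub02.mat'});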
 
Well, I hope my questions are not too easy - I just want to make sure that everything that I report is correct.
 
None of that was easy! Let me know if you need any further clarification. This will all be made simpler in a future version of the software.
 
Best,
 
Peter
 
 

Von: "Zeidman, Peter" <[log in to unmask]>
An: Laura Enax <
[log in to unmask]>; "[log in to unmask]" <[log in to unmask]>
Gesendet: 8:18 Mittwoch, 17.Juni 2015
Betreff: RE: [SPM] DCM post-hoc Output
 
Dear Laura,
 
The winning model is (graphically) the first model (when looking at the model evidence graph that is produced). However, where can I see the structure of this model (which intrinsic connections, which modulations)? Is the winning model stored somehow in DCM.xY,A? Or is it in Pp.(a,b,c), where there is a value > 0.5?
 
The individual models are not saved by default. Rather, an optimal model is calculated for the group, with each subject's connection estimates stored in their DCM_opt.mat file, and the average model (Bayesian Parameter Average) saved for the group (BPA.mat).
 
Your query led me to revisit the code, and I can see that in the latest version, the BPA.mat is not saving correctly. If you can’t see this file, please use the attached version of the script instead (copy it into your SPM folder).
 
If it’s important for you to identify the individual models, these can be saved by running spm_dcm_post_hoc manually – if you look in the source code you’ll see there’s an option to save all models from the last iteration of the search.
 
Where can I get the posterior probability of the winning model? I think it is stored in p, is that correct?
% DCM.Pp - Model posterior over parameters (with and without)
What does "with and withou" mean?
Which values should the model posterior have so that it is meaningful? I guess for example 0.5 is chance level, but how about 0.501 for example?
 
% DCM.Ep - Bayesian parameter average under selected model
Are these values meaningful if DCM.Pp values are at chance level (all around 0.5)?
 
The posterior probability of each model is not stored. Rather, the function splits the models from the last iteration of the search into two families – those models with connection X and those without connection X (where X is any connection from the model). In BPA.mat, the variable DCM.Pp relates to the fraction of models which have this connection.
 
DCM.Pp in the individual DCM_opt models (the model for each subject) is simply how much each connection has deviated from zero (its prior). In this case, a posterior probability of around 0.5 wouldn't give me much confidence that the value has deviated from zero.
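
As a sketch (the filename and exact field names are illustrative), you could flag which parameters in one subject's optimal model show strong evidence of deviating from the prior:

    tmp = load('DCM_opt_sub01.mat');    % one subject's optimal model
    strong_B = tmp.DCM.Pp.B > 0.95      % modulatory parameters with Pp > 0.95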
 
I hope that helps, let me know if you have any more questions.
 
 
Best,
Peter