Thanks for the extra detail, Brian. The rationale behind the design sounds sensible. Yes, it could be a power issue, given that it's an event-related design. It's not wrong to use stochastic DCM, but here are some suggestions for trying to get deterministic DCM to work first.
To make sure I've understood correctly: you have three regions, which I'll call A, B and C. Regions A and B both receive a driving input of prediction error, and each has a connection to region C. The connections A->C and B->C are each modulated by valence and stimulus type (block regressors). I have a couple more questions:
- You say prediction error is a driving input. I assume this is a timeseries with an event (stick function) for each trial that showed a strong prediction error? How many events does this typically give you per subject? If too few, that would explain why the DCM won't estimate.
- The occasional driving input into regions A and B may not be enough to sustain the dynamics. Is there any way to strengthen this input? It would be good to reduce the number of parameters too. Here are two examples of how this could be achieved:
1. Create a regressor called Task, with events for every trial. Use this to drive region A. Now create two regressors to use as modulatory inputs: valence x prediction error and stimulus type x prediction error. (Each of these is an interaction, formed by pair-wise multiplying the existing regressor vectors.) Use these to modulate the A->C connection. If you do this, it's probably best to ensure that DCM.options.centre = 0, and to stay with deterministic rather than stochastic DCM. And if it makes sense, leave out region B for now - simplifying is good. (This approach is elegant, but the risk is that the two modulatory inputs could become collinear.)
2. Alternatively, use non-linear DCM, although not exactly as you suggest. Create a regressor called Task, with events for all trials. Use this to drive region A. Add a non-linear connection from region A onto the A->C connection (representing prediction error). As before, also have the A->C connection modulated by valence and stimulus type. Of course, this applies only if it matches your hypotheses about region A's role.
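To illustrate option 1, here is a rough sketch of forming the two interaction regressors and checking them for collinearity before putting them into the DCM. This is generic numpy code, not SPM code, and the regressor contents (lengths, block structure, PE values) are made up for illustration - in practice they come from your design matrix:

```python
import numpy as np

# Illustrative trial-wise regressors (one entry per time bin).
# In a real analysis these come from your SPM design
# (e.g. stick functions with parametric PE values).
rng = np.random.default_rng(0)
n = 200
prediction_error = rng.standard_normal(n)    # parametric PE values
valence = np.repeat([1.0, -1.0], n // 2)     # block regressor: +/- valence
stim_type = np.tile([1.0, -1.0], n // 2)     # block regressor: stimulus type

# Interaction regressors: pair-wise (element-wise) products
val_x_pe = valence * prediction_error
stim_x_pe = stim_type * prediction_error

# Check collinearity between the two modulatory inputs before fitting.
# A high |r| means the DCM cannot distinguish their effects on A->C.
r = np.corrcoef(val_x_pe, stim_x_pe)[0, 1]
print(f"correlation between modulatory inputs: {r:.2f}")
```

If the two interaction regressors turn out to be highly correlated, that would be a reason to simplify further (e.g. one modulatory input at a time).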
I'd be interested to hear if anyone else has an opinion on the shorter integration time step of non-linear DCM - I'm not convinced it will make a difference.
Hope that helps, good luck!
Peter.
-----Original Message-----
From: Brian Numelin Haagensen [mailto:[log in to unmask]]
Sent: 02 June 2014 12:23
To: Zeidman, Peter; [log in to unmask]
Subject: Re: [SPM] stochastic DCM
Dear Peter,
thanks for the reply.
In this case, it's an event-related reversal-learning paradigm, divided into 4 sessions according to a 2 x 2 design with varying stimulus type (dimension 1) and valence of feedback (dimension 2). We model prediction errors as a parametric modulation on the outcome onsets.
We find that the prediction error effect changes sign in one region, according to stimulus type and valence. We then seek the source of this change in sign by comparing connectivity between this region and two cortical regions that show a constant effect of prediction error across all dimensions. So, we have prediction error as direct input to these cortical regions and a block corresponding to stimulus type and valence as the modulatory input. Based on the 2nd-level parametric effects, we would say that the observed neural activity is due to the experiment.
Concerning ROIs: we extract using a mask based on AAL and the 2nd-level SPM{t} - typically around 80 voxels. We include all voxels per subject within this mask (threshold set to 1). The % variance explained by the 1st PC was always between 70 and 90. The inclusion of all voxels might be a problem with respect to noise - but we've also done DCM in another event-related paradigm, where we used 0.05 uncorrected on individual SPM{F} maps of the effects of interest and spheres on the peaks, and in this case too we had flat-lines (or, when changing hyper-priors, we could avoid flat-lines but still explained only modest variance, around 5-15%).
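For reference, the % variance explained by the 1st PC of an ROI timeseries matrix can be checked along these lines. This is a generic numpy sketch (not SPM's own extraction routine), with a toy ROI of 80 voxels sharing a common signal plus noise:

```python
import numpy as np

def first_pc_variance(Y):
    """Fraction of variance explained by the 1st principal component.
    Y: (timepoints x voxels) matrix of ROI timeseries."""
    Yc = Y - Y.mean(axis=0)                  # remove per-voxel means
    s = np.linalg.svd(Yc, compute_uv=False)  # singular values, descending
    var = s ** 2                             # variance along each PC
    return var[0] / var.sum()

# Toy ROI: one shared signal across 80 voxels, plus voxel-wise noise
rng = np.random.default_rng(1)
signal = rng.standard_normal((150, 1))
Y = signal @ np.ones((1, 80)) + 0.5 * rng.standard_normal((150, 80))
print(f"1st PC explains {100 * first_pc_variance(Y):.0f}% of variance")
```

A low fraction here would suggest the mask mixes voxels with heterogeneous timecourses, which is one way including all voxels could hurt.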
So, we think the issue might be related more to the application of DCM to event-related designs - the GLM result may average noise out over the entire time-series, but when trying to model trial-wise dynamics there might be a problem for deterministic DCM, as noise is very much present in the trial-wise dynamics? This was our rationale for using the stochastic approach.
Concerning the use of non-linear DCM, our thought was to exploit the shorter integration time step - so we would just have a D-matrix of zeros, and would still primarily be interested in the B-matrix.
Best regards, Brian
On 02-06-2014 11:16, Zeidman, Peter wrote:
> Dear Brian,
> I think it would be good to try to diagnose why your model estimation isn't getting off the ground before turning to stochastic DCM. Some questions:
>
> - What kind of task are your participants performing, and what's the experimental design? Do you think the neural activity you observe is caused by your experimental manipulations, or by endogenous activity? These considerations will have a big impact on the success of your models. E.g. if you had an autobiographical memory recall task over many seconds, it might be fair to argue that most activity is caused endogenously rather than by your cue, and thus stochastic DCM would be favourable.
>
> - You say you get robust 1st level main effects. How are you defining your ROIs? Based on single-subject activation clusters? Or anatomically?
>
> I'm not sure whether you'll get an advantage from non-linear DCM - it depends on whether you hypothesise that a region modulates a connection. Give it a try if you think it makes sense. You could also try 2-state DCM, which has richer dynamics and so might stand a better chance of fitting your data.
>
> Best,
> Peter.
>
> Peter Zeidman, PhD
> Methods Group
> Wellcome Trust Centre for Neuroimaging
> 12 Queen Square
> London WC1N 3BG
> [log in to unmask]
>
>
>
>
> -----Original Message-----
> From: SPM (Statistical Parametric Mapping) [mailto:[log in to unmask]]
> On Behalf Of Brian Haagensen
> Sent: 31 May 2014 14:04
> To: [log in to unmask]
> Subject: [SPM] stochastic DCM
>
> Dear DCM experts,
>
> we apply bilinear DCM to event-related fMRI data, but often see that the model flat-lines, even with tighter hyper-priors than the default ones. This is the case even for quite robust 1st-level main effects of the inputs.
>
> What is the opinion among you on the use of stochastic DCMs in a case like this, i.e. where we also have known inputs to the system? My own thought would be that if we're interested in inference on model space, this approach would be OK, but I'm more in doubt concerning inference on model parameters - because noise explains so much of the data, the posterior parameter estimates are very small, in our case typically having posterior probabilities around 0.5. So I guess stats and correlations on these would be problematic?
>
> A preferable option, if we're interested in the parameters, might be to use the nonlinear integration scheme with its shorter time step, because our modulatory input (multiplied on B) is event-related (more like neural activity multiplied on D) - do you have some thoughts on this?
>
> Thanks for your time!
>
> Best regards, Brian Haagensen
>