Dear Peter,

Thanks for the reply.
In this case, it's an event-related reversal-learning paradigm, divided 
into 4 sessions according to a 2x2 design varying stimulus type 
(dimension 1) and feedback valence (dimension 2). We model Prediction 
Errors as a parametric modulation on the outcome onsets.
We find that the Prediction Error effect changes sign in one region 
according to stimulus type and valence. We then look for the source of 
this sign change by comparing connectivity between this region and 
two cortical regions that show a constant Prediction Error effect 
across both dimensions. So we have the Prediction Error as direct input 
to these cortical regions, and a block regressor corresponding to 
stimulus type and valence as the modulatory input. Based on the 2nd-level 
parametric effects, we would say that the observed neural activity is 
driven by the experiment.
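
For concreteness, here is roughly what our specification looks like (a 
minimal MATLAB/SPM sketch with placeholder sizes and connections, not 
our exact model; it assumes DCM.U already holds the two inputs - the 
Prediction Error and the stimulus-type/valence block - and DCM.Y the 
extracted time series):

    % Minimal sketch of the bilinear specification (placeholder values)
    n = 3;                        % region 1 = sign-flipping region, 2-3 = cortical
    DCM.a = ones(n,n);            % full endogenous (A) connectivity
    DCM.b = zeros(n,n,2);         % modulatory (B) effects, one slab per input
    DCM.b(1,[2 3],2) = 1;         % block modulates connections from regions 2-3 to region 1 ...
    DCM.b([2 3],1,2) = 1;         % ... and the reciprocal connections
    DCM.c = zeros(n,2);           % driving (C) inputs
    DCM.c([2 3],1) = 1;           % Prediction Error drives the two cortical regions
    DCM.d = zeros(n,n,0);         % bilinear model: no nonlinear (D) effects
    DCM = spm_dcm_estimate(DCM);  % invert the model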

Concerning ROIs: we extract time series using a mask based on AAL 
intersected with the 2nd-level SPM{t} - typically around 80 voxels. We 
include all voxels per subject within this mask (threshold set to 1). 
The variance explained by the 1st PC was always between 70 and 90%. 
Including all voxels might be a problem with respect to noise, but 
we've also run DCM on another event-related paradigm, where we used 
p < 0.05 uncorrected on the individual SPM{F} maps of the effects of 
interest with spheres centred on the peaks, and in that case we also 
got flat-lines (or, with changed hyper-priors, we could avoid 
flat-lines but still explained only a modest 5-15% of the variance).
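
For what it's worth, this is how we read off the 1st-PC variance from 
the saved VOI files (a small sketch; the file name is just an example, 
and we rely on xY.s holding the SVD eigenvalues written by spm_regions):

    % Percent variance explained by the 1st principal component of a VOI
    load('VOI_myROI_1.mat', 'xY');     % example file name
    pc1 = 100 * xY.s(1) / sum(xY.s);   % xY.s: eigenvalues from spm_regions
    fprintf('1st PC explains %.1f%% of ROI variance\n', pc1);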

So, we think the issue might be more related to applying DCM to 
event-related designs: the GLM may average the noise out by looking at 
the entire time-series, but when trying to model trial-wise dynamics, 
deterministic DCM might run into problems, because noise is also very 
much present in the trial-wise dynamics? This was our rationale behind 
trying the stochastic approach.
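
Concretely, what we tried amounts to the following (a sketch; DCM is 
the already-specified model from above):

    % Re-invert the same model as stochastic DCM
    DCM.options.stochastic = 1;   % model endogenous/state noise explicitly
    DCM = spm_dcm_estimate(DCM);
    spm_dcm_fmri_check(DCM);      % diagnostics (if your SPM version has it)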

Concerning the use of non-linear DCM, our thought was to exploit its 
shorter integration time step - so we would just have a D-matrix of 
zeros and would still primarily be interested in the B-matrix.
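
In code, what we have in mind is something like this (again a sketch; 
our understanding - possibly wrong, and part of what we are asking - is 
that options.nonlinear switches spm_dcm_estimate to the finer, 
Jacobian-based integrator even when all D-parameters are fixed at zero):

    % Request the nonlinear variant with no actual nonlinear effects
    DCM.options.nonlinear = 1;
    DCM.d = zeros(n,n,n);         % one n-by-n slab per region, all zero
    DCM = spm_dcm_estimate(DCM);  % inference still focused on the B-matrix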

Best regards, Brian

On 02-06-2014 11:16, Zeidman, Peter wrote:
> Dear Brian,
> I think it would be good to try to diagnose why your model estimation isn't getting off the ground before turning to stochastic DCM. Some questions:
>
> - What kind of task are your participants performing, and what's the experimental design? Do you think the neural activity you observe is caused by your experimental manipulations, or by endogenous activity? These considerations will have a big impact on the success of your models. E.g. if you had an autobiographical memory recall task over many seconds, it might be fair to argue that most activity is caused endogenously rather than by your cue, and thus stochastic DCM would be favourable.
>
> - You say you get robust 1st level main effects. How are you defining your ROIs? Based on single-subject activation clusters? Or anatomically?
>
> I'm not sure if you'll get an advantage from non-linear DCM - it depends on whether you hypothesise that a region modulates a connection. Give it a try if you think it makes sense. You could also try 2-state DCM, which has richer dynamics and so might stand a better chance of fitting your data.
>
> Best,
> Peter.
>
> Peter Zeidman, PhD
> Methods Group
> Wellcome Trust Centre for Neuroimaging
> 12 Queen Square
> London WC1N 3BG
> [log in to unmask]
>
>
>
>
> -----Original Message-----
> From: SPM (Statistical Parametric Mapping) [mailto:[log in to unmask]] On Behalf Of Brian Haagensen
> Sent: 31 May 2014 14:04
> To: [log in to unmask]
> Subject: [SPM] stochastic DCM
>
> Dear DCM experts,
>
> we apply bilinear DCM to event-related fMRI data, but often see that the model flat-lines, even with tighter hyper-priors than the defaults. This is the case even for quite robust 1st-level main effects of the inputs.
>
> What is the opinion among you on the use of stochastic DCMs in a case like this, i.e. where we also have known inputs to the system? My own thought would be that if we're interested in inference on model space, this approach would be OK, but I'm more in doubt concerning inference on model parameters - because noise explains so much of the data, the posterior parameter estimates are very small, in our case typically having posterior probabilities around 0.5. So I guess stats and correlations on these would be problematic?
>
> A preferable option, if we're interested in the parameters, might be to use the non-linear integration scheme with its shorter time step, because our modulatory input (multiplied on B) is event-related (more like neural activity multiplied on D) - do you have any thoughts on this?
>
> Thanks for your time!
>
> Best regards, Brian Haagensen
>