Dear Kate,
On Thu, Nov 13, 2008 at 9:58 PM, <[log in to unmask]> wrote:
> Vladimir,
>
> Thanks for your explanation of the individual head models in spm8.
>
> When I run the multimodal faces demo dataset in spm5, I don't get
> exactly the fig 33.9 result, but I do get a location with about 95%
> variance explained.
> When I run in spm8, I only get 69.9% variance explained.
> Can you think of what might have changed to account for this ?
>
The major change is that SPM8 uses a different head model (BEM) and
different code for computing the lead fields. There were also some
changes in the Bayesian routines. We would expect these changes to
improve the results, but we never actually got around to doing the
comparison you are doing. It'd be interesting to investigate this. Are
the two solutions similar in general?
> Also, is there an easy method to take a mm location, such as
> (47, -49, -8) and generate the estimated response ? I didn't see
> the 'virtual electrode' button in my spm8 render window, as I did in spm5.
> There may be some problem with my GUI window, but I am not sure.
>
The render window is undergoing some changes so I'm not sure how well
it works at the moment. Another way is to put your coordinates in the
small input box in the 3D GUI (below the 'Invert' button) and press
the 'mip' button. If you enter a single number, it will be interpreted
as a time in ms. This is documented in the SPM8 manual as far as I
remember.
> Finally, when the co-register is done to the template brain, are the
> locations of the co-registered sensors saved ? I tried
> using the D.sensors('EEG') function, after co-registration, but
> those seemed to be the original sensor locations.
>
Since it is possible to do the coregistration differently for each
inversion, the coregistered sensors are stored separately each time,
in D.inv{k}.datareg.sensors, where k is the index of the inv cell
(1 if you only did one inversion).
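For example, a minimal MATLAB sketch (the dataset filename here is a
placeholder; the field layout follows the description above):

```matlab
% Load the M/EEG dataset (filename is hypothetical)
D = spm_eeg_load('mydata.mat');

% Original sensor positions, as stored in the file
orig_sens = D.sensors('EEG');

% Coregistered sensor positions for inversion k
k = 1;  % 1 if you only did one inversion
coreg_sens = D.inv{k}.datareg.sensors;
```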
Vladimir