Dear all,
Below is a summary of our correspondence with Jérôme Courtemanche who
asked off the list some very important questions about using SPM8b to
analyse CTF MEG data. Some of the issues are also relevant for other
MEG systems and for EEG.
Enjoy,
Vladimir
-------------------------------------------------------------------
On Sat, Nov 1, 2008 at 7:14 PM, Jérôme Courtemanche wrote:
> Hello Dr Litvak,
>
> I thought I was dreaming when I read the message below, because I
> have almost finished my analysis with CTF-acquired data. Before importing
> the data into SPM, I used the 3rd gradient noise reduction option in the basic CTF
> software (DataEditor). I am doing source reconstruction and ... the solution
> seems fine (more posterior activation in early visual ERF components).
> Please give me some information about all this. Thank you very much.
>
> "Another problem we are presently discussing internally is that SPM does
> not represent MEG gradiometers correctly and SPM5 source reconstruction
>
> is only correct for CTF data recorded with 'raw' setting. I'm not sure
> what kind of montage you have in BTi, but if you have things like 3rd
> gradient or planar gradiometers I'd be extra-careful. Feel free to play
>
> around but please send us the details before you try to publish
> anything. Most of these problems will hopefully be resolved in SPM8."
>
>
On Sun, Nov 2, 2008 at 11:57 PM, Vladimir Litvak wrote:
> Dear Jerome,
>
> SPM5 indeed does not compute the leadfields for 3rd gradient data
> correctly. The fact that your results make sense is not surprising: in
> the example we checked there was some difference, but it was not very
> big. However, I don't think anyone can quantify for you what the worst
> case is or how much error you have, so the best thing is to do
> what's right. SPM8b, which you can now download, handles 3rd gradient
> data correctly. You can convert your preprocessed SPM5 files to SPM8;
> the only thing you'll have to do is re-read the sensor definitions
> from the original CTF files. There is a function for that in the MEEG
> tools toolbox that comes with SPM5 (spm_eeg_copygrad). You might
> encounter some errors due to recent changes in the code, but I'll
> recheck everything as soon as I can, because this functionality is also
> required by people here, and it should be no problem to make it work
> smoothly. You'll have to redo the source reconstructions and any source
> level analysis you've done. Some things in SPM8 are similar to
> SPM5 and some are quite different, but there is a detailed manual to
> get you started.
>
> Best,
>
> Vladimir
>
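[Editor's note: to illustrate why 3rd gradient balancing matters here. With synthetic gradient compensation the recorded signal is the MEG channels minus a weighted sum of the reference channels, so the leadfields used for inversion must be balanced the same way. A minimal numpy sketch; all matrices are random stand-ins, not real CTF coefficients, and the channel counts are only CTF-like assumptions.]

```python
import numpy as np

# Why 3rd-gradient balancing matters for source reconstruction: with
# synthetic gradient compensation the recorded signal is
#     y = y_meg - C @ y_ref
# so the leadfields used for inversion must be balanced the same way.
# Everything below is a random stand-in, not real CTF coefficients.

rng = np.random.default_rng(0)

n_meg, n_ref, n_src = 275, 29, 3     # CTF-like sizes (29 references is an assumption)
L_meg = rng.standard_normal((n_meg, n_src))    # unbalanced leadfield at MEG sensors
L_ref = rng.standard_normal((n_ref, n_src))    # leadfield at reference sensors
C = 0.1 * rng.standard_normal((n_meg, n_ref))  # balancing coefficients (G3BR analogue)

# Balanced leadfield: the same linear operation that was applied to the data
L_balanced = L_meg - C @ L_ref

# Using the unbalanced L_meg with 3rd-gradient data carries this error:
rel_err = np.linalg.norm(L_balanced - L_meg) / np.linalg.norm(L_meg)
print(f"relative leadfield change from balancing: {rel_err:.3f}")
```

How big the error is in practice depends on the real coefficients, which is exactly why the thread recommends letting SPM8b undo or apply the balancing rather than guessing.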
On Mon, Nov 3, 2008 at 5:34 PM, Jérôme Courtemanche wrote:
> Dear Vladimir,
>
> I converted using /external/fileio/private, with the following settings:
> define settings: yes
> how to read: continuous
> read everything: yes
> read across trials: yes
> what channels: meg
>
> and the only message I got in MATLAB (not in red) is:
> converting units from 'cm' to 'mm'
> converting units from 'cm' to 'mm'
> checkmeeg: no channel type, assigning default
> checkmeeg: no units, assigning default
> checkmeeg: transform type missing, assigning default
> checkmeeg: data scale missing, assigning default
> checkmeeg: data type is missing, assigning default
> creating layout from cfg.grad
> creating layout for ctf275 system
> undoing the G3BR balancing
>
> Note: I have the anatomical MRIs and I want to use them. I have 9 subjects with 4
> runs.
>
> Many many thanks,
>
> Jérôme
>
On Mon, Nov 3, 2008 at 6:33 PM, Vladimir Litvak wrote:
> Dear Jerome,
>
>
> The messages you got are OK. They do not indicate a problem.
> You can use individual MRIs. What I suggest you do is first
> coregister and reslice your images using EEGtemplates/smri.nii as the
> template. Then you can use the function Fieldtrip-SPM MEG head
> modelling in MEEG tools, which will ask you for your SPM8 dataset and
> the MRI image as inputs and will generate a BEM model. At the end it
> has some options about whether to use the SPM or the Fieldtrip variant of the
> head and inner skull meshes. I presently use the FT option for both,
> but you can try and see what looks best. Make sure, though, that the
> cortical mesh doesn't stick out of the inner skull mesh. Perhaps then
> it'd be better to use FT for the head mesh and SPM for the inner skull
> mesh. You can then load the MEEG dataset into SPM's 3D GUI and proceed
> with coregistration.
>
>
> Vladimir
>
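[Editor's note: the advice above about checking that the cortical mesh does not stick out of the inner skull mesh can be sketched numerically. This is not an SPM or Fieldtrip function; it assumes both surfaces are roughly star-shaped around the skull centroid and uses toy spherical meshes.]

```python
import numpy as np

# Rough check that no "cortex" vertex sticks out of the "inner skull"
# surface. Real meshes are not spheres; this sketch assumes both
# surfaces are star-shaped around the skull centroid, so a vertex is
# outside if its radius exceeds the skull radius in the same direction.

def sticks_out(cortex, skull, margin=0.0):
    """Return indices of cortex vertices lying outside the skull surface."""
    centre = skull.mean(axis=0)
    c = cortex - centre
    s = skull - centre
    s_dir = s / np.linalg.norm(s, axis=1, keepdims=True)
    bad = []
    for i, v in enumerate(c):
        r = np.linalg.norm(v)
        if r == 0:
            continue
        # skull vertex pointing in the most similar direction
        j = np.argmax(s_dir @ (v / r))
        if r > np.linalg.norm(s[j]) + margin:
            bad.append(i)
    return bad

# Toy data: skull = sphere of radius 85 mm, cortex = sphere of radius
# 60 mm, with one vertex deliberately pushed out to 120 mm.
rng = np.random.default_rng(1)

def sphere(n, radius):
    p = rng.standard_normal((n, 3))
    return radius * p / np.linalg.norm(p, axis=1, keepdims=True)

skull = sphere(1000, 85.0)
cortex = sphere(200, 60.0)
cortex[0] *= 2.0   # now at 120 mm, outside the skull

print(sticks_out(cortex, skull))   # indices of offending vertices
```

In practice one would of course eyeball the meshes in the GUI, as the thread suggests; a check like this only automates the obvious cases.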
On Tue, Nov 4, 2008 at 4:10 PM, Jérôme Courtemanche wrote:
> There we go, it is converted and I can see it! I guess this message is OK:
>
> Warning: Could not obtain electrode locations automatically.
>> In spm_eeg_convert at 388
> In spm_eeg_convert_ui at 79
> Warning: Could not obtain fiducials automatically.
>> In spm_eeg_convert at 395
> In spm_eeg_convert_ui at 79
> checkmeeg: no channel type, assigning default
> checkmeeg: no units, assigning default
> checkmeeg: transform type missing, assigning default
> checkmeeg: data scale missing, assigning default
> checkmeeg: data type is missing, assigning default
On Tue, Nov 4, 2008 at 4:17 PM, Vladimir Litvak wrote:
> Yes, that's exactly the point: there are no sensors or fiducials in
> that file, and you should add them using spm_eeg_copygrad.
>
> Vladimir
On Tue, Nov 4, 2008 at 5:19 PM, Jérôme Courtemanche wrote:
> Dear Vladimir,
>
> 1) The conversion of my averaged SPM5 file is correct. I compared the data
> from one sensor across SPM versions and it is the same.
> 2) I used the spm_eeg_copygrad.
> We can see that:
> a) the MRT53 sensor is not at the same place in 2D
> b) in 3D, sensors pass through the head.
>
> Thanks for your great help
>
> Jerome
On Tue, Nov 4, 2008 at 5:30 PM, Vladimir Litvak wrote:
>
>> We can see that:
>> a) the MRT53 sensor is not at the same place in 2D
>
> Do you know what the right position actually is? I'd bet that SPM8
> does the right thing, as it uses what is read from your dataset
> directly. SPM5 used a standard template, which might not be correct for
> your CTF machine. But if you can compare with the layout of your
> sensors, which you probably have somewhere, perhaps you should do that.
>
>> b) in 3D, sensors pass through the head.
>>
>
> There might be several reasons. First, the display in the reviewing tool that you are using
> might not work correctly. You should trust the display that you get after coregistration
> in the graphics window, which is uglier but more representative of the
> right thing. I suspect you haven't done the coregistration at all and
> just pressed the button in the GUI. In that case what you see is not
> very meaningful. You should do the coregistration properly (see the
> manual - 3D source imaging chapter) and then see what you get.
>
> Best,
>
> Vladimir
>
On Tue, Nov 4, 2008 at 6:52 PM, Jérôme Courtemanche wrote:
>
> I followed your steps (coregister and reslice to the EEG template, Fieldtrip,
> FT, FT).
> However, when I load the SPM8 file and go for coregistration, I am not asked
> about the headshape (as the manual says). It goes straight to nas, lpa, rpa.
>
> I would like to use the headshape if possible.
>
>
> Best regards,
>
> Jerome
On Tue, Nov 4, 2008 at 7:34 PM, Vladimir Litvak wrote:
> Do you actually have a headshape measurement? You would only have one
> if you've done it with a Polhemus outside the scanner. If you have it,
> then there is an explanation in the manual about how to use it (it's in the
> preprocessing chapter, in the part that explains 'Prepare'). If you
> don't, then the fiducials are all you have.
>
> Vladimir
On Thu, Nov 6, 2008 at 1:42 AM, Jérôme Courtemanche wrote:
> Dear Dr Litvak,
>
> I could do a source reconstruction! Below are some
> questions; just answer what you can.
>
> 1) As you suggested, I checked out the 2D sensor position according to the
> CTF software, and it appears that SPM8 is right.
>
>
> 3) The matlab message after clicking forward model was:
>
> computing surface normals
> Warning: Matrix is close to singular or badly scaled.
> Results may be inaccurate. RCOND = 1.088715e-017.
>> In forwinv\private\meg_ini>getcoeffs at 94
> In forwinv\private\meg_ini at 36
> In forwinv\private\prepare_vol_sens at 170
> In forwinv_prepare_vol_sens at 11
> In spm_eeg_inv_forward at 46
> In spm_eeg_inv_forward_ui at 28
> In spm_eeg_inv_imag_api>Forward_Callback at 88
> In spm_eeg_inv_imag_api at 54
> Foward model complete - thank you
>
> 4) About filtering. MEG studies are largely exploratory, so I am not inclined
> to use any filter. However, in the inversion we are forced to choose a filter.
> a) Is it mandatory for source reconstruction?
> b) If not, what should I comment out exactly?
> c) Do you think it is wrong not to use filters in preprocessing?
>
> 5) Here is a tough question about a simple ERF source analysis:
> ----------------------------------------------------------
> a) Say my MEG epochs last from -100 to 3000 ms and my wois are [190
> 230] & [2700 2750].
>
> I see 3 options:
> - invert the window [0 3000] and generate images on the wois [290 330] &
> [2700 2750]
> - invert the windows [290 330] & [2700 2750] and generate their images (not
> recommended in the manual)
> - invert shorter windows, e.g. [0 500] & [2500 3000], and generate the wois
> [290 330] & [2700 2750], respectively.
>
> Maybe there is no way yet to make an objective decision, but the results are
> very different (image attached). May I point out that there is a
> limited number of dipoles in the solution but many processes involved in
> 3000 ms.
> b) A major sub-question: is there a minimum number of samples to be
> included in a window for a meaningful source reconstruction?
>
> 6) To be sure about analysis in sensor space. If I have:
> - "prepared" the file, including the shape structure,
> - coregistered as seen in the attached file,
> - returned to Prepare and clicked "project 3D" in the 2D projection tab,
> - saved it all,
> will a generated image using mat-to-3D-images be in a common head position,
> allowing a meaningful group analysis?
>
> Gratefully,
>
> Jerome
On Thu, Nov 6, 2008 at 8:39 AM, Vladimir Litvak wrote:
> Dear Jerome,
>
>> I could do a source reconstruction! You saved me. Below are some
>> questions; just answer what you can.
>>
>
> Congratulations!!! As far as I know you are the first external user of
> SPM8b who has managed to get this far. I've actually just got an e-mail
> from another user who also successfully did a source reconstruction, so
> it's a tie.
>
>
>> 1) As you suggested, I checked out the 2D sensor position according to the
>> CTF software, and it appears that SPM8 is right.
>>
>
> Good.
>
>
>> 3) The matlab message after clicking forward model was:
>>
>> computing surface normals
>> Warning: Matrix is close to singular or badly scaled.
>> Results may be inaccurate. RCOND = 1.088715e-017.
>>> In forwinv\private\meg_ini>getcoeffs at 94
>> In forwinv\private\meg_ini at 36
>> In forwinv\private\prepare_vol_sens at 170
>> In forwinv_prepare_vol_sens at 11
>> In spm_eeg_inv_forward at 46
>> In spm_eeg_inv_forward_ui at 28
>> In spm_eeg_inv_imag_api>Forward_Callback at 88
>> In spm_eeg_inv_imag_api at 54
>> Foward model complete - thank you
>>
>
> That's normal. It has something to do with mesh vertices being too close.
>
>> 4) About filtering. MEG studies are largely exploratory, so I am not inclined
>> to use any filter. However, in the inversion we are forced to choose a filter.
>> a) Is it mandatory for source reconstruction?
>> b) If not, what should I comment out exactly?
>> c) Do you think it is wrong not to use filters in preprocessing?
>>
>
> You are only forced to choose a filter if you do a 'custom' inversion;
> in the standard inversion you are not. I think there should be an option not to
> filter at this stage. However, as we recently discussed with Prof.
> Friston, his inversion is only sensitive to changes from baseline and
> not to static topographies. Thus I don't think that filtering between
> 1 and 256 Hz will affect your results in most cases. The only good
> reason not to filter is when you have a sharp stimulus artefact and
> you don't want it to leak into other time frames.
>
>
>> 5) Here is a tough question about a simple ERF source analysis:
>> ----------------------------------------------------------
>> a) Say my MEG epochs last from -100 to 3000 ms and my wois are [190
>> 230] & [2700 2750].
>>
>> I see 3 options:
>> - invert the window [0 3000] and generate images on the wois [290 330] &
>> [2700 2750]
>> - invert the windows [290 330] & [2700 2750] and generate their images (not
>> recommended in the manual)
>> - invert shorter windows, e.g. [0 500] & [2500 3000], and generate the wois
>> [290 330] & [2700 2750], respectively.
>>
>> Maybe there is no way yet to make an objective decision, but the results are
>> very different (image attached). May I point out that there is a
>> limited number of dipoles in the solution but many processes involved in
>> 3000 ms.
>> b) A major sub-question: is there a minimum number of samples to be
>> included in a window for a meaningful source reconstruction?
>>
>
> I'm CCing this to Karl, as this is already a level of discussion he
> might want to take part in. Remember that there is not much practical
> experience with these tools here either, so my answers are not to be
> taken as the supreme truth. As I wrote in the manual, I would think
> that you should invert the whole time period (option a). If you just
> invert sub-windows you can indeed get different results, as the
> algorithm does not see all the sources well, but I'd predict that the
> results from option a will be better. The number of dipoles in the
> model is so huge that it's definitely not a limiting factor. I know
> that, for the reason I mentioned above, SPM would not be able to invert
> a single topography (unlike standard LORETA), but I don't know what the
> minimum is (perhaps Karl can tell us). In any case I don't think it's
> an interesting question, as you shouldn't limit your time window.
>
>
>> 6) To be sure about analysis in sensor space. If I have:
>> - "prepared" the file, including the shape structure,
>> - coregistered as seen in the attached file,
>> - returned to Prepare and clicked "project 3D" in the 2D projection tab,
>> - saved it all,
>> will a generated image using mat-to-3D-images be in a common head position,
>> allowing a meaningful group analysis?
>
> Here it's a matter of taste. The image will be in coordinates that are
> defined relative to the head. Thus in two subjects you might get the
> same sensor in different places. But perhaps that's actually the right
> thing, because the head was in different places, so the sensor was
> above different brain areas. Since you export to images, and the images
> don't care about sensors, there is no problem doing statistics on this
> kind of data. Alternatively, you can use the same template for all
> subjects. You can, for instance, project one subject, save the result as
> a template and load it for all the others. That will have exactly the
> opposite pros and cons. So I'd recommend you play around (if you
> have time) and see what works best for you. Also, the more you smooth
> the images the less important the difference will be, so you might want
> to use smoothing to make the activations coincide better.
>
>
> Best,
>
> Vladimir
>
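[Editor's note: the 1-256 Hz band-pass mentioned in the filtering discussion above can be sketched to show why it rarely matters for this inversion: it removes a static offset, and hence a static topography, while leaving an evoked oscillation intact. The brick-wall FFT filter and the 1200 Hz sampling rate below are illustrative assumptions; SPM's own filtering is different.]

```python
import numpy as np

# A 1-256 Hz band-pass as a brick-wall FFT filter: zero all Fourier
# components outside the band. This removes the DC offset (a static
# topography) but leaves a 10 Hz "evoked" component untouched.

def bandpass(x, fs, lo=1.0, hi=256.0):
    """Zero all Fourier components of x outside [lo, hi] Hz."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(x.size, d=1.0 / fs)
    X[(f < lo) | (f > hi)] = 0.0
    return np.fft.irfft(X, n=x.size)

fs = 1200.0                              # assumed sampling rate
t = np.arange(0, 3.0, 1.0 / fs)          # a 3 s epoch
x = 5.0 + np.sin(2 * np.pi * 10 * t)     # DC offset + 10 Hz oscillation

y = bandpass(x, fs)
print(f"mean before: {x.mean():.2f}, after: {abs(y.mean()):.2e}")
```

The static 5.0 offset disappears while the oscillation keeps its amplitude, which is the sense in which an inversion sensitive only to changes from baseline is largely unaffected by such a filter.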
On Thu, Nov 6, 2008 at 8:58 AM, Vladimir Litvak wrote:
> One thing I forgot to mention. You might want to use group inversion
> (see the manual) to get results more suitable for group statistics.
>
> Vladimir
On Thu, Nov 6, 2008 at 12:22 PM, Karl Friston wrote:
> Dear Jérôme,
>
>> 5) Here is a tough question about a simple ERF source analysis:
>> ----------------------------------------------------------
>> a) Say my MEG epochs last from -100 to 3000 ms and my wois are [190
>> 230] & [2700 2750].
>>
>> I see 3 options:
>> - invert the window [0 3000] and generate images on the wois [290 330] &
>> [2700 2750]
>> - invert the windows [290 330] & [2700 2750] and generate their images
>> (not recommended in the manual)
>> - invert shorter windows, e.g. [0 500] & [2500 3000], and generate the wois
>> [290 330] & [2700 2750], respectively.
>>
>> Maybe there is no way yet to make an objective decision, but the results
>> are very different (image attached). May I point out that there is a
>> limited number of dipoles in the solution but many processes involved in
>> 3000 ms.
>> b) A major sub-question: is there a minimum number of samples to be
>> included in a window for a meaningful source reconstruction?
>
> I think Vladimir is absolutely right: there is no correct answer. The
> source reconstruction will be optimal for the data (window) used, and this will
> change with the window. Having said this, the precision of the estimates will
> increase with the amount of data, and therefore the windows should not be too short.
> In principle, it would be possible to invert two time bins (giving one degree of freedom
> to differences from the fixed topography). However, I would try to capture the transients of
> interest (i.e., capture the experimentally evoked variance).
>
> I hope this helps - Karl
>
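[Editor's note: to make the point about window length concrete, the number of samples in a window of interest is just its duration times the sampling rate. The thread never states the sampling rate; 600 Hz below is only an assumed example, and the helper function is hypothetical.]

```python
# Number of samples in a window of interest, given the sampling rate.
# Relevant to the point that estimation precision grows with the amount
# of data: short wois contain very few samples.

def woi_samples(woi_ms, fsample):
    """Samples in a peristimulus window [t0, t1], both ends in ms, inclusive."""
    t0, t1 = woi_ms
    return int(round((t1 - t0) / 1000.0 * fsample)) + 1

fs = 600.0   # assumed sampling rate, not taken from the thread
for woi in ([290, 330], [2700, 2750], [0, 3000]):
    print(woi, "->", woi_samples(woi, fs), "samples")
```

At this assumed rate a 40 ms woi contains only 25 samples, whereas the full [0 3000] window contains 1801, which is the sense in which the longer window gives the inversion far more data to work with.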
On Fri, Nov 7, 2008 at 7:04 PM, Jérôme Courtemanche wrote:
> Dear Dr Litvak,
>
> 1) Dr Friston and you seem to agree, so I will follow the advice to invert a
> woi of [0 3000].
>
> 2) I'd like to understand more about 2D projection again. You say that the
> mat-to-3D images will be in coordinates that are defined relative to the
> head. I wonder, though, whether this is based only on a rigid transformation
> using the fiducials or the headshape.
>
> a) If I coregistered using my anatomical scan and the 3D reconstruction
> dialog, does the calculation take any normalization into consideration? In
> principle, will a person with a small head, placed exactly the same way
> under the sensors as a person with a big head, get a similar 2D projection?
> &
> b) If in the Prepare dialog I use my headshape to coregister to the
> EEG template, I see a stretching of my headshape. Should the mat-to-3D images
> be in coordinates calculated using the parameters of the stretching?
>
> Those questions are important, but in practice I didn't see any change in
> the mat-to-3D images (attached file). Note: I took care to save and to
> click 'apply', as mentioned in the manual.
>
> Thanks again!
>
> Jerome
On Fri, Nov 7, 2008 at 8:45 PM, Vladimir Litvak wrote:
> Dear Jerome,
>
> In the MEG case, coregistration indeed does not affect the way sensors are
> projected to 2D, as you found empirically. The sensor positions are
> read from the raw dataset in 'head coordinates', i.e. their positions
> will be different if the head position at the beginning of a run is
> different. However, coregistration does not affect that. What MEG
> coregistration does is just bring the head volume model into the same
> coordinate system as the sensors. The sensors stay unchanged. So
> 'sensor level analysis' is just what its name implies: it does not
> take into account any head model. It's different for EEG, as mentioned
> in the manual.
>
> Best,
>
> Vladimir
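[Editor's note: the point above, that MEG coregistration moves the head model into the sensor coordinate system while the sensors stay put, can be shown in miniature as a rigid-body transform. The rotation angle, translation, and random point sets below are arbitrary illustrative values, not anything from a real dataset.]

```python
import numpy as np

# MEG coregistration in miniature: a rigid-body transform brings the
# head model into the same coordinate system as the sensors; the sensor
# positions themselves are never touched.

def rigid_transform(points, R, t):
    """Apply rotation R and translation t to an (n, 3) array of points."""
    return points @ R.T + t

theta = np.deg2rad(10)               # rotation about the z axis
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([0.0, 5.0, -2.0])       # translation in mm

rng = np.random.default_rng(2)
mesh = rng.standard_normal((1000, 3)) * 60.0   # stand-in head model vertices

mesh_coreg = rigid_transform(mesh, R, t)

# Rigid transforms preserve within-mesh distances ...
d_before = np.linalg.norm(mesh[0] - mesh[1])
d_after = np.linalg.norm(mesh_coreg[0] - mesh_coreg[1])
print(f"vertex distance before/after: {d_before:.6f} / {d_after:.6f}")
# ... and are exactly invertible, so no shape information is lost:
# (mesh_coreg - t) @ R recovers the original mesh.
```

Because only the model moves, sensor-level quantities are identical before and after coregistration, which is why the 2D projections in the thread did not change.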