SPM Archives - SPM@JISCMAIL.AC.UK - November 2008

Subject: MEG analysis in SPM8b
From: Vladimir Litvak <[log in to unmask]>
Reply-To: Vladimir Litvak <[log in to unmask]>
Date: Sun, 9 Nov 2008 10:54:53 +0000
Content-Type: text/plain
Parts/Attachments: text/plain (489 lines)

Dear all,

Below is a summary of our correspondence with Jérôme Courtemanche, who
asked some very important questions off the list about using SPM8b to
analyse CTF MEG data. Some of the issues are also relevant for other
MEG systems and for EEG.

Enjoy,

Vladimir

-------------------------------------------------------------------



On Sat, Nov 1, 2008 at 7:14 PM, Jérôme Courtemanche  wrote:
> Hello Dr Litvak,
>
> I thought I was dreaming when I read the message below. That is because I
> have almost finished my analysis with CTF-acquired data. Before importing
> the data into SPM, I used the 3rd gradient noise reduction option in CTF's
> basic DataEditor software. I am doing source reconstruction and ... the solution
> seems fine (more posterior activation in early visual ERF components).
> Please give me some information about all this. Thank you very much.
>
> "Another problem we are presently discussing internally is that SPM does
> not represent MEG gradiometers correctly and SPM5 source reconstruction
>
> is only correct for CTF data recorded with 'raw' setting. I'm not sure
> what kind of montage you have in BTi, but if you have things like 3rd
> gradient or planar gradiometers I'd be extra-careful. Feel free to play
>
> around but please send us the details before you try to publish
> anything. Most of these problems will hopefully be resolved in SPM8."
>
>


On Sun, Nov 2, 2008 at 11:57 PM, Vladimir Litvak  wrote:
> Dear Jerome,
>
> SPM5 indeed does not compute the leadfields for 3rd gradient data
> correctly. The fact that your results make sense is not surprising. In
> the example we checked there was some difference, but it was not very
> big. However, I don't think anyone can quantify the worst case for you,
> or how much error you have, so the best thing is to do what's right.
> SPM8b, which you can now download, handles 3rd gradient data correctly.
> You can convert your preprocessed SPM5 files to SPM8, and the only thing
> you'll have to do is re-read the sensor definitions from the original
> CTF files. There is a function for that in the MEEG tools toolbox that
> comes with SPM5 (spm_eeg_copygrad). You might encounter some errors due
> to recent changes in the code, but I'll recheck everything as soon as I
> can, because this functionality is also required by people here and
> there should be no problem making it work smoothly. You'll have to redo
> the source reconstructions and source-level analysis if you've done any.
> Some things in SPM8 are similar to SPM5 and some are quite different,
> but there is a detailed manual to get you started.
>
> Best,
>
> Vladimir
>




On Mon, Nov 3, 2008 at 5:34 PM, Jérôme Courtemanche  wrote:
> Dear Vladimir,
>
>  I converted using /external/fileio/private, answering the prompts as follows:
> define settings: yes
> how to read: continuous
> read everything: yes
> read across trials: yes
> what channels: MEG
>
> and the only messages I got in MATLAB (none in red) were:
> converting units from 'cm' to 'mm'
> converting units from 'cm' to 'mm'
> checkmeeg: no channel type, assigning default
> checkmeeg: no units, assigning default
> checkmeeg: transform type missing, assigning default
> checkmeeg: data scale missing, assigning default
> checkmeeg: data type is missing, assigning default
> creating layout from cfg.grad
> creating layout for ctf275 system
> undoing the G3BR balancing
>
> Note: I have the anatomical MRIs and I want to use them. I have 9 subjects
> with 4 runs each.
>
> Many many thanks,
>
> Jérôme
>

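For reference, this conversion can also be scripted rather than run through the GUI prompts. The sketch below is only an illustration: the file name is a placeholder and the exact fields of the input structure to spm_eeg_convert may differ between SPM8b and later releases, so check the function's help before relying on it.

  % Scripted CTF conversion (file name hypothetical; struct fields assumed)
  spm('defaults', 'eeg');

  S = [];
  S.dataset    = '/data/subject1_run1.ds';   % original CTF dataset (placeholder path)
  S.continuous = 1;                          % read everything as continuous data (assumed field)
  S.channels   = 'MEG';                      % restrict to MEG channels (assumed field/value)
  D = spm_eeg_convert(S);                    % returns an @meeg object and writes *.mat/*.dat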

On Mon, Nov 3, 2008 at 6:33 PM, Vladimir Litvak wrote:
> Dear Jerome,
>
>
>  The messages you got are OK. They do not indicate a problem.
> You can use individual MRIs. What I suggest you do is first
> coregister and reslice your images using EEGtemplates/smri.nii as the
> template. Then you can use the Fieldtrip-SPM MEG head modelling
> function in MEEG tools, which will ask for your SPM8 dataset and the
> MRI image as inputs and will generate a BEM model. At the end it has
> some options about whether to use the SPM or FieldTrip variant of the
> head and inner skull meshes. I presently use the FT option for both,
> but you can try and see what looks best. Make sure, though, that the
> cortical mesh doesn't stick out of the inner skull mesh. If it does,
> it might be better to use FT for the head mesh and SPM for the inner
> skull mesh. You can then load the MEEG dataset into SPM's 3D GUI and
> proceed with coregistration.
>
>
> Vladimir
>

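A minimal sketch of the coregister-and-reslice step, using SPM's standard spatial coregistration batch and the template path mentioned above (the subject's image name is a placeholder and unspecified batch options are left at their defaults):

  % Coregister and reslice the individual structural MRI to the M/EEG template
  spm('defaults', 'eeg');
  spm_jobman('initcfg');

  template = fullfile(spm('Dir'), 'EEGtemplates', 'smri.nii');   % template named above
  smri     = '/data/subject1_struct.nii';                        % placeholder subject MRI

  matlabbatch = {};
  matlabbatch{1}.spm.spatial.coreg.estwrite.ref    = {template};
  matlabbatch{1}.spm.spatial.coreg.estwrite.source = {smri};
  matlabbatch{1}.spm.spatial.coreg.estwrite.other  = {''};

  spm_jobman('run', matlabbatch);    % writes a resliced, 'r'-prefixed copy of the MRI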

On Tue, Nov 4, 2008 at 4:10 PM, Jérôme Courtemanche wrote:
> There we go, it is converted, I can see it! I guess these messages are OK:
>
> Warning: Could not obtain electrode locations automatically.
>> In spm_eeg_convert at 388
>   In spm_eeg_convert_ui at 79
> Warning: Could not obtain fiducials automatically.
>> In spm_eeg_convert at 395
>   In spm_eeg_convert_ui at 79
> checkmeeg: no channel type, assigning default
> checkmeeg: no units, assigning default
> checkmeeg: transform type missing, assigning default
> checkmeeg: data scale missing, assigning default
> checkmeeg: data type is missing, assigning default


On Tue, Nov 4, 2008 at 4:17 PM, Vladimir Litvak <[log in to unmask]> wrote:
> Yes, that's exactly the point: there are no sensors and fiducials in
> that file, and you should add them using spm_eeg_copygrad.
>
> Vladimir

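A sketch of that step is shown below. The input form of spm_eeg_copygrad is an assumption here (the function lives in the MEEG tools toolbox, so check its help for the actual arguments); the file names are placeholders.

  % Re-attach CTF sensor and fiducial definitions to the converted dataset
  S = [];
  S.D       = spm_eeg_load('subject1_converted.mat');   % converted SPM8 dataset (placeholder)
  S.dataset = '/data/subject1_run1.ds';                  % original CTF dataset (placeholder)
  D = spm_eeg_copygrad(S);                               % argument form assumed; see its help

  % afterwards the dataset should report both sensors and fiducials
  sensors(D, 'MEG')
  fiducials(D)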


On Tue, Nov 4, 2008 at 5:19 PM, Jérôme Courtemanche  wrote:
> Dear Vladimir,
>
> 1) The conversion of my averaged SPM5 file is correct. I compared the data
> from one sensor across SPM versions and it is the same.
> 2) I used spm_eeg_copygrad.
> We can see that:
>     a) the MRT53 sensor is not at the same place in 2D
>     b) in 3D, sensors pass through the head.
>
> Thanks for your great help
>
> Jerome


On Tue, Nov 4, 2008 at 5:30 PM, Vladimir Litvak  wrote:
>
>> We can see that:
>>     a) the MRT53 sensor is not at the same place in 2D
>
> Do you know what the right position actually is? I'd bet that SPM8
> does the right thing, as it uses what is read directly from your
> dataset. SPM5 used a standard template which might not be correct for
> your CTF machine. But if you can compare with the layout of your
> sensors, which you probably have somewhere, perhaps you should do that.
>
>>     b) in 3D, sensors pass through the head.
>>
>
> There might be several reasons. First, the display in the reviewing
> tool you are using might not work correctly. You should trust the
> display you get after coregistration in the Graphics window, which is
> uglier but more representative of the right thing. I suspect you
> haven't done the coregistration at all and just pressed the button in
> the GUI; in that case what you see is not very meaningful. Do the
> coregistration properly (see the 3D source imaging chapter of the
> manual) and then see what you get.
>
> Best,
>
> Vladimir
>

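One quick way to see what SPM has actually stored is to plot the sensor positions read from the dataset; the name of the position field depends on the FieldTrip version bundled with your SPM (older versions use .pnt, newer ones .chanpos). The file name is a placeholder.

  % Plot the MEG sensor positions stored in the dataset (head coordinates)
  D    = spm_eeg_load('subject1_converted.mat');
  grad = sensors(D, 'MEG');

  if isfield(grad, 'chanpos')
      pos = grad.chanpos;     % newer FieldTrip convention
  else
      pos = grad.pnt;         % older FieldTrip convention
  end

  figure;
  plot3(pos(:,1), pos(:,2), pos(:,3), 'k.');
  axis equal; grid on;
  title('MEG sensor positions (head coordinates)');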

On Tue, Nov 4, 2008 at 6:52 PM, Jérôme Courtemanche  wrote:
>
>  I followed your steps (coregister and reslice to the EEG template,
> Fieldtrip, FT, FT). However, when I load the SPM8 file and go for
> coregistration, I am not asked about the headshape (as the manual says
> I should be). It goes straight to nas, lpa, rpa.
>
>  I would like to use the headshape if possible.
>
>
> Best regards,
>
> Jerome


On Tue, Nov 4, 2008 at 7:34 PM, Vladimir Litvak wrote:
> Do you actually have a headshape measurement? You would only have one
> if you've measured it with a Polhemus outside the scanner. If you have
> one, there is an explanation in the manual about how to use it (in the
> preprocessing chapter, in the part that explains 'Prepare'). If you
> don't, then the fiducials are all you have.
>
> Vladimir

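If you do have a Polhemus headshape, it is worth plotting it together with the digitised fiducials before loading it through 'Prepare', just to confirm everything is in the same coordinate frame. A minimal sketch, with made-up file name and fiducial coordinates:

  % Plot headshape points against the fiducials (all values hypothetical)
  hs  = load('subject1_headshape.txt');        % N x 3 matrix of points, in mm
  fid = [91 3 -48; 0 85 -41; 0 -85 -41];       % nas / lpa / rpa, in mm (placeholders)

  figure;
  plot3(hs(:,1), hs(:,2), hs(:,3), 'b.'); hold on
  plot3(fid(:,1), fid(:,2), fid(:,3), 'ro', 'MarkerFaceColor', 'r');
  axis equal; grid on;
  title('Headshape points and fiducials');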

On Thu, Nov 6, 2008 at 1:42 AM, Jérôme Courtemanche  wrote:
> Dear Dr Litvak,
>
> I could do a source reconstruction! Below are some
> questions; just answer what you can.
>
> 1) As you suggested, I checked out the 2D sensor position according to the
> CTF software, and it appears that SPM8 is right.
>
>
> 3) The matlab message after clicking forward model was:
>
> computing surface normals
> Warning: Matrix is close to singular or badly scaled.
>          Results may be inaccurate. RCOND = 1.088715e-017.
>> In forwinv\private\meg_ini>getcoeffs at 94
>   In forwinv\private\meg_ini at 36
>   In forwinv\private\prepare_vol_sens at 170
>   In forwinv_prepare_vol_sens at 11
>   In spm_eeg_inv_forward at 46
>   In spm_eeg_inv_forward_ui at 28
>   In spm_eeg_inv_imag_api>Forward_Callback at 88
>   In spm_eeg_inv_imag_api at 54
> Foward model complete - thank you
>
> 4) About filtering. MEG studies are largely exploratory, so I am not prone
> to use any filter. However, in the inversion we are forced to choose a filter.
> a) Is it mandatory for source reconstruction?
> b) If not, what should I comment-out exactly?
> c) Do you think it is wrong not to use filters in preprocessing?
>
> 5) here is a tough question about a simple ERF source analysis:
> ----------------------------------------------------------
>    a) Say my meg stimuli last from -100 to 3000ms and my wois are [190 to
> 230] & [2700 2750].
>
> I see 3 options:
> -invert the window [0 3000] and generate images on the wois [290 to 330] &
> [2700 2750]
> -invert the windows [290 to 330] & [2700 2750] and generate their images (not
> recommended in the manual)
> -invert shorter windows, e.g. [0 500] & [2500 3000], and generate the wois
> [290 to 330] & [2700 2750], respectively.
>
> Maybe there is no way yet to make an objective decision, but the results are
> very different (image attached). May I recall the fact that there is a
> limited number of dipoles in the solution but many processes involved in
> 3000ms.
>      b) A major sub-question: is there a minimum number of samples to be
> included in a window for a meaningful source reconstruction?
>
> 6) To be sure about analysis in sensor space. If I have:
> -"prepared" the file including the shape structure,
> -coregistered as seen in the attached file
> -returned to Prepare and clicked "project 3D" in the 2D projection tab.
> -saved it all
>  Will a generated image using mat-to-3D-images be in a common head position
> to allow meaningful group analysis?
>
> Gratefully,
>
> Jerome



On Thu, Nov 6, 2008 at 8:39 AM, Vladimir Litvak  wrote:
> Dear Jerome,
>
>> I could do a source reconstruction! You saved me. Below are some
>> questions; just answer what you can.
>>
>
> Congratulations!!! As far as I know you are the first external user of
> SPM8b who managed to get this far. I've actually just got an e-mail
> from another user who also successfully did source reconstruction so
> it's a tie.
>
>
>> 1) As you suggested, I checked out the 2D sensor position according to the
>> CTF software, and it appears that SPM8 is right.
>>
>
> Good.
>
>
>> 3) The matlab message after clicking forward model was:
>>
>> computing surface normals
>> Warning: Matrix is close to singular or badly scaled.
>>          Results may be inaccurate. RCOND = 1.088715e-017.
>>> In forwinv\private\meg_ini>getcoeffs at 94
>>   In forwinv\private\meg_ini at 36
>>   In forwinv\private\prepare_vol_sens at 170
>>   In forwinv_prepare_vol_sens at 11
>>   In spm_eeg_inv_forward at 46
>>   In spm_eeg_inv_forward_ui at 28
>>   In spm_eeg_inv_imag_api>Forward_Callback at 88
>>   In spm_eeg_inv_imag_api at 54
>> Foward model complete - thank you
>>
>
> That's normal. It has something to do with mesh vertices being too close.
>
>> 4) About filtering. MEG studies are largely exploratory, so I am not prone
>> to use any filter. However, in the inversion we are forced to choose a filter.
>> a) Is it mandatory for source reconstruction?
>> b) If not, what should I comment-out exactly?
>> c) Do you think it is wrong not to use filters in preprocessing?
>>
>
> You are only forced to choose a filter if you do a 'custom' inversion.
> In the standard inversion you are not. I think there should be an option
> not to filter at this stage. However, as we recently discussed with Prof.
> Friston, his inversion is only sensitive to changes from baseline and
> not to static topographies. Thus I don't think that filtering between
> 1 and 256 Hz will affect your results in most cases. The only good
> reason not to filter is when you have a sharp stimulus artefact and
> you don't want it to leak into other time frames.
>
>
>> 5) here is a tough question about a simple ERF source analysis:
>> ----------------------------------------------------------
>>    a) Say my meg stimuli last from -100 to 3000ms and my wois are [190 to
>> 230] & [2700 2750].
>>
>> I see 3 options:
>> -invert the window [0 3000] and generate images on the wois [290 to 330] &
>> [2700 2750]
>> -invert the windows [290 to 330] & [2700 2750] and generate their images (not
>> recommended in the manual)
>> -invert shorter windows, e.g. [0 500] & [2500 3000], and generate the wois
>> [290 to 330] & [2700 2750], respectively.
>>
>> Maybe there is no way yet to make an objective decision, but the results are
>> very different (image attached). May I recall the fact that there is a
>> limited number of dipoles in the solution but many processes involved in
>> 3000ms.
>>      b) A major sub-question: is there a minimum number of samples to be
>> included in a window for a meaningful source reconstruction?
>>
>
> I'm CCing this to Karl as this is already a level of discussion he
> might want to take part in. Remember that there is not much practical
> experience here either with these tools so my answers are not to be
> taken as the supreme truth. As I wrote in the manual I would think
> that you should invert the whole time period (option a). If you just
> invert sub-windows you can indeed get different results as the
> algorithm does not see all the sources well, but I'd predict that the
> results from option a will be better. The number of dipoles in the
> model is so huge that it's definitely not a limiting factor. I know
> that for the reason I mentioned above SPM would not be able to invert
> a single topography (unlike standard LORETA) but I don't know what the
> minimum is (perhaps Karl can tell us). In any case I don't think it's
> an interesting question as you shouldn't limit your time window.
>
>
>> 6) To be sure about analysis in sensor space. If I have:
>> -"prepared" the file including the shape structure,
>> -coregistered as seen in the attached file
>> -returned to Prepare and clicked "project 3D" in the 2D projection tab.
>> -saved it all
>>  Will a generated image using mat-to-3D-images be in a common head position
>> to allow meaningful group analysis?
>
> Here it's a matter of taste. The image will be in coordinates that are
> defined relative to the head. Thus in two subjects you might get the
> same sensor in different places. But perhaps that's actually the right
> thing, because the head was in different places so this sensor was
> above different brain areas. Since you export to images, and the images
> don't care about sensors, there is no problem doing statistics on this
> kind of data. Alternatively, you can use the same template for all
> subjects. You can, for instance, project one subject, save the result as
> a template and load it for all the others. That'll have exactly the
> opposite pros and cons. So I'd recommend you play around (if you
> have time) and see what works best for you. Also, the more you smooth
> the images the less important the difference will be, so you might want
> to use smoothing to make the activations coincide better.
>
>
> Best,
>
> Vladimir
>

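On the smoothing point above: the exported images are ordinary volumes, so they can be smoothed with spm_smooth before group statistics. A minimal sketch (directory, file pattern and kernel width are placeholders):

  % Smooth exported sensor- or source-level images before group statistics
  fwhm = [12 12 12];                                   % smoothing kernel (placeholder)
  imgs = spm_select('FPList', '/data/subject1/images', '^condition.*\.img$');

  for i = 1:size(imgs, 1)
      in = deblank(imgs(i, :));
      [pth, nam, ext] = spm_fileparts(in);
      spm_smooth(in, fullfile(pth, ['s' nam ext]), fwhm);   % writes an 's'-prefixed image
  end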

On Thu, Nov 6, 2008 at 8:58 AM, Vladimir Litvak wrote:
> One thing I forgot to mention. You might want to use group inversion
> (see the manual) to get results more suitable for group statistics.
>
> Vladimir


On Thu, Nov 6, 2008 at 12:22 PM, Karl Friston  wrote:
> Dear Jérôme,
>
>> 5) here is a tough question about a simple ERF source analysis:
>> ----------------------------------------------------------
>>    a) Say my meg stimuli last from -100 to 3000ms and my wois are [190 to
>> 230] & [2700 2750].
>>
>> I see 3 options:
>> -invert the window [0 3000] and generate images on the wois [290 to 330] &
>> [2700 2750]
>> -invert the windows [290 to 330] & [2700 2750] and generate their
>> images (not
>> recommended in the manual)
>> -invert shorter windows, e.g. [0 500] & [2500 3000], and generate the wois
>> [290 to 330] & [2700 2750], respectively.
>>
>> Maybe there is no way yet to make an objective decision, but the results
>> are
>> very different (image attached). May I recall the fact that there is a
>> limited number of dipoles in the solution but many processes involved in
>> 3000ms.
>>      b) A major sub-question: is there a minimum number of samples to be
>> included in a window for a meaningful source reconstruction?
>
> I think Vladimir is absolutely right, there is no correct answer.  The
> source reconstruction will be optimal for the data (window) used and this will
> change with the window.  Having said this, the precision of the estimates will
> increase with the amount of data and therefore the windows should not be too short.
> In principle, it would be possible to invert two time bins (giving one degree of freedom
> to differences in the fixed topography).  However, I would try to capture the transients of
> interest (i.e., capture the experimentally evoked variance).
>
> I hope this helps - Karl
>

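Putting Vladimir's and Karl's advice together, the corresponding script would invert the whole epoch once and then write one image per window of interest. The sketch below assumes the mesh, coregistration and forward model are already stored in D.inv{1}; the function and field names follow the SPM8 source imaging code, but the inverse and contrast settings shown here are assumptions and should be checked against your installation.

  % Invert the whole epoch once, then write an image for each WOI
  D = spm_eeg_load('subject1_forward.mat');    % dataset with a completed forward model (placeholder)
  D.val = 1;

  D.inv{1}.inverse.woi  = [0 3000];            % invert the whole time window (ms)
  D.inv{1}.inverse.type = 'GS';                % greedy-search MSP (assumed default)
  D = spm_eeg_invert(D);

  wois = [190 230; 2700 2750];                 % the two windows of interest discussed above
  for w = 1:size(wois, 1)
      D.inv{1}.contrast.woi  = wois(w, :);     % time window (ms)
      D.inv{1}.contrast.fboi = [];             % no frequency band of interest (assumed convention)
      D = spm_eeg_inv_results(D);
      D = spm_eeg_inv_Mesh2Voxels(D);          % interpolates the result into a volume image
  end
  save(D);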
On Fri, Nov 7, 2008 at 7:04 PM, Jérôme Courtemanche  wrote:
> Dear Dr Litvak,
>
> 1) Dr Friston and you seem to agree, so I will follow the advice to invert a
> woi of [0 3000].
>
> 2) I'd like to understand more about the 2D projection again. You say that
> the mat-to-3D images will be in coordinates that are defined relative to the
> head. I wonder, though, whether this is based only on a rigid transformation
> using the fiducials or headshape.
>
>      a) if I coregistered using my anatomical MRI and the 3D reconstruction
> dialog, does the calculation take any normalization into consideration? In
> principle, will a person with a small head placed exactly the same way
> under the sensors as a person with a big head get a similar 2D projection?
>                                                     &
>      b) if in the Prepare dialog I use my headshape to coregister to the
> EEG template, I see a stretching of my headshape. Should the mat-to-3D images
> be in coordinates calculated using the parameters of that stretching?
>
> Those questions are important, but in practice I didn't see any change in
> the mat-to-3D images (attached file). Note: I took care to save and to
> click Apply, as mentioned in the manual.
>
> Thanks again!
>
> Jerome


On Fri, Nov 7, 2008 at 8:45 PM, Vladimir Litvak wrote:
> Dear Jerome,
>
> In the MEG case, coregistration indeed does not affect the way sensors
> are projected to 2D, as you found empirically. The sensor positions are
> read from the raw dataset in 'head coordinates', i.e. their positions
> will be different if the head position at the beginning of a run is
> different. However, coregistration does not affect that. What MEG
> coregistration does is just bring the head volume model into the same
> coordinate system as the sensors. The sensors stay unchanged. So
> 'sensor-level analysis' is just what its name implies; it does not
> take into account any head model. It's different for EEG, as mentioned
> in the manual.
>
> Best,
>
> Vladimir

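This is easy to confirm on your own data: load the dataset before and after coregistration and compare the stored MEG sensor positions (file names are placeholders; the position field again depends on the bundled FieldTrip version).

  % Confirm that coregistration leaves the MEG sensors untouched
  D0 = spm_eeg_load('subject1_before_coreg.mat');
  D1 = spm_eeg_load('subject1_after_coreg.mat');

  g0 = sensors(D0, 'MEG');
  g1 = sensors(D1, 'MEG');

  if isfield(g0, 'chanpos'), p0 = g0.chanpos; p1 = g1.chanpos;
  else                       p0 = g0.pnt;     p1 = g1.pnt;
  end

  max(abs(p0(:) - p1(:)))    % should be zero: only the head model was moved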