Dear Vladimir, dear all,

On 11/15/2010 12:04 PM, Vladimir Litvak wrote:

> This sounds like something worth looking into. I can think about the following:
>
> 1) Make sure you really use exactly the same trials, just to rule out
> this factor.
I can try this in the near future and report the results, although the
sets of trials in the two cases should overlap almost completely.

> 2) If you use robust averaging, don't use it for this testing.

No, I was not using robust averaging.

> 3) Perhaps try several different estimation methods and see if this
> phenomenon is common to all of them.

That is a good point - so, thinking naively, I assumed that it might
have something to do with the fact that when you correct for baseline,
the waveform in the baseline period is much closer to its mean than the
rest of the trial, and that at the point where the baseline ends the
waveform is less constrained (not baseline-corrected); this transition
between the baseline period and the rest of the trial may enable some
higher frequencies to occur. Of course this might be naive, but I could
not find any rigorous argument to exclude this possibility.

So then I tried to set the baseline period to be the whole trial, that
is, from each trial I subtract its own mean (and not just the mean of a
prestimulus period). This way I could test whether what I wrote above
could be the reason. But when I compared the TF data from this (the
whole-trial baseline correction) to the data where no baseline
correction had been applied at all, I still found significant
differences (of the order of 10^-25 to 10^-24).
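
For what it is worth, here is the kind of toy check I have in mind
(plain MATLAB, made-up numbers, nothing from my actual pipeline):
subtracting a per-trial constant should change only the DC bin of the
spectrum, so a plain Fourier transform could not produce any
difference at 10 - 20 Hz.

 >> fs = 600;                          % sampling rate in Hz (made up)
 >> x  = 1e-13*randn(1, fs) + 5e-13;   % fake 1 s MEG trace with a DC offset
 >> Xraw = fft(x);
 >> Xbc  = fft(x - mean(x));           % whole-trial "baseline correction"
 >> max(abs(Xraw(2:end) - Xbc(2:end))) % ~0 up to rounding: non-DC bins agree
 >> abs(Xraw(1) - Xbc(1))              % only the DC bin differs

So whatever produces the 10 - 20 Hz differences must enter through the
estimation method itself (windowing, wavelets, etc.) or through the
numerics, not through the Fourier transform as such.
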
> 4) This might have something to do with numeric issues. The numbers
> for power in MEG become very small as you mentioned, smaller than
> Matlab's epsilon. I've been planning to start changing units to fT at
> conversion, but I'm waiting for Fieldtrip to provide better generic
> support for determining what the units are. Maybe you should try to
> multiply your data by 1e15 before computing TF. But then also change
> the units to 'fT' because otherwise you'll get really large numbers in
> the exported images.
>

I would also worry about numerical issues, mainly rounding / underflow
problems that might appear somewhere along the computational chain
(especially if matrix inversions are involved and the matrices become
close to singular). On my machine I get the following concerning
precision:

 >> eps(0)

ans =

   4.9407e-324

 >> eps(realmin)

ans =

   4.9407e-324

 >> realmin/eps(realmin)

ans =

   4.5036e+015

so at first glance it looks OK - but as I said before, these numbers
are no guarantee that numerical errors do not propagate / get amplified
along the computational chain.

I am still confused about these differences - I will test further
whether numerical errors (which seem the most plausible scenario) are
to blame.
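
In the meantime, a quick way to probe the underflow hypothesis
(following Vladimir's point 4) without touching SPM at all: the 1e15
scaling should cancel to machine precision if nothing underflows along
the way.

 >> x  = 1e-13*randn(1, 600);          % fake MEG amplitudes in Tesla
 >> P1 = abs(fft(x)).^2;               % power on raw data, around 1e-24
 >> P2 = abs(fft(x*1e15)).^2 * 1e-30;  % same computation on fT-scaled data
 >> max(abs(P1 - P2) ./ P1)            % ~eps if nothing underflows

The real TF chain is of course longer than a single FFT, so this is
only a sketch of the test; the same before/after comparison could be
run on the output of the actual TF pipeline.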

Any insights would be highly appreciated.

Thanks and best,
Panagiotis


> Best,
>
> Vladimir
>
>
>> Dear Panagiotis,
>>
>> On Sat, Nov 13, 2010 at 4:51 AM, Panagiotis Tsiatsis
>> <[log in to unmask]>  wrote:
>>> Hello dear Vladimir, hello dear all,
>>>
>>> Let me come back to this issue - first of all, I absolutely agree that
>>>
>>> The slow drifts will only affect the lowermost frequency bin (if it
>>> includes the DC) so baseline correction in the time domain does not
>>> rescale all the frequencies or anything of that sort.
>>>
>>>
>>>
>>> but the funny thing (and the main reason why I sent the previous e-mail)
>>> is that after processing the same data once with baseline correction and
>>> once without, the time-frequency analyses of the mean trials differ even
>>> at frequencies as high as 10 - 20 Hz, and this difference can be (at
>>> least) in the range (-2,2)*10^-25 (I calculated the contrast of the means
>>> of the TF data with and without baseline correction). This is one order
>>> of magnitude less than my strongest activations in the average TF
>>> (~4*10^-24) but comparable to the contrast values among conditions in TF.
>>> I understand that baseline correction affects the artifact rejection
>>> process as well, but to me the effect seems far from small and
>>> insignificant. I should also note that I have more than 150 trials per
>>> condition whether I apply baseline correction or not, and this number is
>>> really similar in each case (+-5 trials). The baseline duration that I
>>> used for testing was 100 ms.
>>>
>>> I would absolutely expect to see the very same thing that you wrote in your
>>> previous email - but this is not the case. Any intuitions?
>>> Thanks and best,
>>> P.
>> On 11/11/2010 6:09 PM, Vladimir Litvak wrote:
>>> Dear Panagiotis,
>>>
>>> On Thu, Nov 11, 2010 at 4:21 PM, Panagiotis Tsiatsis
>>> <[log in to unmask]>    wrote:
>>>>
>>>> Dear All,
>>>>
>>>> I've got a naive question concerning filtering and baseline correction
>>>> in MEG data. When applying a high-pass filter to the data, the
>>>> following message appears:
>>>>
>>>> 'Baseline correction is no longer done automatically by spm_eeg_filter.
>>>> Use spm_eeg_bc if necessary.'
>>>>
>>>> 1st Question: I suppose this means that the filtering function does
>>>> not subtract the mean of the trial / continuous file, that is, the
>>>> zeroth coefficient of the Fourier transform, right?
>>>>
>>> Yes, the filtering function used to subtract the baseline in SPM5 so
>>> that warning is there for historical reasons.
>>>
>>>> 2nd Question: Would it be necessary to apply baseline correction to
>>>> MEG data? That is, are there any DC component biases that might differ
>>>> across subjects, or "strong", very slow drifts in the recorded activity
>>>> across time? I guess it should be necessary for EEG data, where there
>>>> are amplifier offsets and slow conductance drifts, but I am not totally
>>>> sure whether this is the case for MEG recordings.
>>>>
>>>> 3rd Question: I am mainly asking the above questions because I want to
>>>> compare the difference in activity in the time-frequency domain among
>>>> conditions (differences in power across various frequency bands over
>>>> time), and I think that in one sense applying baseline correction in
>>>> the time domain and then transforming to the time-frequency domain
>>>> kind of normalizes the power of activity across the different frequency
>>>> bands according to the baseline, which might eventually smear out the
>>>> effect (differences in frequency amplitude over time) that I want to
>>>> see. In that sense I think that applying baseline correction or not is
>>>> a matter of what I want to test for (relative vs. absolute power
>>>> differences). The bottom-line question then would be whether or not it
>>>> is absolutely necessary to apply baseline correction to MEG (time /
>>>> time-frequency) data, because, for example, there could be DC biases
>>>> that differ between recordings.
>>>>
>>> There are slow drifts in the MEG that in most cases necessitate
>>> baseline correction or high-pass filtering if you want to look at
>>> ERFs. However, this is not relevant for your time-frequency analysis.
>>> The slow drifts will only affect the lowermost frequency bin (if it
>>> includes the DC) so baseline correction in the time domain does not
>>> rescale all the frequencies or anything of that sort. The only problem
>>> might be that large DC offsets in the data confuse some TF estimation
>>> methods so I'd at least subtract the baseline or the mean before doing
>>> TF.
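>>>
>>> e.g. something along these lines (a sketch - check the help of
>>> spm_eeg_bc for the exact input fields in your SPM version; S.D and
>>> S.timewin are my assumption here):
>>>
>>>    S = [];
>>>    S.D       = D;           % your epoched MEEG object
>>>    S.timewin = [-100 0];    % baseline window in ms
>>>    D = spm_eeg_bc(S);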
>>>
>>>> 4th Question (unrelated to the others): I know it would be
>>>> computationally extremely heavy, but is there a way to transform
>>>> continuous data into the time-frequency domain? It would be useful
>>>> because then, e.g., I would not have to apply the TF transform every
>>>> time I re-epoch the data, and I would have no "edge effects" when
>>>> converting single trials to TF. Plus, it would be helpful for
>>>> eyeballing spontaneous activity data.
>>>>
>>>>
>>> This is possible in principle but SPM functions will have great
>>> difficulties handling this kind of data. If you want to do it for 275
>>> MEG channels you'll have huge data arrays and can run into memory
>>> problems. So if you want to do it you need to write your own code
>>> possibly using Fieldtrip functions and only convert to SPM format once
>>> you extract your epochs. What you can do to avoid edge effects is to
>>> pad your epochs with extra data. There is now a function called
>>> spm_eeg_crop (I think it was added after the latest public release but
>>> I can send it to you) that you can use to later remove that padding
>>> from your TF dataset.
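>>>
>>> Schematically, the pad-then-crop idea looks like this (a generic
>>> MATLAB sketch with toy numbers, not the exact SPM calls; spectrogram
>>> is from the Signal Processing Toolbox):
>>>
>>>    fs = 600;                        % sampling rate (made up)
>>>    x  = randn(1, 2*fs);             % 1 s epoch + 0.5 s padding each side
>>>    [S, F, T] = spectrogram(x, 128, 120, 128, fs); % TF of padded epoch
>>>    keep  = T >= 0.5 & T <= 1.5;     % keep only the unpadded second
>>>    Scrop = S(:, keep);              % edge artefacts stay in the padding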
>>>
>>> Best,
>>>
>>> Vladimir
>>>
>>>> I would really appreciate your opinion on these matters. I know that they
>>>> might be really basic questions, but I still don't feel absolutely sure
>>>> about the answers.
>>>>
>>>> Thanks and best, and apologies for the long e-mail - I tried to explain
>>>> my
>>>> questions as clearly as I could.
>>>>
>>>> Panagiotis
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Panagiotis S. Tsiatsis
>>>> Max Planck Institute for Biological Cybernetics
>>>> Cognitive NeuroImaging Group
>>>> Tuebingen, Germany
>>>>
>>
>> --
>> Panagiotis S. Tsiatsis
>> Max Planck Institute for Biological Cybernetics
>> Cognitive NeuroImaging Group
>> Tuebingen, Germany
>>
>>


-- 
Panagiotis S. Tsiatsis
Max Planck Institute for Biological Cybernetics
Cognitive NeuroImaging Group
Tuebingen, Germany