Thanks Vladimir!

Unfortunately, the recording is 20 minutes long but the downsampling takes 1h45. I cannot epoch, as we are doing the time-frequency analysis before cutting the data. But yes, I want to avoid aliasing, so I guess it will be running during the lunch break.

Thank you for your fast and helpful answers.
Best
Jessica

On 5/28/2015 2:30 PM, Vladimir Litvak wrote:
10 GB and 214 channels at 10 kHz sounds like a lot to me, so I'd say 20 min is not bad. You can just run your pipeline overnight and work with the downsampled files in the morning. Also note that there is a way to epoch at conversion, so if you don't need all your data you could start with a smaller dataset from the beginning.

I usually specify something in a batch (like selecting a couple of files) and then save it as an m-file, to see an example of what is a cell and what is a string, etc.

Vladimir

On Thu, May 28, 2015 at 10:22 PM, Jessica Schrouff <[log in to unmask]> wrote:
Thank you for your help Vladimir and Christophe.

I used the spm_eeg_convert routine with the 'header' option to load the header, then made three conversions (one for the EEG channels, one for the photodiode, and one for the microphone). I was able to clear the input for the output filename in the batch. This is ECoG data, so the channel list is modified for each patient. Also, I am building a pipeline for my lab, so it is easier if they can change parameters in the batches instead of in a script or history.

It runs smoothly (although understanding which inputs should be cells of cells and which should only be cells was not straightforward). The downsampling is super slow though (10 kHz to 500 Hz, 214 channels, 20 minutes = 10 GB in the .dat file). Is there any way of improving this? How would using the 'decimate' function affect the signal compared to SPM's downsampling?
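For intuition on why the downsampling step cannot simply drop samples (and why both SPM's downsampling and MATLAB's decimate apply an anti-aliasing low-pass first, which is where the time goes), here is a small illustrative sketch in Python (not SPM code; the frequencies are chosen to match the 10 kHz to 500 Hz case). A 600 Hz component, which is above the new 250 Hz Nyquist, becomes indistinguishable from a 100 Hz component after naive subsampling:

```python
import math

fs = 10_000            # original sampling rate (Hz)
fs_new = 500           # target rate (Hz); new Nyquist is 250 Hz
factor = fs // fs_new  # keep every 20th sample
f_tone = 600           # tone above the new Nyquist

# One second of a 600 Hz tone sampled at 10 kHz
x = [math.sin(2 * math.pi * f_tone * n / fs) for n in range(fs)]

# Naive downsampling: keep every 20th sample, no low-pass filter
y = x[::factor]

# 600 mod 500 = 100 Hz, which lies below the new Nyquist, so the
# subsampled tone is sample-for-sample identical to a 100 Hz tone
alias = [math.sin(2 * math.pi * 100 * n / fs_new) for n in range(len(y))]
max_err = max(abs(a - b) for a, b in zip(y, alias))
print(f"max difference from a true 100 Hz tone: {max_err:.2e}")
```

So skipping the filter would be fast but would fold any signal (or noise) above 250 Hz back into the band of interest; any speed-up has to come from the filtering strategy, not from omitting it.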

Thank you,
Best,
Jessica


On 5/28/2015 2:16 AM, Vladimir Litvak wrote:
Dear Jessica,

I think you are over-complicating things. You could use the 'Load header' option in 'Prepare' GUI to read just the header information for your files and save a channel list that you could then use to convert just the channels you need (see p. 360 in the manual http://www.fil.ion.ucl.ac.uk/spm/doc/manual.pdf). Not sure why that list would be different every time if you record on the same system. I would expect that you'd only need to create it once.

If you specify a full path for the output file, SPM will use it. I'm not sure about <-X but you can just save your batch as an .m file and modify that field in your own code. You could also use a low-level script which is more convenient for preprocessing. This can be generated by saving the history from the reviewing tool.

Best,

Vladimir

On Thu, May 28, 2015 at 12:55 AM, Jessica Schrouff <[log in to unmask]> wrote:
Dear All,

I am playing with the EDF format from our new acquisition system. Unfortunately, we will have to sample the data as high as 10 kHz in some cases. So far, the data conversion is fine. However, the data file contains extra channels that need to be set aside before the data can be downsampled (e.g. microphone, photodiode).

As I don't know the names or indices of those channels, I am converting all the channels, then cloning the object for the EEG channels (based on the channel labels from the output file) and trying to fill this new DEEG object with only the EEG channels. I get an out-of-memory error at line 24 of subsref:

if this.montage.Mind == 0
    varargout = {double(subsref(this.data, subs))};
else


It appears that, because I don't have a montage, accessing and writing the data is not performed block by block (which I would expect to be the case, as the files are pretty big).

Apart from going block by block myself, is there any other way of doing this? For example, how could I access the channel names before file conversion, so as to get the indices of the EEG channels and avoid cloning the converted data? (Although this error should probably be corrected.)
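In case it helps anyone facing the same memory error, the block-by-block fallback is simple in principle. Here is a minimal illustrative sketch in Python (not SPM code; the function name, float32 sample format, and channel-interleaved layout are assumptions for illustration): it copies only the wanted channels from a large multiplexed binary file while holding a bounded number of samples in memory at a time.

```python
import array

def extract_channels(src, dst, n_chan, keep, block_samples=4096):
    """Copy only the channels in `keep` from a channel-interleaved
    float32 file, reading a fixed number of samples per block so
    memory use stays bounded regardless of file size."""
    itemsize = 4  # bytes per float32 sample
    with open(src, 'rb') as fin, open(dst, 'wb') as fout:
        while True:
            raw = fin.read(n_chan * block_samples * itemsize)
            if not raw:
                break
            block = array.array('f')
            block.frombytes(raw)
            out = array.array('f')
            n = len(block) // n_chan  # samples actually read in this block
            for s in range(n):
                base = s * n_chan
                out.extend(block[base + c] for c in keep)
            fout.write(out.tobytes())
```

The same pattern (read a block, subset the channels, append to the output file) is what one would hope the library does internally when no montage is set.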

Also, when converting, I would like to change the directory. I am doing a 'move' after the conversion, but it is slow because my dataset is big. If I change the 'output' name, will it use the path I put in it? And how can I have an <-X in the batch for later scripting in that case (as this field is empty by default)?

Thank you,
Best regards,
Jessica