On 17/02/10 17:54, Daniel O'Donovan wrote:
> Hi Justin,
>
> I'm afraid I'm still not clear on what sort of system and install you have. If you could give us that, and also the trace from your error, we might know more.
No, you are right: I am on a 32-bit Linux system with self-compiled
32-bit binaries.
>
> But, if I'm getting this right (correct me), you want to open a large spectrum file ( > 2GB) on a 32 bit computer.
You are right. My fault!
>
> (Computer science bit)
>
> When your OS opens the spectrum file, it notes its size and addresses positions within it using file offsets. On a 32-bit computer these offsets are signed 32-bit integers, with a maximum value of 2^31 - 1 (just under 2 GB). If a file is larger than this, any reference to the end of the file overflows the offset and causes a crash. A 2 GB file is larger than 2^31 - 1 bytes, but as you noticed a 1.9 GB file is not (no crash).
>
> You *can* get around this limitation by using 64-bit offsets, that is two 32-bit words stuck together, giving a signed maximum of 2^63 - 1 - much larger than we would ever need (in this decade)! There may be patches for this in the Linux kernel but it's very ugly. The obvious thing would be to switch to a native 64-bit machine, where file offsets are 64 bits wide by default.
>
That's what I understood from my reading. But isn't LFS (Large File
Support) implemented in both the Linux kernel and glibc? I even have
optional support for >2TB files in my kernel (CONFIG_LBDAF -- "Enable
block devices or files of size 2TB and larger.")!
> Now the crash that you're getting is because of this 32 bit limitation. However, this crash could occur in
> 1) the OS itself - in which case a large file patch to your kernel *could* work but I wouldn't want to be the one trying it!
See above.
> 2) In our memops C code, we could rewrite this to use 64-bit offsets ('long long's) - but this would be at the (minimal) expense of efficiency for small spec. files (and a lot of expense for Wayne) or
I was just asking a question; it wasn't a bug report. I just wondered
what happened.
> 3) in Python itself, in which case you would have to email the Python developers and explain explicitly what you want them to do.
Too much work for this ...
>
> Possible ways around this:
>
> 1) Slice your spectrum so that we have two smaller, more manageable files (but then two spectra and so two peak lists etc. etc.)
Mmmmh ...
> 2) Re-process your data at a lower resolution, or chop out empty parts of the spectrum (again reducing the file to < 2 GB)
I hit it with our new 900 MHz spectrometer. It also increases the
amount of data a lot at similar resolution in the indirect dimensions.
But with realistic processing parameters it still fits within the 2 GB
limit.
> 3) Get a 64 bit computer.
Funding is welcome ;)
>
> It's inconvenient, but I don't think that there will be an easy way to handle 2 GB files on 32-bit machines.
I have never used them, but I know that there are a couple of archive
programs, like 7zip, which have LFS.
>
> But then again, if this isn't the case we can only help if you send your machine details AND the stacktrace from the error.
It is dependent on my setup, so you cannot do anything. But thanks for
offering your help.
justin
--
Justin Lecher
Institute for Neuroscience and Biophysics
ISB 3 - Institute for structural biochemistry
Research Centre Juelich GmbH,
52425 Juelich,Germany
phone: +49 2461 61 5385