Hi all,
Actually, I made a dumb calc when I sent the file size; all those
copes should add up to slightly more than 1.2 GB. But I'll happily
try the suggestion below, since that will probably solve my issue
for the moment. Thanks very much for the explicit instructions.
Even though I gave the wrong information the other day, I think this
memory issue might still be of interest to FSL developers. One reason
is that I tried an experiment: I pulled an extra GB of physical RAM
from another G5 and added it to the 1 GB on my box, but I still got
the allocation error. I could push only 66 subjects (= 330 copes)
through, which means that doubling the RAM didn't allow me to add a
single subject to the analysis.
Here's the basic info; someone else can multiply it properly!

What I need to run:
  72 subjects
  5 copes per subject
  3.4 MB per cope

What I can run (with 1 or 2 GB of RAM):
  66 subjects
  5 copes per subject
  3.4 MB per cope
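(For the record, that works out to 72 x 5 x 3.4 MB = 1224 MB, i.e. just
over 1.2 GB, while the 66-subject set that does run is 330 x 3.4 MB =
1122 MB.)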
I tried increasing the number of subjects one at a time; adding more
physical RAM did not allow me to merge even one additional subject.
When I added the additional subject and checked physical memory usage,
it did not increase at all. (The machine was able to access the new
RAM.)
Maybe all this is an OS issue rather than a development issue,
but I thought I'd at least set the record straight.
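
In case it helps anyone else running into the same limit, here is
roughly how I plan to script steps 3-5 from Steve's message below. This
is an untested sketch: the /data/group paths and sub??.feat naming are
placeholders for my own layout, I'm assuming 2mm standard space
(91 x 109 x 91 voxels), and the avwroi / avwmerge / easythresh
arguments are from memory, so they should be checked against the usage
messages before anyone trusts them.

  #!/bin/sh
  FEATDIRS="/data/group/sub??.feat"    # placeholder for my lower-level FEAT dirs

  # step 3: make standard-space copies of the lower-level stats,
  # then set them aside as reg_standard_orig
  for featdir in $FEATDIRS ; do
      featregapply $featdir
      mv $featdir/reg_standard $featdir/reg_standard_orig
  done

  # step 4: run the saved higher-level design one slice at a time
  nslices=91                           # assuming 2mm standard space
  z=0
  while [ $z -lt $nslices ] ; do
      for featdir in $FEATDIRS ; do
          # rebuild reg_standard as a one-slice copy of reg_standard_orig
          rm -rf $featdir/reg_standard
          cp -r $featdir/reg_standard_orig $featdir/reg_standard
          for img in `find $featdir/reg_standard -name "*.img"` ; do
              base=`echo $img | sed 's/\.img$//'`
              # argument order is my guess: <in> <out> xmin xsize ymin ysize zmin zsize
              avwroi $base $base 0 91 0 109 $z 1
          done
      done
      # feat.tcl has had its featregapply calls commented out (step 1),
      # so this just fits the group model on the one-slice inputs
      feat /data/group/design.fsf
      mv /data/group/group.gfeat /data/group/slice$z.gfeat   # output name set in design.fsf
      z=`expr $z + 1`
  done

  # step 5: stack the per-slice zstats back up in z and threshold
  zlist=""
  z=0
  while [ $z -lt $nslices ] ; do
      zlist="$zlist /data/group/slice$z.gfeat/cope1.feat/stats/zstat1"
      z=`expr $z + 1`
  done
  # -z is my guess for z-direction concatenation (FLAME uses -T for time)
  avwmerge -z /data/group/zstat1_grp $zlist
  easythresh /data/group/zstat1_grp /data/group/mask 2.3 0.05 \
      /data/group/standard grp_zstat1

The per-slice loop rebuilds reg_standard from reg_standard_orig on every
pass, so the full standard-space copies never get clobbered, but it does
mean a lot of copying; I'll report back on whether it actually fits in
memory.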
Thanks,
Paige
On Mon, 23 Aug 2004, Stephen Smith wrote:
> I think that's right - you are probably running into max process size
> limits - this is a huge amount of data to do at group level.
>
> I think you'll need to split the data up and process it a slice at a
> time... to do this, the following should work:
>
> 1. edit fsl/tcl/feat.tcl and comment out _all_ the calls to "featregapply"
>
> 2. set up the higher-level FEAT design fully. don't run it, just save the
> design.fsf. don't bother using full flame for this high number of inputs,
> just use flame1 or even just OLS.
>
> 3. script a loop to:
> - run featregapply on all the lower-level FEAT directories
> - rename all the reg_standard subdirectories to reg_standard_orig
>
> 4. script a loop to:
> foreach slice N in standard space:
> - create a copy of reg_standard_orig called reg_standard that just
> contains the slice N for each image in reg_standard_orig
> - run "feat design.fsf" (with the saved design) and rename the output
> to sliceN.gfeat
>
> 5. concatenate the sliceN.gfeat/cope1.feat/stats/zstat1.hdr (etc) in the Z
> direction and use easythresh to threshold
>
>
> Good luck!
>
>
>
>
> On Mon, 23 Aug 2004, Christopher Bailey wrote:
>
> > Hi Paige,
> >
> > I'm not an expert, but I have a feeling you might indeed be running into
> > a memory-related, 32-bit processing limit! You say avwmerge should
> > create a file < 10GB. I don't think any 32-bit program can access such a
> > large memory block (2 or 4 GB limit). I know in Linux it's possible to
> > compile the kernel with up to 64 GB memory allocation, but I think this
> > is a curiosity. I couldn't imagine a vanilla Mac OS being able to do it
> > either, though I know even less about Macs than I do about 32-bit
> > processing...
> >
> > Perhaps some of the more experienced members on the list have a
> > suggestion about alternative analysis setups (which likely depends on
> > what exactly you're setting up).
> >
> > -Chris
> >
> > --
> > Christopher Bailey <[log in to unmask]>
> > MSc (engineering physics & maths)
> > Center for Functionally Integrative Neuroscience
> > Aarhus University Hospital, Denmark
> > http://www.cfin.au.dk/
> >
> >
> > On Sat, 2004-08-21 at 17:33, Paige Scalf wrote:
> > > Hi all,
> > >
> > > I know the RAM/swap issue has been addressed on this
> > > list before; I'm getting the "can't allocate region" messages even
> > > though I should have plenty of available swap.
> > >
> > > I'm trying to run FLAME on a set of 360 copes from lower-level
> > > analyses. I'm using a dual G5 with one GB of RAM, and I have
> > > 70 GB free on the root drive. OS is some variant of 10.3. I think
> > > the G5 should be able to automatically allocate up to 64GB of
> > > swap, all of which should be available to it on my system.
> > >
> > > I'm getting the error message after FLAME calls avwmerge -T across
> > > the registered copes and varcopes. This should create a file of
> > > <10 GB.
> > >
> > > Any idea why I'm getting this error? Is this reflecting an upper
> > > limit on some FSL routine, or is there some memory
> > > allocation problem that I don't understand?
> > >
> > > Thanks for your help.
> > > Paige
> >
>
> Stephen M. Smith DPhil
> Associate Director, FMRIB and Analysis Research Coordinator
>
> Oxford University Centre for Functional MRI of the Brain
> John Radcliffe Hospital, Headington, Oxford OX3 9DU, UK
> +44 (0) 1865 222726 (fax 222717)
>
> [log in to unmask] http://www.fmrib.ox.ac.uk/~steve
>