I think that's right - you are probably running into max process size
limits - this is a huge amount of data to process at group level.
I think you'll need to split the data up and process it a slice at a
time. To do this, the following should work:
1. edit fsl/tcl/feat.tcl and comment out _all_ the calls to "featregapply"
2. set up the higher-level FEAT design fully. don't run it, just save the
design.fsf. don't bother using full FLAME for this high number of inputs,
just flame1 or even just OLS.
3. script a loop to:
- run featregapply on all the lower-level FEAT directories
- rename all the reg_standard subdirectories to reg_standard_orig
4. script a loop to:
foreach slice N in standard space:
- create a copy of reg_standard_orig called reg_standard that just
contains the slice N for each image in reg_standard_orig
- run "feat design.fsf" (with the saved design) and rename the output
to sliceN.gfeat
5. concatenate the sliceN.gfeat/cope1.feat/stats/zstat1.hdr (etc) in the Z
direction and use easythresh to threshold
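The loops in steps 3-5 above could be sketched roughly as below. This is only an outline, not a tested script: the FEAT directory pattern (`sub*.feat`), the standard-space dimensions, the output directory name produced by `feat design.fsf`, and the exact arguments to avwroi/avwmerge/easythresh are all assumptions you'd need to check against your own setup and the FSL docs. It also assumes the images of interest sit at the top level of reg_standard_orig as .hdr/.img pairs; extend the glob if yours live in a stats/ subdirectory.

```shell
#!/bin/sh
# Rough sketch of steps 3-5; skips itself gracefully if FSL isn't on the PATH.
if command -v featregapply >/dev/null 2>&1 ; then

    FEATDIRS="sub*.feat"     # ASSUMED pattern for the lower-level FEAT dirs
    NSLICES=91               # ASSUMED standard-space dimensions --
    XSIZE=91 ; YSIZE=109     # check with avwinfo on one reg_standard image

    # Step 3: resample lower-level stats into standard space, then set
    # the full-size copies aside.
    for d in $FEATDIRS ; do
        featregapply $d
        mv $d/reg_standard $d/reg_standard_orig
    done

    # Step 4: run the saved group design one slice at a time.
    n=0
    while [ $n -lt $NSLICES ] ; do
        nn=`printf %03d $n`      # zero-pad so slice*.gfeat sorts correctly
        for d in $FEATDIRS ; do
            mkdir -p $d/reg_standard
            for f in $d/reg_standard_orig/*.hdr ; do
                base=`basename $f .hdr`
                # extract slice n only
                # (avwroi <in> <out> xmin xsize ymin ysize zmin zsize)
                avwroi $d/reg_standard_orig/$base $d/reg_standard/$base \
                       0 $XSIZE 0 $YSIZE $n 1
            done
        done
        feat design.fsf
        mv group.gfeat slice$nn.gfeat   # ASSUMED output name from design.fsf
        for d in $FEATDIRS ; do rm -rf $d/reg_standard ; done
        n=`expr $n + 1`
    done

    # Step 5: stack the per-slice results back up in Z and threshold,
    # e.g. on the zstat1 images (repeat for the other stats you need):
    avwmerge -Z zstat1_whole slice*.gfeat/cope1.feat/stats/zstat1.hdr
    # then something like:
    #   easythresh zstat1_whole <mask> 2.3 0.05 <bg_image> grot
fi
```

The zero-padded slice names matter in step 5, since a plain `slice*.gfeat` glob would otherwise sort slice10 before slice2.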
Good luck!
On Mon, 23 Aug 2004, Christopher Bailey wrote:
> Hi Page,
>
> I'm not an expert, but I have a feeling you might indeed be running into
> a memory-related, 32-bit processing limit! You say avwmerge should
> create a file < 10GB. I don't think any 32-bit program can access such a
> large memory block (2 or 4 GB limit). I know in linux it's possible to
> compile the kernel with up to 64GB memory allocation, but I think this
> is a curiosity. I couldn't imagine a vanilla mac OS being able to do it
> either, though I know even less about mac than I do about 32-bit
> processing...
>
> Perhaps some of the more experienced members on the list have a
> suggestion about alternative analysis setups (which likely depends on
> what exactly you're setting up).
>
> -Chris
>
> --
> Christopher Bailey <[log in to unmask]>
> MSc (engineering physics & maths)
> Center for Functionally Integrative Neuroscience
> Aarhus University Hospital, Denmark
> http://www.cfin.au.dk/
>
>
> On Sat, 2004-08-21 at 17:33, Paige Scalf wrote:
> > Hi all,
> >
> > I know the RAM/swap issue has been addressed on this
> > list before; I'm getting the "can't allocate region" messages even
> > though I should have plenty of available swap.
> >
> > I'm trying to run flame on a set of 360 copes from lower level
> > analysis. I'm using a dual G5 with one GB of RAM, and I have
> > 70 GB free on the root drive. OS is some variant of 10.3. I think
> > the G5 should be able to automatically allocate up to 64GB of
> > swap, all of which should be available to it on my system.
> >
> > I'm getting the error message after FLAME calls avwmerge -T across
> > the registered copes and varcopes. This should create a file of
> > <10 GB.
> >
> > Any idea why I'm getting this error? Is this reflecting an upper
> > limit on some FSL routine, or is there some memory
> > allocation problem that I don't understand?
> >
> > Thanks for your help.
> > Paige
>
Stephen M. Smith DPhil
Associate Director, FMRIB and Analysis Research Coordinator
Oxford University Centre for Functional MRI of the Brain
John Radcliffe Hospital, Headington, Oxford OX3 9DU, UK
+44 (0) 1865 222726 (fax 222717)
[log in to unmask] http://www.fmrib.ox.ac.uk/~steve