Hi Joao,
Don't coregister (and write) EPIs to high-resolution T1s. Bad idea, as all of them will then assume the T1 resolution, which is usually much higher.
Why do you need EPIs at such a high resolution? Upsampling them does not add any information; you just spread the same information over more voxels. And why write them at all? Registering alone is often enough (assuming a further spatial step will follow in which you can reslice).
I assume below that you want your EPIs to end up in MNI space (i.e. normalize them). For a native-space analysis you might need a different approach from the one outlined below.
In a nutshell: realign the EPIs (estimate only, no write) and create a mean image. Coregister (without write) your T1 to that mean EPI (or the equivalent for ASL). Then normalize the T1 and apply the resulting parameters to all EPIs, writing them during normalization at something close to the native EPI resolution (usually around 3x3x3 mm at 3T).
This way you will end up with essentially the same result, but with 3x3x3 = 27 times less data than writing at a 1x1x1 mm T1-like resolution.
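A quick back-of-the-envelope sketch of where the factor of 27 (and the ~1 TB) comes from. The 174 scans and 100 subjects are from the thread; the field of view and the 4-bytes-per-voxel (float32) datatype are illustrative assumptions only, so plug in your own numbers:

```python
# Illustrative assumptions (NOT values from the thread, apart from 174 x 100):
fov_mm = (256, 256, 176)  # hypothetical field of view in mm
n_volumes = 174           # scans per subject (from the thread)
n_subjects = 100          # subjects (from the thread)
bytes_per_voxel = 4       # float32; int16 (2 bytes) would halve everything

def dataset_bytes(voxel_mm):
    """Total bytes for the whole study when every volume is written
    at an isotropic voxel size of voxel_mm millimetres."""
    voxels = 1
    for extent in fov_mm:
        voxels *= extent // voxel_mm  # voxels along this axis
    return voxels * bytes_per_voxel * n_volumes * n_subjects

at_t1_res = dataset_bytes(1)   # resliced to 1 mm isotropic (T1-like)
at_epi_res = dataset_bytes(3)  # written at 3 mm isotropic (EPI-like)

print(f"1 mm: {at_t1_res / 1e12:.2f} TB")            # 1 mm: 0.80 TB
print(f"3 mm: {at_epi_res / 1e9:.1f} GB")            # 3 mm: 29.2 GB
print(f"ratio: ~{at_t1_res / at_epi_res:.0f}x")      # ratio: ~28x
```

With these assumed numbers the 1 mm reslicing lands near Joao's ~1 TB, while writing at 3 mm stays under 30 GB; the ratio is roughly (3/1)^3 = 27, slightly more here only because the field of view does not divide evenly by 3.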
Good luck,
Bas
--------------------------------------------------
Dr. S.F.W. Neggers
Division of Brain Research
Rudolf Magnus Institute for Neuroscience
Utrecht University Medical Center
Visiting : Heidelberglaan 100, 3584 CX Utrecht
Room B.01.1.03
Mail : Huispost B01.206, P.O. Box 85500
3508 GA Utrecht, the Netherlands
Tel : +31 (0)88 7559609
Fax : +31 (0)88 7555443
E-mail : [log in to unmask]
Web : http://www.neuromri.nl/people/bas-neggers
: http://www.brainsciencetools.com (CEO)
--------------------------------------------------
________________________________________
From: SPM (Statistical Parametric Mapping) [[log in to unmask]] on behalf of Joao Pereira [[log in to unmask]]
Sent: Thursday, August 09, 2012 8:07 PM
To: [log in to unmask]
Subject: [SPM] Too many large files in fmri study
Dear SPMers,
I am processing quite a large number of fMRI ASL and BOLD subjects, and I have run into an issue with the amount of data I am generating. With over 100 subjects, I need almost 1 TB of disk space (174 scans per subject). The culprit, I've found, is the coregistration of the fMRI data to the structural scan: this step outputs 174 registered scans for each study, all with the voxel size (and hence file size) of the T1 image.
I cannot find a way to remove this step, as I will eventually need to apply the unified segmentation seg.mat file to bring all scans to MNI space, which is necessary for level 2 analyses. Are there any tricks out there to sort this out? I can't hold 1 TB+ of data for this study alone...
Thank you very much!
Best,
Joao
------------------------------------------------------------------------------
This message may contain confidential information and is intended exclusively
for the addressee. If you receive this message unintentionally, please do not
use the contents but notify the sender immediately by return e-mail. University
Medical Center Utrecht is a legal person by public law and is registered at
the Chamber of Commerce for Midden-Nederland under no. 30244197.
Please consider the environment before printing this e-mail.