Evening,
I did some quick benchmarks on my home computer (Core i5 with 4 threads); 496^3 does not look too horrible:
FFTPack time for 512^3 in (s): 5.402000
FFTW planning time (FFTW_MEASURE) for 512^3 in (s): 0.962000
FFTW execution time (FFTW_MEASURE) for 512^3 in (s): 0.386000
FFTW planning time (FFTW_PATIENT) for 512^3 in (s): 29.558000
FFTW execution time (FFTW_PATIENT) for 512^3 in (s): 0.428000
FFTPack time for 496^3 in (s): 11.753000
FFTW planning time (FFTW_MEASURE) for 496^3 in (s): 0.909000
FFTW execution time (FFTW_MEASURE) for 496^3 in (s): 1.131000
FFTW planning time (FFTW_PATIENT) for 496^3 in (s): 19.574000
FFTW execution time (FFTW_PATIENT) for 496^3 in (s): 1.219000
The 512^3 box is only about 2-3x faster than the 496^3 box in these timings. That difference is far too small to turn a 3-minute job into a 4-hour one. Note also that 496^3 can be subdivided into equal blocks across 4 threads, but not across 15 threads (496 is divisible by 4 but not by 15).
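The prime-factor argument is easy to check directly. Here is a quick sketch (plain Python, nothing Relion- or FFTW-specific) showing why 496 is awkward for a mixed-radix FFT: its factorization contains the prime 31, for which FFTW has no small-radix codelet, whereas 512 is a pure power of two:

```python
def prime_factors(n):
    """Return the prime factorization of n as a list of primes."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(496, prime_factors(496))  # 496 = 2^4 * 31, large prime factor 31
print(512, prime_factors(512))  # 512 = 2^9, a pure power of two
```

For sizes containing a large prime, FFTW falls back on slower generic algorithms (Rader/Bluestein), which is one plausible reason the 496 box costs more per transform than its size alone suggests.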
On the higher planning levels (FFTW_MEASURE and FFTW_PATIENT), FFTW actually runs and times candidate decompositions on the processor to assess which arrangement of blocks to execute the FFT algorithm with. Sometimes this doesn't work well: I have seen, intermittently, that if the planning is run on a processor that is busy with other threads, it returns a terrible plan that then slows everything to a crawl until I manually go in and re-plan / delete the FFTW wisdom. Looking at the source, Relion seems to use FFTW_ESTIMATE, which is generally less aggressive and hence safer (I use FFTW_MEASURE myself). It looks like there's some debug output if you #define DEBUG_PLANS somewhere during compilation (in fftw.h, for example).
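Given the factorization issue above, one practical check (an illustrative sketch in plain Python, not part of Relion or FFTW) is to look for the nearest box size whose prime factors are all small radices that FFTW handles efficiently (2, 3, 5, 7):

```python
def is_fft_friendly(n, radices=(2, 3, 5, 7)):
    """True if n factors entirely into the given small FFT radices."""
    for p in radices:
        while n % p == 0:
            n //= p
    return n == 1

def next_friendly_size(n):
    """Smallest FFT-friendly size >= n."""
    while not is_fft_friendly(n):
        n += 1
    return n

print(next_friendly_size(496))  # 500 = 2^2 * 5^3
```

So padding from 496 up to 500 (or to 512, which is also divisible by many thread counts) would avoid the large-prime code path entirely.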
Robert
--
Robert McLeod, Ph.D.
Center for Cellular Imaging and Nano Analytics (C-CINA)
Biozentrum der Universität Basel
Mattenstrasse 26, 4058 Basel
Office: +41.061.387.3225
[log in to unmask]
________________________________________
From: Collaborative Computational Project in Electron cryo-Microscopy [[log in to unmask]] on behalf of Leo Sazanov [[log in to unmask]]
Sent: Sunday, January 17, 2016 6:40 PM
To: [log in to unmask]
Subject: Re: [ccpem] slow last iteration in autorefine
Dear Sjors,
Thank you - we tried various combinations before and the one with 2 MPIs
(15 threads each) per node gives the fastest "normal" iterations.
This setup also seems to be the best for the last iteration, although
it is difficult to be sure, as it still takes 2-4 days to run (and this
is with up to 15 nodes in total per job).
But do you think Robert McLeod's suggestion about a big prime number in
the decomposition of 496 might be correct?
This would actually be consistent with the fact that in the 3D
classification run, consecutive maximization iterations can follow a
pattern like this: 3 mins, 4 hours, 3 mins, 3 mins, 4 hours, etc.
And those taking long to run do have big prime number in the
decomposition of CurrentImageSize for the iteration.
Although there seems to be no strict dependence, as some iterations with
a big prime number in the decomposition of CurrentImageSize do run fast.
If that is right we will try 512 box size.
What do you think?
Leo
Prof. Leonid Sazanov
IST Austria
Am Campus 1
A-3400 Klosterneuburg
Austria
Phone: +43 2243 9000 3026
E-mail: [log in to unmask]
Web: https://ist.ac.at/research/life-sciences/sazanov-group/
On 17/01/2016 16:59, Sjors Scheres wrote:
> Dear Leo,
> If each MPI node takes 30Gb, you could run multiple MPI processes per
> node. Having 32 hyper-threaded cores, you could for example run 2 MPIs
> per node, each launching 16 threads. Perhaps 4 MPIs, each running 8
> threads, may run a bit faster. Then, you could scale up by using as many
> nodes as you have in your cluster. If you have say 10 of those nodes, then
> it shouldn't take 3 days for a single iteration.
> HTH,
> Sjors
>
>
>> Dear all,
>>
>> We are still struggling with this - it is very frustrating that with a
>> 496-pixel box the last maximization iteration in autorefine takes 2-4 days
>> (and apparently nothing happens during this time, no progress output,
>> though CPUs are used).
>> We have plenty of CPUs (usually we use ~17 MPIs with 15 threads = 255
>> threads per job) and memory (128 GB per node with 32 hyper-threaded
>> cores), so there is no swapping to disk. Memory requested by Relion in the
>> last iteration is about 30GB.
>>
>> I wonder if people could share their examples of how long this iteration
>> takes on their set-up, especially with large box of about 500 pixels?
>> And whether anybody resolved similar problem?
>>
>> Many thanks!
>>
>>
>> Hi Leo,
>> It also puts all pixels up to Nyquist back into the 3D transform, so it will
>> cost more CPU than the other iterations.
>> HTH
>> Sjors
>>
>>
>>> Hi, still an important question for us -
>>> It does not look like overall I/O cluster load is a big issue and memory
>>> also is not an issue.
>>> What else can be done to speed up the last iteration in 3D autorefine
>>> (496 box, 128 GB memory per node)?
>>> Now it takes up to several days, so we really want to do something about it.
>>> Apart from using more memory per image, what else is different about the
>>> last 3D autorefine operation so that it is so slow?
>>>
>>> Many thanks!
>>>
>>>
>>>
>>> On our cluster we started to get exceedingly long times for the last
>>> iteration in 3D autorefine (with large box). There is definitely enough
>>> RAM so there is no swapping. Previously the same jobs ran about 10X
>>> faster
>>> on our cluster, so I wonder if the problem is in general I/O bottlenecks
>>> in the cluster.
>>> Is there a lot of particle-image reading in the final maximisation step
>>> (it takes up to a day now)?
>>> Thanks!
>>>
>