CCPEM Archives
CCPEM@JISCMAIL.AC.UK
January 2016

Subject: Re: slow last iteration in autorefine
From: Robert McLeod <[log in to unmask]>
Reply-To: Robert McLeod <[log in to unmask]>
Date: Sun, 17 Jan 2016 22:37:29 +0000
Content-Type: text/plain

Evening,

I did some quick benchmarks on my home computer (Core i5 with 4 threads); it seems 496x496x496 is not too horrible:

FFTPack time for 512^3 in (s): 5.402000
FFTW planning time (FFTW_MEASURE) for 512^3 in (s): 0.962000
FFTW execution time (FFTW_MEASURE) for 512^3 in (s): 0.386000
FFTW planning time (FFTW_PATIENT) for 512^3 in (s): 29.558000
FFTW execution time (FFTW_PATIENT) for 512^3 in (s): 0.428000

FFTPack time for 496^3 in (s): 11.753000
FFTW planning time (FFTW_MEASURE) for 496^3 in (s): 0.909000
FFTW execution time (FFTW_MEASURE) for 496^3 in (s): 1.131000
FFTW planning time (FFTW_PATIENT) for 496^3 in (s): 19.574000
FFTW execution time (FFTW_PATIENT) for 496^3 in (s): 1.219000

The 512x512x512 box is only about two to three times faster than the 496x496x496 box, which is far too small a difference to turn a 3-minute job into a 4-hour one. Note also that 496^3 can be subdivided into equal blocks across 4 threads, but not across 15 threads.
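For anyone who wants to reproduce the FFTW half of the numbers above, a minimal C++ sketch along the following lines should do it. This is not the benchmark code actually used here: the single-precision r2c transform, the planner flags timed, and the build line are all assumptions, and the FFTPack side of the comparison is not covered.

// fftw_bench.cpp -- time FFTW planning and execution for 496^3 vs 512^3.
// Build (assuming the single-precision FFTW3 library is installed):
//   g++ -O2 fftw_bench.cpp -lfftw3f -o fftw_bench
// Note: each box size needs roughly 1 GB of RAM for the real and complex arrays.
#include <chrono>
#include <cstdio>
#include <fftw3.h>

static double seconds_since(std::chrono::steady_clock::time_point t0) {
    return std::chrono::duration<double>(std::chrono::steady_clock::now() - t0).count();
}

int main() {
    const int sizes[] = {496, 512};
    const unsigned flags[] = {FFTW_ESTIMATE, FFTW_MEASURE};  // add FFTW_PATIENT if you can wait
    const char* flag_names[] = {"FFTW_ESTIMATE", "FFTW_MEASURE"};

    for (int n : sizes) {
        float* in = fftwf_alloc_real(static_cast<size_t>(n) * n * n);
        fftwf_complex* out = fftwf_alloc_complex(static_cast<size_t>(n) * n * (n / 2 + 1));

        for (int f = 0; f < 2; ++f) {
            auto t0 = std::chrono::steady_clock::now();
            fftwf_plan p = fftwf_plan_dft_r2c_3d(n, n, n, in, out, flags[f]);
            double t_plan = seconds_since(t0);

            // Planning with FFTW_MEASURE may clobber the arrays, so fill the input afterwards.
            for (size_t i = 0; i < static_cast<size_t>(n) * n * n; ++i) in[i] = 0.0f;

            t0 = std::chrono::steady_clock::now();
            fftwf_execute(p);
            double t_exec = seconds_since(t0);

            std::printf("%d^3  %-13s  plan %.3f s   execute %.3f s\n",
                        n, flag_names[f], t_plan, t_exec);
            fftwf_destroy_plan(p);
        }
        fftwf_free(in);
        fftwf_free(out);
    }
    return 0;
}

Timings will of course vary with the CPU and the FFTW build.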

At the higher planning levels (MEASURE and PATIENT), FFTW actually times candidate plans on the processor to decide which arrangement of blocks to use to execute the FFT. Sometimes this doesn't work well: I have seen, intermittently, that if the planning is run on a processor that is busy with other threads, it returns a terrible plan that then slows everything to a crawl until I manually go in and re-plan / delete the FFTW wisdom. Looking at the source, Relion seems to use FFTW_ESTIMATE, which is generally less aggressive and hence safer (but I use FFTW_MEASURE). It looks like there's some debug output if you set #define DEBUG_PLANS somewhere during compilation (in fftw.h for example).
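On the wisdom point: FFTW caches measured plans as "wisdom", so a bad plan created on a loaded machine can persist until that wisdom is discarded. A minimal sketch of importing, exporting, and forgetting single-precision wisdom with the plain FFTW C API follows; the filename is just an example, and this is generic FFTW usage rather than anything RELION-specific (RELION uses FFTW_ESTIMATE, as noted above).

// wisdom.cpp -- save, reload, and discard FFTW single-precision wisdom.
// Build: g++ -O2 wisdom.cpp -lfftw3f -o wisdom
#include <cstdio>
#include <fftw3.h>

int main() {
    const char* wisdom_file = "fftwf_wisdom.dat";  // example path, not a RELION convention

    // Reuse previously measured plans if a wisdom file exists.
    if (fftwf_import_wisdom_from_filename(wisdom_file))
        std::printf("imported wisdom from %s\n", wisdom_file);

    // ... create and use plans with FFTW_MEASURE / FFTW_PATIENT here ...

    // Persist whatever was learned in this run.
    if (fftwf_export_wisdom_to_filename(wisdom_file))
        std::printf("exported wisdom to %s\n", wisdom_file);

    // If a plan made on a busy machine turns out to be terrible:
    // drop the in-memory wisdom (and delete the file) and re-plan.
    fftwf_forget_wisdom();
    return 0;
}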

Robert

--
Robert McLeod, Ph.D.
Center for Cellular Imaging and Nano Analytics (C-CINA)
Biozentrum der Universität Basel
Mattenstrasse 26, 4058 Basel
Office: +41.061.387.3225
[log in to unmask]

________________________________________
From: Collaborative Computational Project in Electron cryo-Microscopy [[log in to unmask]] on behalf of Leo Sazanov [[log in to unmask]]
Sent: Sunday, January 17, 2016 6:40 PM
To: [log in to unmask]
Subject: Re: [ccpem] slow last iteration in autorefine

Dear Sjors,

Thank you - we tried various combinations before, and the one with 2 MPIs
(15 threads each) per node gives the fastest "normal" iterations.
This setup also seems to be the best for the last iteration, although
it is difficult to be sure, as it still takes 2-4 days to run (and this
is with up to 15 nodes in total per job).

But do you think Robert McLeod's suggestion about a large prime number in
the decomposition of 496 might be correct?
This would actually be consistent with the fact that in the 3D
classification run, consecutive maximization iterations can follow a
pattern like: 3 mins, 4 hours, 3 mins, 3 mins, 4 hours, etc.
The iterations that take long do have a large prime number in the
decomposition of CurrentImageSize for that iteration, although the
dependence is not strict, as some iterations with a large prime number in
the decomposition of CurrentImageSize still run fast.
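(For reference, 496 = 2^4 x 31, so its largest prime factor is 31, whereas 512 = 2^9. A small C++ sketch for checking the factorisation of box or CurrentImageSize values is given below; it is a hypothetical diagnostic helper, not part of RELION.)

// factor_box.cpp -- print the prime factorisation of candidate box sizes,
// to spot sizes whose largest prime factor is big (e.g. 496 = 2^4 * 31).
#include <cstdio>
#include <vector>

static std::vector<int> prime_factors(int n) {
    std::vector<int> factors;
    for (int p = 2; p * p <= n; ++p)
        while (n % p == 0) { factors.push_back(p); n /= p; }
    if (n > 1) factors.push_back(n);
    return factors;
}

int main() {
    const int sizes[] = {496, 500, 512};  // box / CurrentImageSize values to check
    for (int n : sizes) {
        std::vector<int> f = prime_factors(n);
        std::printf("%d =", n);
        for (int p : f) std::printf(" %d", p);
        std::printf("   (largest prime factor: %d)\n", f.back());
    }
    return 0;
}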

If that is right, we will try a 512 box size.
What do you think?
Leo



Prof. Leonid Sazanov
IST Austria
Am Campus 1
A-3400 Klosterneuburg
Austria

Phone: +43 2243 9000 3026
E-mail: [log in to unmask]
Web: https://ist.ac.at/research/life-sciences/sazanov-group/

On 17/01/2016 16:59, Sjors Scheres wrote:
> Dear Leo,
> If each MPI process takes 30 GB, you could run multiple MPI processes per
> node. Having 32 hyper-threaded cores, you could for example run 2 MPIs
> per node, each launching 16 threads. Perhaps 4 MPIs, each running 8
> threads, may run a bit faster. Then you could scale up by using as many
> nodes as you have in your cluster. If you have, say, 10 of those nodes, it
> shouldn't take 3 days for a single iteration.
> HTH,
> Sjors
>
>
>> Dear all,
>>
>> We are still struggling with this - it is very frustrating that with a
>> 496-pixel box the last maximization iteration in autorefine takes 2-4 days
>> (and apparently nothing happens during this time: no progress output,
>> though CPUs are used).
>> We have plenty of CPUs (usually we use ~17 MPIs with 15 threads = 255
>> threads per job) and memory (128 GB per node with 32 hyper-threaded
>> cores), so there is no swapping to disk. Memory requested by Relion in the
>> last iteration is about 30GB.
>>
>> I wonder if people could share examples of how long this iteration
>> takes on their set-up, especially with a large box of about 500 pixels,
>> and whether anybody has resolved a similar problem?
>>
>> Many thanks!
>>
>>
>>> Hi Leo,
>> It also puts pixels up to Nyquist back into the 3D transform, so it will
>> cost more CPU than the other iterations.
>> HTH
>> Sjors
>>
>>
>>> Hi, still an important question for us -
>>> It does not look like overall cluster I/O load is a big issue, and memory
>>> is not an issue either.
>>> What else can be done to speed up the last iteration in 3D autorefine
>>> (496 box, 128 GB memory per node)?
>>> It now takes up to several days, so we really want to do something about
>>> it.
>>> Apart from using more memory per image, what else is different about the
>>> last 3D autorefine operation that makes it so slow?
>>>
>>> Many thanks!
>>>
>>>
>>>
>>> On our cluster we started to get exceedingly long times for the last
>>> iteration in 3D autorefine (with a large box). There is definitely enough
>>> RAM, so there is no swapping. Previously the same jobs ran about 10X
>>> faster on our cluster, so I wonder if the problem is general I/O
>>> bottlenecks in the cluster.
>>> Is there a lot of particle-image reading in the final maximisation step
>>> (it takes up to a day now)?
>>> Thanks!
>>>
>
