CCPEM Archives
CCPEM@JISCMAIL.AC.UK

Subject: Re: relion and gpu questions
From: Sjors Scheres <[log in to unmask]>
Reply-To: Sjors Scheres <[log in to unmask]>
Date: Sat, 21 May 2016 11:07:06 +0100
Content-Type: text/plain
Parts/Attachments: text/plain (211 lines)

Hi Kyle,
Thanks for that info! Very useful. Just to confirm: rank 0 (the first
MPI process, a.k.a. the master) only dispenses jobs to the other MPI
processes; it doesn't do any calculations itself. It does need some memory
to store all the metadata, but not as much as the slaves (which need to
store the FTs of the maps, the probability arrays, etc.).
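
In case anyone is curious what that split looks like in practice, here is a
minimal, generic master/worker sketch in mpi4py. It is not relion's code,
just an illustration of the same pattern (rank 0 hands out work and gathers
results; the other ranks hold the big arrays and do the calculations):

    # Run with e.g.: mpirun -n 3 python master_worker.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    n_workers = comm.Get_size() - 1

    jobs = list(range(8))  # stand-in for batches of particles

    if rank == 0:
        # Master: dispense jobs round-robin, collect results, send stop signals.
        for i, job in enumerate(jobs):
            comm.send(job, dest=1 + i % n_workers, tag=1)
        results = [comm.recv(source=MPI.ANY_SOURCE) for _ in jobs]
        for w in range(1, n_workers + 1):
            comm.send(None, dest=w, tag=0)
        print("master collected", len(results), "results")
    else:
        # Worker: in a real refinement this is where the memory-hungry parts
        # (map FTs, probability arrays) would live.
        while True:
            status = MPI.Status()
            job = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
            if status.Get_tag() == 0:
                break
            comm.send(job * job, dest=0)  # dummy "calculation"

That is also why a 3D auto-refinement is typically run with one MPI process
more than the number of working slaves (e.g. 3 in total for the two
half-sets), as in Kyle's numbers below.
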
HTH,
Sjors


> Hi Giulia,
>
> More follow-up to Sjors and Neil, and to answer your last question: if of
> interest, here are my experiences of what relion uses on a 384 GB memory
> system when running a refinement with three MPI processes and two threads
> per process, on ~2000 particles, but pretty big ones!
>
> box 750 px^2                   Memory per MPI process (GB)
> Expectation                    Not recorded
> Maximisation                   Not recorded
> Converged expectation          69.1
> Converged maximisation         134.4
>
> box 600 px^2                   Memory per MPI process (GB)
> Expectation                    2.7
> Maximisation                   38.4
> Converged expectation          38.4
> Converged maximisation         67.6
>
> I should note that two of the MPI processes use the resources as above
> and the third tends to use much less; I presume that's because it's not
> doing calculations and is just controlling the refinement, but maybe
> someone can enlighten me on this.
>
> At the end of the day, on the stated system you can see I've not managed
> to run anything larger than 750 px^2. Going by this, 600 px^2 should fit
> on a 256 GB system, but we needed more RAM for the 750 px^2 boxes.
>
> Hope this helps!
> Kyle
>
> On 19 May 2016, at 13:47, Neil Ranson <[log in to unmask]> wrote:
>
> Dear All,
>
> Just to add a word to what Sjors said at CCPEM, namely that your sysadmin
> will be keener on Teslas than on consumer graphics cards. I can attest to
> this, as we've been talking in earnest with our sysadmin and a big vendor,
> planning for Relion 2.0 etc.
>
> If you want to put your GPUs in a proper server, e.g. a 2U dual-socket
> machine, they have fans that create a linear air flow from one side of the
> rack to the other, and the BIOS ramps the fans up and down. If you then
> put the rotary fans of a gaming card in there, all sorts of mayhem can
> ensue, with cards finding resonant frequencies that make them wobble “a
> bit”, and you have to hack the BIOS to let the gaming card look after
> its own temperature etc. We have not actually done it, so I can't attest to
> how bad it might be, but I thought I would pass on that bit of wisdom!
>
> Teslas are for sure more expensive, but they come with the same 5-year
> warranty we get with servers if bought with the server. Something to bear
> in mind when weighing up the costs, especially if you want dense computing.
>
> I’m sure the next generation Tesla will be even better though!
>
> Neil R.
>
>
>
>
>
>
> On 19/05/2016, 20:23, "Sjors Scheres" <[log in to unmask]> wrote:
>
> Hi Giulia,
>
> We've not yet finished all our tests with the GPUs, but let me try and
> answer some of your questions.
>
> - The maximization is not likely to become a severe bottleneck (access to
> disk will probably be much more of a bottleneck). Maximization will be as
> fast as it was in relion-1.4, which is usually not more than 10-20 minutes
> for each iteration (except the last one, which may cost more). Also, the
> RAM requirements haven't changed for that. We have (now a bit old) cluster
> nodes with 64 GB of RAM. That has been enough to do 400x400 pixel ribosome
> particles, but not enough for 600x600 pixel virus boxes. The storage of
> the oversampled FT of the map to be projected and the map to be
> reconstructed will take (approximately) 8*5*(2*boxsize)^3 bytes. That
> would mean at least 55 GB for 600x600 boxes. Then you'll need more space
> for probability vectors (which depends on resolution, accuracy of
> sampling, etc.). That made 64 GB too small for the 600x600 virus data set.
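>
> As a back-of-the-envelope check, here is that estimate written out (just a
> rough sketch of the formula above, not code from relion):
>
>     # Approximate RAM per slave for the padded/oversampled Fourier
>     # transforms, using the 8*5*(2*boxsize)^3-byte rule of thumb above.
>     def ft_ram_gb(boxsize):
>         return 8 * 5 * (2 * boxsize) ** 3 / 1e9
>
>     for box in (400, 600, 750):
>         print("%d px box: ~%.0f GB" % (box, ft_ram_gb(box)))
>     # ~20 GB for 400 px, ~69 GB for 600 px and ~135 GB for 750 px,
>     # before adding the probability arrays etc.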
>
> - Another advantage of lots of RAM is that you can preread all particles
> into RAM, and thereby prevent problems with slow access to the disk.
> Another option would be to have fast, local (SSD?) disks on each node, on
> which one can automatically copy all particles in a refinement (now an
> option in relion-2.0).
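>
> For scale, a hedged back-of-the-envelope for the pre-read option (not
> relion code, and assuming the particle images are held in memory as
> 4-byte floats per pixel):
>
>     # Rough size of a particle stack read entirely into RAM.
>     def stack_ram_gb(n_particles, boxsize):
>         return n_particles * boxsize ** 2 * 4 / 1e9
>
>     print(stack_ram_gb(1000000, 200))  # ~160 GB for a million 200-px particles
>
> so whether pre-reading everything is feasible depends a lot on the data set.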
>
> - In principle, you wouldn't need more than 3 CPU cores for 3D
> refinements, but most machines will come with at least 12-16 cores
> nowadays anyway. Also, there are parts of the workflow (e.g. polishing)
> which aren't GPU-accelerated and you may very likely want to run other
> programs (EMAN2, SIMPLE, etc) as well, so you'll probably still want a
> decent number of cores in your machine. Perhaps something like 16-32?
>
> - We've only tested scaling of multiple GPUs up to 4 in one box. That
> scales very well: a 4-GPU box will perform the E-steps almost 4x faster
> than a single-GPU box. Because the half-sets of 3D auto-refine are
> executed in parallel by different MPI processes, it would be advantageous
> to have at least 2 GPUs, so you can run one MPI process/half-set on each
> GPU (and thereby don't have to share the GPU memory).
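>
> (Concretely, and only as a sketch whose exact flags should be checked
> against the relion-2.0 release: something like "mpirun -n 3
> relion_refine_mpi ... --j 4 --gpu" would give one master rank plus one
> slave per half-set, with relion spreading the two slaves over the visible
> cards.)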
>
> - Then: what GPU should you buy? That I don't really know yet.
> Developments go really fast, and the promised specs of the Pascal cards
> sound great (but they're not available yet). We've had excellent
> performance out of the Titan-X cards (which now appear to be really hard
> to order). The K80s also work great, but are much more expensive. They
> might be easier to maintain and more robust to heating problems, though
> we've had no heating issues with the Titan-Xs so far either. All I can
> say at this point is that you need CUDA compute capability 3.5+, and more
> GPU memory will mean you can do larger boxes. However, I'm not really
> sure yet where the limits lie. One test showed that limiting memory usage
> on our Titan-X cards to only 6 GB (they come with 12 GB) still let us
> refine a 360x360 pixel ribosome data set. For what it's worth: at LMB
> we're holding our purchase until the new cards have been tested.
>
> I'll keep the list informed when more definite data have been collected.
> Currently we're working very hard on making the code stable...
>
> HTH,
> Sjors
>
>
>
>
>
> Hi Sjors (and everyone in the list),
>
> I have some questions about computing resources, mainly referring to
> the GPU version of relion.
>
> You presented a nice overview of the new (not yet available) version of
> relion at the CCPEM symposium last week. You mentioned that while the
> expectation step is accelerated using GPUs, the maximisation step runs on
> CPUs because of high memory requirements. (Please correct me if I'm
> wrong.)
>
> When switching to GPUs one would rather avoid making the maximisation
> step a new time-limiting factor, so one question I have is: you say the
> maximisation step isn't parallelised very well; what does this mean in
> terms of the ideal number of CPUs? In other words: if one wishes to buy a
> workstation with a GPU, mainly to run relion, how many CPUs would you
> recommend it should have?
>
> Secondly, do you (and others on the list) have a feeling for how easy
> it is for a project to take more than 256 GB of memory? Currently that is
> our maximum allowance, and it has been sufficient so far, with (in my case
> for example) a dataset of nearly 1 million particles of ~200 cubic pixels.
> I'd be interested in hearing if and when other people required more than
> 256 GB of memory.
>
> Also, I wonder how well the expectation step time scales with the
> number of GPUs. This would be an important factor to take into
> account when budgeting for a new workstation/server.
>
>
> Thank you so much in advance for any suggestions.
>
> Best,
> Giulia
>
> --
> Giulia Zanetti
> ISMB, Birkbeck College
> Malet St. London
> WC1E 7HX
> 02076316898
>
>
>
>
> --
> Sjors Scheres
> MRC Laboratory of Molecular Biology
> Francis Crick Avenue, Cambridge Biomedical Campus
> Cambridge CB2 0QH, U.K.
> tel: +44 (0)1223 267061
> http://www2.mrc-lmb.cam.ac.uk/groups/scheres
>
>


-- 
Sjors Scheres
MRC Laboratory of Molecular Biology
Francis Crick Avenue, Cambridge Biomedical Campus
Cambridge CB2 0QH, U.K.
tel: +44 (0)1223 267061
http://www2.mrc-lmb.cam.ac.uk/groups/scheres
