Hi Kyle,
Thanks for that info! Very useful. Just to confirm: rank 0 (the first
MPI process, a.k.a. the master) only dispenses jobs to the other MPI
processes. It doesn't do any calculations itself. It does need some memory
to store all the metadata, but not as much as the slaves (which need to
store the FTs of the maps, the probability arrays, etc.).
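The split described above can be illustrated with a toy master/worker sketch (plain Python with a queue; this is not RELION or MPI code, and the job list and squaring are stand-ins for real particle metadata and calculations):

```python
from queue import Queue

def master(jobs, queue):
    # Rank 0: holds only lightweight job metadata and dispenses work;
    # it performs no calculations itself.
    for job in jobs:
        queue.put(job)

def worker(queue, results):
    # Other ranks: hold the large arrays (FTs of the maps, probability
    # vectors) and do the actual per-job computation.
    while not queue.empty():
        job = queue.get()
        results.append(job * job)  # stand-in for the real calculation

queue, results = Queue(), []
master(range(4), queue)
worker(queue, results)
print(results)  # -> [0, 1, 4, 9]
```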
HTH,
Sjors
> Hi Giulia,
>
> Further to Sjors and Neil, and to answer your last question: in case it's
> of interest, here are my experiences of what relion uses on a 384 GB
> system when running a refinement with three MPI processes and two threads
> per process, on ~2000 ptcls (but pretty big ones!).
>
> box 750 px^2              Memory per MPI process (GB)
>   Expectation             not recorded
>   Maximisation            not recorded
>   Converged expectation   69.1
>   Converged maximisation  134.4
>
> box 600 px^2              Memory per MPI process (GB)
>   Expectation             2.7
>   Maximisation            38.4
>   Converged expectation   38.4
>   Converged maximisation  67.6
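A quick sanity check on these figures, using only the numbers quoted above: RELION's large arrays grow roughly with box volume, so the 750 px values should be about (750/600)^3 ≈ 1.95x the 600 px ones, which the converged maximisation column matches quite well:

```python
# Expected scaling: memory grows roughly with (box size)^3.
ratio_expected = (750 / 600) ** 3  # ~1.95

# Converged maximisation memory per MPI process (GB), from the tables above.
mem_600, mem_750 = 67.6, 134.4
ratio_observed = mem_750 / mem_600  # ~1.99

print(f"expected {ratio_expected:.2f}x, observed {ratio_observed:.2f}x")
```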
>
> I should note that two of the MPI processes use the resources as above,
> while the third tends to use much less. I presume that's because it's not
> doing calculations and is just controlling the refinement, but maybe
> someone can enlighten me on this.
>
> At the end of the day, on the stated system you can see I've not managed
> to run anything larger than 750 px^2. Going by these numbers, 600 px^2
> should fit on a 256 GB system, but we needed more RAM for the 750 px^2.
>
> Hope this helps!
> Kyle
>
> On 19 May 2016, at 13:47, Neil Ranson
> <[log in to unmask]> wrote:
>
> Dear All,
>
> Just to add a word to what Sjors said at CCPEM, namely that your sysadmin
> will be keener on Teslas than on consumer graphics cards. I can attest to
> this, as we've been talking in earnest with our sysadmin and a big
> vendor, planning for Relion2.0 etc.
>
> If you want to put your GPUs in a proper server, e.g. a 2U dual-socket
> machine: such machines have fans that create a linear air flow from one
> side of the rack to the other, with the BIOS ramping the fans up and
> down. If you then put the rotary fans of a gaming card in there, all
> sorts of mayhem can ensue, with cards finding resonant frequencies that
> make them wobble "a bit", and you have to hack the BIOS to let the gaming
> card look after its own temperature etc. We have not actually done it, so
> I can't attest to how bad it might be, but I thought I would pass on that
> bit of wisdom!
>
> Teslas are certainly more expensive, but they come with the same 5-year
> warranty we get with servers if bought with the server. Something to bear
> in mind when weighing up the costs, especially if you want dense
> computing.
>
> I’m sure the next generation Tesla will be even better though!
>
> Neil R.
>
>
> On 19/05/2016, 20:23, "Sjors Scheres"
> <[log in to unmask]> wrote:
>
> Hi Giulia,
>
> We've not yet finished all our tests with the GPUs, but let me try and
> answer some of your questions.
>
> - The maximization is not likely to become a severe bottleneck (access to
> disk will probably be much more of a bottleneck). Maximization will be as
> fast as it was in relion-1.4, which is usually not more than 10-20
> minutes per iteration (except the last one, which may cost more). Also,
> the RAM requirements haven't changed for that. We have (now somewhat old)
> cluster nodes with 64 GB of RAM. That has been enough to do 400x400-pixel
> ribosome particles, but not enough for 600x600-pixel virus boxes. The
> storage of the oversampled FT of the map to be projected and the map to
> be reconstructed will take approximately 8*5*(2*boxsize)^3 bytes. That
> would mean at least 55 GB for 600x600 boxes. Then you'll need more space
> for probability vectors (which depend on resolution, accuracy of
> sampling, etc.). That made 64 GB too small for the 600x600 virus data
> set.
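As a quick calculator for the rule of thumb above (a sketch only: the exact constant depends on padding and precision, so treat the result as an order-of-magnitude estimate; note that the formula as written gives ~69 GB for a 600-pixel box, comfortably above the "at least 55 GB" quoted):

```python
def ft_memory_gb(boxsize: int) -> float:
    # Rule of thumb from the message above, for the oversampled FTs of the
    # projected and reconstructed maps: 8 * 5 * (2*boxsize)^3 bytes,
    # converted to decimal GB.
    return 8 * 5 * (2 * boxsize) ** 3 / 1e9

for box in (400, 600):
    print(f"{box} px box: ~{ft_memory_gb(box):.0f} GB for the FTs alone")
```

For a 400-pixel box this gives about 20 GB, consistent with such refinements fitting on the 64 GB nodes mentioned above; probability vectors and everything else come on top of this.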
>
> - Another advantage of lots of RAM is that you can pre-read all particles
> into RAM, and thereby prevent problems with slow access to the disk.
> Another option would be to have fast, local (SSD?) disks on each node, to
> which all particles in a refinement can be copied automatically (now an
> option in relion-2.0).
>
> - In principle, you wouldn't need more than 3 CPU cores for 3D
> refinements, but most machines come with at least 12-16 cores nowadays
> anyway. Also, there are parts of the workflow (e.g. polishing) which
> aren't GPU-accelerated, and you may very well want to run other programs
> (EMAN2, SIMPLE, etc.) as well, so you'll probably still want a decent
> number of cores in your machine. Perhaps something like 16-32?
>
> - We've only tested scaling up to 4 GPUs in one box. That scales very
> well: a 4-GPU box will perform the E-step almost 4x faster than a
> single-GPU box. Because the half-sets of 3D auto-refine are executed in
> parallel by different MPI processes, it would be advantageous to have at
> least 2 GPUs, so you can run one MPI process (one half-set) on each GPU
> and thereby avoid sharing GPU memory.
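For concreteness, this suggests an invocation along the following lines. The flag names below are assumptions about the not-yet-released relion-2.0 command line, and the file names are placeholders, so treat this purely as a sketch:

```shell
# Hypothetical relion-2.0-style auto-refine run (flag and file names are
# assumptions, not a confirmed interface).
# 3 MPI processes = 1 master + 2 half-set workers; "--gpu 0:1" is meant to
# pin one worker to GPU 0 and the other to GPU 1, so the two half-sets do
# not share GPU memory. "--preread_images" loads all particles into RAM.
mpirun -n 3 relion_refine_mpi \
    --i particles.star --ref reference.mrc --o Refine3D/run1 \
    --auto_refine --split_random_halves \
    --j 4 --gpu 0:1 --preread_images
```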
>
> - Then: what GPU should you buy? That I don't really know yet.
> Developments go really fast, and the promised specs of the Pascal cards
> sound great (but they're not available yet). We've had excellent
> performance out of the Titan X cards (which now appear to be really hard
> to order). The K80s also work great, but are much more expensive. They
> might be easier to maintain and more robust to heating problems, though
> we've had no heating issues with the Titan Xs so far either. All I can
> say at this point is that you need CUDA compute capability 3.5+, and that
> more GPU memory will let you do larger boxes. However, I'm not really
> sure yet where the limits lie: one test showed that even with memory
> usage limited to 6 GB on our Titan X cards (they come with 12 GB), we
> could still refine a 360x360-pixel ribosome data set. For what it's
> worth: at the LMB we're holding off on our purchase until the new cards
> have been tested.
>
> I'll keep the list informed when more definite data have been collected.
> Currently we're working very hard on making the code stable...
>
> HTH,
> Sjors
>
>
> Hi Sjors (and everyone in the list),
>
> I have some questions about computing resources, mainly referring to
> the GPU version of relion.
>
> You presented a nice overview of the new (not yet available) version of
> relion at the CCPEM symposium last week. You mentioned that while the
> expectation step is accelerated using GPUs, the maximisation step runs on
> CPUs because of high memory requirements (please correct me if I'm
> wrong).
>
> When switching to GPUs one would rather avoid making the maximisation
> step the new time-limiting factor, so one question I have is: you say
> the maximisation step isn't parallelised very well; what does this mean
> in terms of the ideal number of CPUs? In other words: if one wishes to
> buy a workstation with a GPU, mainly to run relion, how many CPUs would
> you recommend it should have?
>
> Secondly, do you (and others on the list) have a feeling for how easy it
> is for a project to take more than 256 GB of memory? Currently that is
> our maximum allowance, and it has been sufficient so far, with (in my
> case, for example) a dataset of nearly 1 million particles in ~200-pixel
> cubic boxes. I'd be interested in hearing if and when other people
> required more than 256 GB of memory.
>
> Also, I wonder about how well the expectation step time scales up with
> the number of gpus. This would be an important factor to take into
> account when budgeting for a new workstation/server.
>
>
> Thank you so much in advance for any suggestions.
>
> Best,
> Giulia
>
> --
> Giulia Zanetti
> ISMB, Birkbeck College
> Malet St. London
> WC1E 7HX
> 02076316898
>
>
>
>
> --
> Sjors Scheres
> MRC Laboratory of Molecular Biology
> Francis Crick Avenue, Cambridge Biomedical Campus
> Cambridge CB2 0QH, U.K.
> tel: +44 (0)1223 267061
> http://www2.mrc-lmb.cam.ac.uk/groups/scheres
>
>
--
Sjors Scheres
MRC Laboratory of Molecular Biology
Francis Crick Avenue, Cambridge Biomedical Campus
Cambridge CB2 0QH, U.K.
tel: +44 (0)1223 267061
http://www2.mrc-lmb.cam.ac.uk/groups/scheres