Hi Giulia,
We've not yet finished all our tests with the GPUs, but let me try and
answer some of your questions.
- The maximization is not likely to become a severe bottleneck (disk
access will probably be much more of a bottleneck). Maximization will be
as fast as it was in relion-1.4, which is usually not more than 10-20
minutes per iteration (except the last one, which may cost more). Also,
the RAM requirements haven't changed for that. We have (now somewhat old)
cluster nodes with 64 GB of RAM. That has been enough to do 400x400 pixel
ribosome particles, but not enough for 600x600 pixel virus boxes. The
storage of the oversampled FT of the map to be projected and the map to be
reconstructed will take (approximately) 8*5*(2*boxsize)^3 bytes. That
would mean at least 55 GB for 600x600 boxes. Then you'll need more space
for probability vectors (which depends on resolution, accuracy of
sampling, etc.). That made 64 GB too small for the 600x600 virus data set.
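For anyone sizing a machine, the rule of thumb above is easy to turn into a quick calculation. A minimal sketch (just the 8*5*(2*boxsize)^3 formula from this email; treat the numbers as ballpark figures, since the probability vectors and everything else in the process come on top):

```python
# Ballpark RAM needed for the oversampled 3D FTs during maximization,
# per the rule of thumb in this email: 8 * 5 * (2 * boxsize)^3 bytes.
# This excludes probability vectors and other working memory.
def max_step_ram_bytes(boxsize):
    """Approximate bytes for the oversampled FTs at a given box size (pixels)."""
    return 8 * 5 * (2 * boxsize) ** 3

for box in (360, 400, 600):
    gb = max_step_ram_bytes(box) / 1e9
    print(f"{box}x{box} box: ~{gb:.0f} GB (plus probability vectors etc.)")
```

For a 600-pixel box this comes out at roughly 69 GB, which is why a 64 GB node falls short, as described above.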
- Another advantage of lots of RAM is that you can pre-read all particles
into RAM, and thereby avoid problems with slow disk access. Another
option would be to have fast, local (SSD?) disks on each node, onto which
all particles in a refinement can be copied automatically (now an option
in relion-2.0).
- In principle, you wouldn't need more than 3 CPU cores for 3D
refinements, but most machines come with at least 12-16 cores nowadays
anyway. Also, there are parts of the workflow (e.g. polishing) that
aren't GPU-accelerated, and you will very likely want to run other
programs (EMAN2, SIMPLE, etc.) as well, so you'll probably still want a
decent number of cores in your machine. Perhaps something like 16-32?
- We've only tested scaling up to 4 GPUs in one box. That scales very
well: a 4-GPU box will perform the E-steps almost 4x faster than a
single-GPU box. Because the half-sets of 3D auto-refine are executed in
parallel by different MPI processes, it would be advantageous to have at
least 2 GPUs, so you could run the MPI process for each half-set on its
own GPU (and the halves don't have to share GPU memory).
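To make the half-set point concrete, here is a hypothetical sketch of the rank-to-GPU mapping (the function name and round-robin policy are my own illustration, not RELION code): in 3D auto-refine, rank 0 acts as the master and the remaining ranks work on the two half-sets, so with one GPU per worker rank the halves never share GPU memory.

```python
# Hypothetical illustration of pinning auto-refine MPI ranks to GPUs.
# Rank 0 is the master (no GPU work); worker ranks 1..N-1 are spread
# round-robin over the available GPU devices.
def assign_gpus(n_mpi_ranks, n_gpus):
    """Return {worker_rank: gpu_id}; rank 0 (the master) gets no GPU."""
    assignment = {}
    for i, rank in enumerate(range(1, n_mpi_ranks)):
        assignment[rank] = i % n_gpus  # round-robin over devices
    return assignment

# 3 ranks (master + one worker per half-set) on a 2-GPU box:
print(assign_gpus(3, 2))  # {1: 0, 2: 1} -- each half-set on its own GPU
```

With only one GPU, both half-set workers would land on device 0 and share its memory, which is exactly what having at least 2 GPUs avoids.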
- Then: which GPU should you buy? That I don't really know yet.
Developments move really fast, and the promised specs of the Pascal cards
sound great (but they're not available yet). We've had excellent
performance out of the Titan-X cards (which now appear to be really hard
to order). The K80s also work great, but are much more expensive. They
might be easier to maintain and more robust to heating problems, though
we've had no heating issues with the Titan-X's so far either. All I can
say at this point is that you need CUDA compute capability 3.5+, and more
GPU memory will mean you can do larger boxes. However, I'm not really
sure yet where the limits lie. One test showed that limiting memory usage
on our Titan-X cards to only 6 GB (they come with 12 GB) still let us
refine a 360x360 pixel ribosome data set. For what it's worth: at the LMB
we're holding off on our purchases until the new cards have been tested.
I'll keep the list informed when more definite data have been collected.
Currently we're working very hard on making the code stable...
HTH,
Sjors
> Hi Sjors (and everyone in the list),
>
> I have some questions about computing resources - mainly referring to
> the gpu version of relion.
>
> You presented a nice overview of the new (not yet available) version of
> relion at the ccpem symposium last week. You mentioned that while the
> expectation step is accelerated using gpus the maximisation step runs on
> cpus because of high memory requirements. (Please correct me if I'm
> wrong).
>
> When switching to gpu one would rather avoid making the maximisation
> step a new time-limiting factor, so one question I have is:
> You say the maximisation step isn't parallelised very well: what does
> this mean in terms of ideal number of cpus? In other words: if one
> wishes to buy a workstation with a gpu - mainly to run relion - how many
> cpus would you recommend it should have?
>
> Secondly - do you (and others in the list) have a feeling for how easy
> it is for a project to take more than 256G of memory? Currently that is
> our max allowance, and it has been sufficient so far - with (in my case
> for example) a dataset of nearly 1 mln particles of ~200 cubic pixels.
> I'd be interested in hearing if and when other people required more than
> 256G of memory.
>
> Also, I wonder about how well the expectation step time scales up with
> the number of gpus. This would be an important factor to take into
> account when budgeting for a new workstation/server.
>
>
> Thank you so much in advance for any suggestions.
>
> Best,
> Giulia
>
> --
> Giulia Zanetti
> ISMB, Birkbeck College
> Malet St. London
> WC1E 7HX
> 02076316898
>
>
--
Sjors Scheres
MRC Laboratory of Molecular Biology
Francis Crick Avenue, Cambridge Biomedical Campus
Cambridge CB2 0QH, U.K.
tel: +44 (0)1223 267061
http://www2.mrc-lmb.cam.ac.uk/groups/scheres