Dear David,
For SGE and high-memory configuration you can use either:
- a special hostgroup containing these machines, assigned to a dedicated MPI queue
- dynamic rules (resource quota sets) assigned to these hosts
- your own load sensor (watching each node) plus a corresponding resource name, which can be requested at submission time
But none of the above fits well with RELION's mixed MPI and threading programming model.
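For completeness, the first and third options might be set up roughly like this. This is a sketch only: the hostgroup, queue, and complex names are all placeholders, and on a real installation the qconf calls open interactive editors where you fill in the indicated fields.

```shell
# Option 1: hostgroup + dedicated queue (all names are placeholders)
qconf -ahgrp @himem          # set: hostlist node01 node02 node03
qconf -aq himem_mpi.q        # set: hostlist @himem, pe_list mpi

# Option 2: a resource quota set limiting big jobs to those hosts
qconf -arqs himem_rqs

# Option 3: a custom complex that users can request at submission time
qconf -mc                    # add e.g.: himem himem BOOL == YES NO FALSE 0
qsub -q himem_mpi.q -l himem=true myjob.sh
```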
So for processing I recommend:
- always dedicate a whole node to a RELION job and do not mix it with jobs from other users
- use more nodes with exactly the same hardware configuration (e.g. via an MPI hostfile)
- then you can experiment with hwloc, e.g. socket binding for MPI (one MPI process per socket, with the remaining cores used as --j threads)
- your system administrator can help you build RELION submission/bash script templates optimized for your specific hardware and RELION step (the RELION documentation gives some recommendations on when to use more MPI processes and when to use more threads)
- enabling Intel Hyper-Threading brings maybe only a 10-20% boost compared to real cores
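As an illustration of the whole-node and socket-binding advice above, a submission script template could look like the sketch below. It assumes Open MPI and an SGE parallel environment named "mpi" on dual-socket 8-core nodes; all names, counts, paths, and the exclusive-access flag are placeholders for whatever your site actually provides.

```shell
#!/bin/bash
#$ -N relion_refine
#$ -pe mpi 64              # e.g. 4 whole dual-socket 8-core nodes
#$ -l exclusive=true       # assumed site policy: do not share nodes

# one MPI rank per socket (Open MPI binding syntax),
# each rank running one thread per core on its socket
mpirun -np 8 --map-by ppr:1:socket --bind-to socket \
    relion_refine_mpi \
    --i particles.star --o Refine3D/job001/run \
    --j 8
```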
Regarding hardware (often a budget question):
- memory: the more the better; with high-resolution images, 6-8 GB/core can become too little in some RELION steps (you can work around this by starting fewer threads on each node)
- networking: 10 GbE or InfiniBand; it depends on how many MPI jobs you run and how many jobs from other users share the switch
- parallel disk access: fine, but a good network and fast monolithic storage can also do the job (most of the money will go into archival storage for your large movie frames)
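To make the memory point concrete, here is a back-of-envelope sketch. All numbers are assumptions, not measurements; plug in your own node RAM and the per-rank footprint you observe in your refinements.

```shell
# Hedged sizing sketch: how many RELION MPI ranks fit on one node?
node_ram_gb=256      # assumed RAM per node
os_reserve_gb=16     # headroom for OS and file cache
gb_per_rank=12       # assumed per-rank peak in a large-box refinement
ranks_per_node=$(( (node_ram_gb - os_reserve_gb) / gb_per_rank ))
echo "start at most $ranks_per_node ranks per node"   # 20 for these numbers
```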
Regarding cloud computing:
- if you really work with large data, 100% cloud processing is currently not an option
-- check: how long does it take to transfer your data?
-- check: what is the maximum memory per cloud node? Even where it is available, it can still be very expensive.
- some RELION steps could be done in the cloud, but is it worth mixing and syncing the results?
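The transfer-time check is a quick calculation. A sketch with assumed numbers (10 TB of raw movies over a sustained 1 Gbit/s link, protocol overhead ignored):

```shell
data_tb=10      # assumed data set size in TB
link_gbit_s=1   # assumed sustained uplink in Gbit/s
gigabits=$(( data_tb * 8 * 1000 ))            # 10 TB = 80000 Gbit
hours=$(( gigabits / link_gbit_s / 3600 ))
echo "upload takes roughly $hours hours"      # ~22 hours for these numbers
```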
Cheers,
Wolfgang
On 12/09/2015 12:16 PM, David Bhella wrote:
> I am trying to put together a funding request for a cluster and I would be interested to know what people’s thoughts and experiences are regarding optimal cluster configuration for Relion. In particular I am wondering whether lots of 8-core nodes (with ~64 GB RAM) or fewer higher core-density nodes would be preferable for example with 4x E7-8420 cores and ~512 GB RAM? Is 8-12 GB/core an appropriate amount of memory (considering we sometimes work with quite large viruses)? It seems to me that most of the heavy lifting in terms of memory use is done by the MPI master - is this true, and if so is it possible to configure SGE to address MPI jobs to designated high-memory nodes?
>
> Also any comments on where bottlenecks are would be helpful - what is the best networking option (10Gb?), is parallel disk access beneficial?
>
> I would be most grateful for any guidance or recent experience. Finally, should I just forget local hardware and go for a cloud computing option? (What worries me about this is that we then pay for data processing from our grants for ever after rather than a one-off capital equipment award to cover several years of number crunching.)
>
> Many thanks,
> D.
>
> Dr David Bhella
> MRC-University of Glasgow Centre for Virus Research
> Sir Michael Stoker Building
> Garscube Campus
> 464 Bearsden Road
> Glasgow G61 1QH
> Scotland (UK)
>
> Telephone: 0141-330-3685
> Skype: d.bhella
>
> Virus structure group on Facebook: https://www.facebook.com/CVRstructure
> Molecular Machines - Images from Virus Research: http://www.molecularmachines.org.uk
>
> CVR website: http://www.cvr.ac.uk
> CVR on Facebook: https://www.facebook.com/centreforvirusresearch
>
--
Universitätsklinikum Hamburg-Eppendorf (UKE)
@ Centre for Structural Systems Biology (CSSB)
@ Institute of Molecular Biotechnology (IMBA)
Dr. Bohr-Gasse 3-7 (Room 6.14)
1030 Vienna, Austria
Tel.: +43 (1) 790 44-4649
Email: [log in to unmask]
http://www.cssb-hamburg.de/