Hi,
> Would that allow us to keep the entire refinement on GPUs instead of
having to restart the latter iterations on CPU only?
Skip padding should allow you to run all iterations of Refine3D on GPU.
The maximum box size on a 1080 Ti without padding is about 1000 px.
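To see why skip padding makes the difference, here is a rough back-of-the-envelope sketch. It assumes (this is my simplification, not an exact accounting of RELION's allocations) that the dominant GPU allocation scales as one complex single-precision Fourier volume of side (padding factor x box size); real usage is higher because several such volumes and the particle data are also resident.

```python
# Back-of-the-envelope GPU memory estimate for a Refine3D reference volume.
# Assumption (simplified, not from RELION internals): memory per volume
# ~ (pad * box)^3 voxels * 8 bytes (complex64 = 2 x 4-byte floats).

def volume_gb(box_px, pad):
    bytes_per_voxel = 8  # complex single precision
    return (pad * box_px) ** 3 * bytes_per_voxel / 1024 ** 3

box = 600
print(f"pad=2 (default):      {volume_gb(box, 2):.1f} GB")
print(f"pad=1 (skip padding): {volume_gb(box, 1):.1f} GB")
```

With the default padding factor of 2, a 600-px box implies a ~13 GB volume, which already exceeds the 11 GB on a 1080 Ti; without padding it drops to under 2 GB, which is why the late iterations can stay on the GPU.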
> Increasing system RAM is relatively cheap
Yes, but using an SSD for scratch is even cheaper and almost as fast.
> Would going to a Tesla M10 (or similar) be expected to make a
noticeable difference?
Probably not.
Best regards,
Takanori Nakane
On 2019/05/02 22:34, Eric J. Montemayor wrote:
> Thanks Takanori. Our iterations are really long in Relion 3 for datasets like this (like 24 hours) even with skip padding. We'd like to speed them up more. In your opinion, where can we get the most return on our investment? Increasing system RAM is relatively cheap, so I was thinking about doing that first. But what would be the next thing on the upgrade list? Would going to a Tesla M10 (or similar) be expected to make a noticeable difference? Would that allow us to keep the entire refinement on GPUs instead of having to restart the latter iterations on CPU only?
>
> -Eric
>
>
> On 5/2/19, 4:21 PM, "Takanori Nakane" <[log in to unmask]> wrote:
>
> Hi,
>
> > 600 pixel box
>
> With 'Skip padding: Yes', you can run Refine3D on your 1080 Ti
> and save a lot of money.
>
> Best regards,
>
> Takanori Nakane
>
> On 2019/05/02 20:41, Eric J. Montemayor wrote:
> > Hi All,
> >
> > I am hoping to get some advice on how to best upgrade our rack mounted
> > GPU server. It’s currently used mostly for Relion jobs, but we’d like
> > something that’s also well suited for cryosparc, cisTEM, etc…
> >
> > What we have now:
> >
> > 2x Xeon E5-2640 (20 physical cores)
> >
> > 256 GB RAM
> >
> > 4x 1080 Ti GPUs
> >
> > When processing large datasets with large box sizes we are unable to
> > cache particle stacks into RAM. That’s easy enough to fix by getting
> > more RAM. But we are also running into problems where Relion 3
> > refinement jobs crash in the latter iterations, even with “skip
> > padding”, and I suspect this may be an issue with available GPU memory
> > (230k particles, 600 pixel box, 311 GB particle stack, 5 MPIs/8
> > threads). The jobs can be restarted and run to completion if we do not
> > use GPUs, but it takes much, much longer.
> >
> > Does anybody think upgrading our GPUs would be worthwhile here? If so,
> > which cards would be worth it? The various Tesla cards have more
> > memory, but at a much higher price point and I’d like to maximize our
> > bang for our buck (i.e. not spending 20k for only a meager increase in
> > performance).
> >
> > Thanks in advance,
> >
> > --
> >
> > Eric J. Montemayor
> >
> > Associate Scientist
> >
> > Member of USNC/Cr
> >
> > Department of Biochemistry
> >
> > University of Wisconsin Madison
> >
> > 433 Babcock Dr. Room 145
> >
> > Madison, WI 53706
> >
> >
> > ------------------------------------------------------------------------
> >
> > To unsubscribe from the CCPEM list, click the following link:
> > https://www.jiscmail.ac.uk/cgi-bin/webadmin?SUBED1=CCPEM&A=1
> >
>
>
>