Hi,
The VDAM algorithm does not support MPI, so assign multiple GPUs to a single process.
You can leave the GPU field empty to use all GPUs on the system, or
specify the GPU IDs separated by commas (e.g. "0,1,2,3"). Note that
colons are used to separate different processes.
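As a minimal sketch, a multi-GPU VDAM run on a standalone workstation might look like the following (the input STAR file, output directory, class count, and thread count are illustrative placeholders, not recommendations):

```shell
# Single relion_refine process (no mpirun -- VDAM does not support MPI).
# All four GPUs are given comma-separated to that one process; a colon
# would instead split GPUs across MPI ranks, which does not apply here.
`which relion_refine` --o Class2D/job056/run --grad --iter 200 \
    --i Extract/job037/particles.star --ctf --K 50 \
    --j 20 --gpu "0,1,2,3"
```

Leaving `--gpu` empty (`--gpu ""`) lets RELION use all visible GPUs on the system.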
Best regards,
Takanori Nakane
On 2021/10/01 7:46, Eric Gibbs wrote:
> Thanks Takanori,
> How do I use multiple GPUs? Is there a specific flag? The GUI throws an error for multiple MPI processes, and even if I specify multiple GPUs under
> Compute, only one runs.
> Best,
> Eric
>
> On Thu, Sep 30, 2021, 6:43 PM Takanori Nakane <[log in to unmask]> wrote:
>
> Hi,
>
> Of course the performance depends on many things (box size, noise level, hardware etc),
> but the most obvious things are:
>
> - we typically use 4 GPUs, not 1 GPU, for a job
> - increasing the number of pooled particles sometimes helps
> - Although a smaller test set might have fitted within the OS's file system cache,
> and "didn't seem to make a difference", the full dataset might benefit from scratch.
>
> Best regards,
>
> Takanori Nakane
>
> On 2021/10/01 1:17, Eric Gibbs wrote:
> > Hello,
> > I'm not seeing the speed up I've expected with the new 2D VDAM algorithm. From the online presentation it was mentioned that 15 million
> particles ran in 3 hours or something like that. I have about 1.3 million particles selected from 5000 movies. I binned the particles so that I
> have a box size of 90. I started the VDAM algorithm about 15 hours ago and it is on iteration 194 of 200. Each iteration is taking about 8 minutes
> (66K particles). I assume the last iteration will take hours. I tried running with smaller subsets simultateously (~240K particles) but those are
> also taking several hours
> > I am using a 2080 gpu on a standalone workstation. Here is the command:
> >
> > `which relion_refine` --o Class2D/job055/run --grad --class_inactivity_threshold 0.1 --grad_write_iter 10 --iter 200 --i
> Extract/job037/particles.star --dont_combine_weights_via_disc --pool 3 --pad 2 --ctf --ctf_intact_first_peak --tau2_fudge 1 --particle_diameter
> 170 --K 50 --flatten_solvent --zero_mask --strict_highres_exp 8 --center_classes --oversampling 1 --psi_step 20 --offset_range 8 --offset_step
> 2 --norm --scale --j 20 --gpu "2" --pipeline_control Class2D/job055/
> >
> > Factors that may be slowing me down:
> > 1) My data is pretty noisy because my grids are coated with graphene oxide and there is a fair amount of aggregation and degradation. A
> majority of my particles are 'junk'.
> > 2) I did not use the 16 bit data option because I wanted to maintain compatibility with other programs.
> > 3) I did not load particles into scratch, though in smaller test runs that didn't seem to make a difference.
> >
> > Let me know if you have any suggestions or if I've missed something. I am very happy with the classes produced so far, they do seem better.
> Hopefully I am just making a silly mistake and I can have speed and quality. Thanks so much for your support, I'm sure you all have to take the
> week off just to respond to inquiries with this release.
> > Thanks again,
> > Eric Gibbs
> >
> > ########################################################################
> >
> > To unsubscribe from the CCPEM list, click the following link:
> > https://www.jiscmail.ac.uk/cgi-bin/WA-JISC.exe?SUBED1=CCPEM&A=1
> >
> > This message was issued to members of www.jiscmail.ac.uk/CCPEM, a mailing list hosted by www.jiscmail.ac.uk, terms & conditions are available at https://www.jiscmail.ac.uk/policyandsecurity/
> >
>