Hi John,

In terms of memory issues, you should be fine using more than 2 classes (I
usually use 3). Below is a sample command line from a K=3 job for amyloid
fibrils (don't forget to use --bimodal_psi; last time I checked it wasn't
exposed in the Relion GUI on the Class3D Helix tab...):

 /home/davboyer/cryoem/openmpi-3.0.0/build/bin/mpiexec --bind-to none \
     `which relion_refine_mpi` \
     --o Class3D/rod_320_K3_R2/run \
     --i ./Select/rod_refine_class2_particles/particles.star \
     --ref Class3D/refine_rod_K1/run_179p355_K1_ct113_it125_class001.mrc \
     --ini_high 40 --dont_combine_weights_via_disc --scratch_dir /scratch \
     --pool 30 --ctf --ctf_corrected_ref --iter 25 --tau2_fudge 4 \
     --particle_diameter 336 --K 3 --flatten_solvent --oversampling 1 \
     --healpix_order 3 --offset_range 5 --offset_step 2 --sym C1 --norm --scale \
     --helix --helical_outer_diameter 200 --helical_nr_asu 14 \
     --helical_twist_initial 179.352996 --helical_rise_initial 2.407172 \
     --helical_z_percentage 0.3 --sigma_psi 5 --j 5 --gpu "" \
     --bimodal_psi --limit_tilt 30

In this case I was using SLURM to run on our cluster, but to adapt it for a
single machine (perhaps what you have, according to your description?): I was
using 3 MPI tasks per node (16 cores and two 1080s on each node), so there is
one master and two slaves on the node. I also gave each MPI slave 5 CPUs
(hence --j 5), so each GPU card talks to 5 CPUs (you could use more at the
early stages of classification, when the sampling is less memory intensive).
If using a single machine, I would suggest something like mpiexec -n 3
--bind-to none .... --j 5. If you move towards higher healpix orders, you may
need to turn the --j number down to 4, 3, 2, or 1 so the GPUs don't run out
of memory.
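
For example, a minimal single-machine sketch (the GPU indices "0,1" are an
assumption for your box, and everything else is just the cluster command
above with the --scratch_dir dropped; add it back if you have a fast local
scratch disk):

 # two GPUs assumed, addressed as 0 and 1 -- change to match what nvidia-smi shows
 mpiexec -n 3 --bind-to none `which relion_refine_mpi` \
     --o Class3D/rod_320_K3_R2/run \
     --i ./Select/rod_refine_class2_particles/particles.star \
     --ref Class3D/refine_rod_K1/run_179p355_K1_ct113_it125_class001.mrc \
     --ini_high 40 --dont_combine_weights_via_disc --pool 30 --ctf \
     --ctf_corrected_ref --iter 25 --tau2_fudge 4 --particle_diameter 336 \
     --K 3 --flatten_solvent --oversampling 1 --healpix_order 3 \
     --offset_range 5 --offset_step 2 --sym C1 --norm --scale --helix \
     --helical_outer_diameter 200 --helical_nr_asu 14 \
     --helical_twist_initial 179.352996 --helical_rise_initial 2.407172 \
     --helical_z_percentage 0.3 --sigma_psi 5 --bimodal_psi --limit_tilt 30 \
     --j 5 --gpu "0,1"

Following the same one-slave-per-card pattern, with your 4 GPUs you could try
-n 5 (one master plus four slaves), again dialling --j down if memory gets
tight.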

Second question: yes, *particles* from multiple 2D classes are put into 3D
classification to supply all the views of the helix. So you just run a Select
job on your 2D classes to gather all the particles that contribute to the
same species and fall into the higher-resolution classes, and use those as
input for your 3D job. Unless you are working with big boxes in which all the
views of the helix are present, this is necessary for IHRSR.
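
(If you want a quick count of how many particles your Select job gathered,
the one-liner below works; it assumes the usual RELION convention that each
particle line in the STAR file carries an rlnImageName containing an '@', and
the path is just the one from my command above.)

 # counts data lines with '@', i.e. rlnImageName entries like 000001@Extract/.../stack.mrcs
 grep -c "@" Select/rod_refine_class2_particles/particles.star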

Good luck!

David


On Thu, Feb 7, 2019 at 4:19 PM John Heumann <[log in to unmask]>
wrote:

> I'd appreciate some guidance regarding helical refinement. Specifically:
>
> 1) I'm trying to refine some amyloid data using parameters like those used
> by Fitzpatrick et al for Tau (box size 280, ~10% spacing), but seem to be
> continually running into segmentation violations or GPU memory allocation
> errors particularly during 3D classification. My presumption is that the
> segmentation violations also result from an out-of-memory issue, but on the
> main computer instead of the gpu. This is on a system with 128 GB of ram
> and 4 GTX 1080's.  So far, the only thing that seems to help is reducing
> the number of classes to 2, but that largely defeats the purpose of
> classification. Reducing the number of MPI processes and/or threads seems
> ineffective. Can someone please describe the main determinants of memory
> usage during helical refinement? I assume small box sizes might help, but
> that would also lead to reduced overlap and more particles.
>
> 2) I'm having a hard time understanding the following portion of the
> Fitzpatrick et al Methods:
>
> "We then selected those segments from the PHF and SF datasets that were
> assigned to 2D class averages with β​-strand separation for subsequent 3D
> clas-
> sification runs. For these calculations, we used a single class (K =​  1);
> a T value of 20; and the previously obtained sub-nanometre PHF and SF
> reconstructions,
> lowpass filtered to 15 Å, as initial models"
>
> So wait, multiple 2D classes are input to 3D classification with only a
> single output class? What purpose does that serve? Was this done solely to
> generate a 3D alignment for exploring twist / rise parameters as is
> described next?
>
> Thanks!
>
> Regards,
> -jh-
>
