I'd appreciate some guidance regarding helical refinement. Specifically:
1) I'm trying to refine some amyloid data using parameters similar to those used by Fitzpatrick et al. for tau (box size 280, ~10% inter-box spacing), but I keep running into segmentation violations or GPU memory-allocation errors, particularly during 3D classification. My presumption is that the segmentation violations are also an out-of-memory issue, just on the host rather than on the GPU. This is on a system with 128 GB of RAM and four GTX 1080s. So far, the only thing that seems to help is reducing the number of classes to 2, but that largely defeats the purpose of classification; reducing the number of MPI processes and/or threads seems ineffective. Can someone please describe the main determinants of memory usage during helical refinement? I assume a smaller box size would help, but that would also reduce the overlap and give more particles.
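For what it's worth, here is the back-of-envelope estimate I've been working from, so someone can correct it if the scaling is wrong. It's a rough sketch assuming the reference volumes dominate and that RELION holds K Fourier-padded, complex-double 3D references per MPI rank (padding factor 2 by default); the function name and the numbers are my own, not anything from RELION itself:

    # Rough per-rank memory estimate for the 3D references alone, assuming
    # K classes stored as Fourier-padded complex doubles (pad factor 2).
    # Particle stacks, weighted-sum arrays, and per-orientation buffers
    # come on top of this, so real usage is a multiple of these numbers.
    def reference_memory_gb(box_size, n_classes, pad_factor=2, bytes_per_voxel=16):
        """Memory (GB) for n_classes Fourier-padded complex-double references."""
        padded = pad_factor * box_size
        return n_classes * padded**3 * bytes_per_voxel / 1024**3

    for k in (1, 2, 4, 8):
        print(f"box 280, K={k}: ~{reference_memory_gb(280, k):.1f} GB per MPI rank")

By that arithmetic a single padded reference at box 280 is already ~2.6 GB, so four classes cost ~10 GB per rank before any of the other arrays, which would explain why several MPI ranks exhaust 128 GB and why dropping to K = 2 is the only thing that helps. But I'd welcome correction on the actual scaling.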
2) I'm having a hard time understanding the following portion of the Fitzpatrick et al Methods:
"We then selected those segments from the PHF and SF datasets that were assigned to 2D class averages with β-strand separation for subsequent 3D clas-
sification runs. For these calculations, we used a single class (K = 1); a T value of 20; and the previously obtained sub-nanometre PHF and SF reconstructions,
lowpass filtered to 15 Å, as initial models"
So, to be clear: segments from multiple 2D classes are pooled and fed into a 3D classification with only a single output class? What purpose does that serve? Was this done solely to generate a 3D alignment for exploring the twist/rise parameters, as described next?
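To make my reading concrete, this is roughly how I would translate that passage into a relion_refine run (sketched here in Python; the STAR/MRC file names and the helical twist/rise/diameter values are placeholders of mine, and I'm assuming the standard --K, --tau2_fudge, and --ini_high flags):

    import subprocess

    # Hypothetical reconstruction of the K = 1 run from the Methods; file
    # names and helical parameters are placeholders, not values from the paper.
    cmd = [
        "mpirun", "-n", "5", "relion_refine_mpi",
        "--i", "selected_segments.star",     # segments from the beta-strand 2D classes
        "--o", "Class3D/job_k1/run",
        "--ref", "phf_subnm.mrc",            # previously obtained sub-nm reconstruction
        "--ini_high", "15",                  # lowpass-filter the initial model to 15 A
        "--K", "1",                          # a single class
        "--tau2_fudge", "20",                # T = 20
        "--ctf",
        "--helix",
        "--helical_outer_diameter", "200",   # placeholder tube diameter (A)
        "--helical_twist_initial", "-1.0",   # placeholder twist (deg)
        "--helical_rise_initial", "4.75",    # placeholder rise (A)
    ]
    subprocess.run(cmd, check=True)

If that reading is right, then with K = 1 the run performs no classification at all and effectively just aligns the pooled segments against the reference, which is what made me suspect it was only meant to seed the twist/rise search.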
Thanks!
Regards,
-jh-
########################################################################
To unsubscribe from the CCPEM list, click the following link:
https://www.jiscmail.ac.uk/cgi-bin/webadmin?SUBED1=CCPEM&A=1