
List: CCPEM@JISCMAIL.AC.UK
Subject: Re: batch jobs scripts
From: Sjors Scheres <[log in to unmask]>
Reply-To: Sjors Scheres <[log in to unmask]>
Date: Fri, 27 Jan 2017 14:00:17 +0000

One could add more EXTRA variables to the GUI by editing 
src/gui_jobwindow.cpp below line 212.
Alternatively, one could use a trick: use the standard Linux command 
'cat' instead of 'qsub' (or the like) as the "Queue submit command:". 
That way, when pressing "Run now" the script gets written to disc and 
the job gets incorporated into the pipeline, but the job does not 
actually get submitted. One can then edit the script and submit it 
manually. I think I first saw this done by Tom Houweling in the Nogales lab.
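For example, on a SLURM system the whole workflow would look roughly 
like this (the job directory and file name below are only illustrative; 
RELION writes the submission script into the output directory of the 
job in question):

   # In the GUI, set "Queue submit command:" to:  cat
   # Press "Run now": the script is written to disc but not submitted
   # ('cat' just prints it). Then edit it by hand and submit it yourself:
   vi Class3D/job010/run_submit.script
   sbatch Class3D/job010/run_submit.script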
HTH,
Sjors

On 01/26/2017 11:17 PM, Elad Binshtein wrote:
> Hi Daniel,
>
> I agree that more XXXextraNXXX variables would be nice.
> For now I have a default value for the GPUs that I edit before running the job
> (if I want more GPUs).
> I can run on more than 1 GPU node.
> You can also use XXXdedicatedXXX as an extra variable.
>
> Best,
>
> On Thu, Jan 26, 2017 at 4:43 PM, Daniel Larsson <[log in to unmask]>
> wrote:
>
>> Hi Elad,
>>
>> Thanks for your suggestion about the extra variables. I also figured I
>> should use the --cpus-per-task SLURM option to get the right number of
>> processes and threads. I now have a script that seems to work OK. However,
>> I cannot run multiple nodes with GPUs. Is it just me (or our hardware), or
>> is that feature not implemented yet?
>>
>> In my mind, there is still room for additional XXXextraNXXX variables,
>> e.g. for setting memory usage and max wall time, like you did. (My guess is
>> that the GUI can only fit two more lines, which is why we can only have 2
>> extra variables.) It would also be nice if one could toggle the numeric
>> value slider for the extra variables.
>>
>>
>> For posterity, this is what I ended up with:
>>
>> #!/bin/bash
>> #SBATCH --job-name=XXXnameXXX
>> #SBATCH -NXXXextra1XXX
>> #SBATCH --ntasks-per-node=XXXmpinodesXXX
>> #SBATCH --cpus-per-task=XXXthreadsXXX
>> #SBATCH -p XXXqueueXXX
>> #SBATCH --gres=gpu:XXXextra2XXX
>> #SBATCH --output=XXXnameXXXslurm-%j.out
>> mpirun XXXcommandXXX
>>
>> Using these environment variables:
>>
>> export RELION_QSUB_EXTRA1="No. of nodes"
>> export RELION_QSUB_EXTRA2="No. of GPUs/node"
>> export RELION_QSUB_EXTRA1_DEFAULT="1"
>> export RELION_QSUB_EXTRA2_DEFAULT="0"
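>> For instance, with (hypothetical) GUI values of 3 MPI processes, 4
>> threads, queue "gpu", 1 node and 2 GPUs per node, the template above
>> would expand to something like this (XXXnameXXX and XXXcommandXXX are
>> left as placeholders here; RELION fills them in with the job name and
>> the full command line):
>>
>> #!/bin/bash
>> #SBATCH --job-name=XXXnameXXX
>> #SBATCH -N1
>> #SBATCH --ntasks-per-node=3
>> #SBATCH --cpus-per-task=4
>> #SBATCH -p gpu
>> #SBATCH --gres=gpu:2
>> #SBATCH --output=XXXnameXXXslurm-%j.out
>> mpirun XXXcommandXXX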
>>
>>
>> Regards,
>> Daniel
>>
>>
>> On 25 Jan 2017, at 22:29, Elad Binshtein <[log in to unmask]> wrote:
>>
>> Hi Daniel,
>> my script looks like this:
>>
>> #SBATCH --partition=XXXqueueXXX
>> #SBATCH --account=accout_gpu
>> #SBATCH --gres=gpu:2
>> #SBATCH --ntasks=XXXmpinodesXXX
>> #SBATCH --cpus-per-task=XXXthreadsXXX
>> #SBATCH --time=XXXextra1XXX
>> #SBATCH --mem-per-cpu=XXXextra2XXX
>> #SBATCH -J XXXoutfileXXX
>> #SBATCH --error=XXXerrfileXXX
>> #SBATCH --output=XXXoutfileXXX
>> srun --mpi=pmi2 XXXcommandXXX
>>
>> you can add 2 extra variables and use them as you want.
>>
>> Best,
>>
>>
>>
>> On Wed, Jan 25, 2017 at 3:04 PM, Daniel Larsson <[log in to unmask]>
>> wrote:
>>
>>> Hi all,
>>>
>>> I have a few thoughts regarding batch jobs using multiple nodes. From my
>>> research, the mapping between the GUI parameters and the batch job
>>> variables seems to be:
>>>
>>> Number of MPI procs => XXXmpinodesXXX
>>> Number of MPI procs => XXXnodesXXX
>>> Number of threads => XXXthreadsXXX
>>> Number of MPI procs * Number of threads => XXXcoresXXX
>>> Minimum dedicated cores per node => XXXdedicatedXXX
>>> XXXthreadsXXX => sets the -j flag of the relion_refine command
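>>> As a hypothetical illustration: requesting 8 MPI procs and 4 threads in
>>> the GUI would then give
>>>
>>> XXXmpinodesXXX = 8
>>> XXXthreadsXXX = 4
>>> XXXcoresXXX = 8 * 4 = 32
>>>
>>> and -j 4 passed to relion_refine.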
>>>
>>> My current template script to run jobs on a single node on our SLURM
>>> cluster looks like this:
>>>
>>> #!/bin/bash
>>> #SBATCH --job-name=XXXnameXXX
>>> #SBATCH -N1
>>> #SBATCH --ntasks-per-node=XXXcoresXXX
>>> #SBATCH -p c
>>> #SBATCH --gres=gpu:4
>>> #SBATCH --output=XXXnameXXX/slurm-%j.out
>>> mpirun XXXcommandXXX
>>>
>>> For two nodes I use this:
>>>
>>> #!/bin/bash
>>> #SBATCH --job-name=XXXnameXXX
>>> #SBATCH -N2
>>> #SBATCH --ntasks-per-node=XXXcoresXXX
>>> #SBATCH -p c
>>> #SBATCH --gres=gpu:4
>>> #SBATCH --output=XXXnameXXX/slurm-%j.out
>>> mpirun XXXcommandXXX
>>>
>>> The two-node version works when I have one thread per MPI process. But
>>> for more threads per process, there is no way to separately define -j and
>>> XXXcoresXXX, since -j seems to be implicitly controlled by XXXthreadsXXX
>>> and XXXcoresXXX seems to be defined as XXXmpinodesXXX * XXXthreadsXXX.
>>> This causes XXXmpinodesXXX * XXXthreadsXXX MPI processes to be spawned
>>> on each node.
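>>> For example (hypothetical numbers): 4 MPI procs and 4 threads with the
>>> two-node template above gives XXXcoresXXX = 16, hence
>>> --ntasks-per-node=16, and mpirun then starts 2 * 16 = 32 MPI processes
>>> in total instead of the intended 4.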
>>>
>>> Please suggest improvements to my script if there is something I have
>>> overlooked. Maybe I could use XXXdedicatedXXX to set
>>> "#SBATCH --ntasks-per-node" instead and thereby be able to control it using
>>> the "Minimum dedicated cores per node" slider in the GUI? That seems a bit
>>> ugly to me, though.
>>>
>>> My suggestions are that there should be:
>>> - independent ways to set XXXcoresXXX (which controls -j) and
>>> XXXthreadsXXX from the GUI
>>> - independent ways to set XXXnodesXXX and XXXmpinodesXXX from the GUI (so
>>> that I don’t have to use separate scripts for different N)
>>> - XXXmpinodesXXX should be renamed to XXXmpiprocsXXX.
>>>
>>> Regards,
>>> Daniel
>>>
>>>
>>>
>>>
>>>
>>>
>>
>> --
>> ________________________________
>> Elad Binshtein, Ph.D.
>> Cryo EM specialist - staff scientist
>> Center for Structure Biology (CSB)
>> MCN Room 1207
>> Vanderbilt University
>> Nashville, TN
>> Office: +1-615-322-4671
>> Mobile: +1-615-481-4408
>> E-Mail: [log in to unmask]
>> ________________________________
>>
>>
>>
>

-- 
Sjors Scheres
MRC Laboratory of Molecular Biology
Francis Crick Avenue, Cambridge Biomedical Campus
Cambridge CB2 0QH, U.K.
tel: +44 (0)1223 267061
http://www2.mrc-lmb.cam.ac.uk/groups/scheres
