I don't know if this one has been answered already. It has puzzled me
for some time and I couldn't find a reply to your message. Anyway, here
is the recipe that works for me with CCP4 6.0.2, CCP4i 18.104.22.168 and
OpenPBS/Torque 2.1.6 on Fedora Core 5.
If you look in the file .../ccp4i/src/local.tcl, "generic" batch
jobs are started with "$batch_queue $batch_options source $com_file";
OpenPBS does not need the inserted "source". A special case has been
defined for the SGE batch type which also works for PBS. So if you don't
want to modify the Tcl source, a workaround is to open
CCP4i>System Administration>Configure Interface and define the batch
queue as a Sun Grid Engine. Fill the batch command with at least "qsub
-V" so that OpenPBS keeps the environment variables (add -cwd if your
qsub version implements it). At the top of the interface, fill the "Command
to set up CCP4 (used by remote jobs)" field with "cd $PBS_O_WORKDIR";
otherwise your home directory will be cluttered with data files. Finally,
check that TEMPORARY and CCP4_SCR are the same.
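For reference, the recipe above can also be mirrored in a small stand-alone wrapper. This is only a sketch, not CCP4i's actual code: the file name job.com stands in for the com_file that CCP4i generates, and the qsub call is guarded so the script degrades gracefully on machines without PBS.

```shell
#!/bin/sh
# Sketch of a PBS wrapper matching the settings above: keep the
# environment (-V), change back to the submission directory, and run
# the com file without the inserted "source".

com_file="${1:-job.com}"
pbs_script="$com_file.pbs"

# Stand-in com file so the sketch is self-contained.
[ -f "$com_file" ] || printf 'echo CCP4 job body\n' > "$com_file"

cat > "$pbs_script" <<EOF
#PBS -V
#PBS -N ccp4_$USER
cd \$PBS_O_WORKDIR
sh $com_file
EOF

# Submit only where qsub exists; elsewhere the generated job script
# is simply left on disk for inspection.
if command -v qsub >/dev/null 2>&1; then
    qsub "$pbs_script"
fi
```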
Michael Weyand wrote:
> Dear colleagues,
> I have problems starting jobs from CCP4i using the (OpenPBS)
> batch queuing system on our Linux cluster.
> Here are some machine details
> System software
> Suse Linux 9.3, 22.214.171.124-20a-smp Kernel
> Batch queuing system: OpenPBS
> CCP4 version 6.0, binary download; everything works fine when jobs
> are run directly.
> First I tried to start the jobs by configuring the interface,
> addressing a PBS system with some qsub options on the command line, like
> 'qsub -q small -N ccp4-weyand'. But I always got the following pop-up
> message, independent of the options used, or with no qsub options at all:
> My interpretation is that the qsub command is executed, but maybe with
> the wrong options?
> Then I did some reading (always good!) and found some hints on the
> CCP4 homepage about scripting for a Condor system.
> I wrote a script for our PBS queues. Maybe not clever, but it worked.
> At least partly: I was able to start jobs via CCP4i, and I got the
> right output files in the right folders, but the relevant CCP4i
> project 'database.def' file was not updated. The job list was extended
> by the new job, but no entries for the output files were added. So,
> this option is also useless.
> Here is my script, which I started within a CCP4i queue command line,
> like '/home/weyand/myqsub'
> ----> SNIP
> rm -f ccp4ish.sh
> rm -f ccp4.pbs
> #### create ccp4ish.sh
> sh1="#!/bin/csh -f"
> echo $sh1 > ccp4ish.sh
> echo $sh2 >> ccp4ish.sh
> chmod 755 ccp4ish.sh
> #### create ccp4.pbs
> echo "#PBS -l nodes=1:ppn=1" > ccp4.pbs
> echo "#PBS -N ccp4_$USER" >> ccp4.pbs
> echo "#PBS -q small" >> ccp4.pbs
> echo "cd `pwd`" >> ccp4.pbs
> echo $exe >> ccp4.pbs
> echo "echo \"Job finished on: `date`\"" >> ccp4.pbs
> echo "exit 0" >> ccp4.pbs
> ### submit to PBS, after the job file is complete
> qsub ccp4.pbs
> -----> SNIP
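For what it's worth, here is one way the quoted wrapper could be restructured so the whole job file exists before it is submitted, with the -V flag and cd $PBS_O_WORKDIR from the recipe above folded in. This is a sketch: $exe is assumed, as in the original script, to be set by the caller to the actual CCP4i job command, and the queue name "small" is kept from the original.

```shell
#!/bin/sh
# Rework of the quoted myqsub wrapper: build the complete job file
# first, submit it last.  $exe is assumed to be set by the caller to
# the CCP4i job command, as in the original script.
rm -f ccp4.pbs

cat > ccp4.pbs <<EOF
#PBS -l nodes=1:ppn=1
#PBS -N ccp4_$USER
#PBS -q small
#PBS -V
cd \$PBS_O_WORKDIR
$exe
echo "Job finished on: \$(date)"
exit 0
EOF

# Submit to PBS only where qsub is installed.
if command -v qsub >/dev/null 2>&1; then
    qsub ccp4.pbs
fi
```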
> I see maybe two problems here:
> 1. Why is the internal PBS/qsub option not working on our system?
> 2. Why does my script (which leads to an executed (!) CCP4i job) not
> update the database file properly?
> If someone can give hints for solving at least one of these problems,
> I would forget about the second...
> Any comments are highly appreciated.