HPCC usage rules
The CSDMS High Performance Computing Cluster uses TORQUE as a job scheduler. With TORQUE you can allocate resources, schedule and manage job execution, and monitor the status of your jobs.
TORQUE uses instructions given on the command line and embedded within comments of the shell script that runs your program. This page describes basic TORQUE usage. Please visit the TORQUE website for a more complete guide.
Depending on the type of job that you wish to run, you may want to send your job to a particular queue. Note that some of the queues have time limits and will kill your job if this limit is exceeded. As such, it is probably a good idea to have a look at the set of queues that are set up on the CSDMS HPCC.
To minimize communications traffic, it is best for your job to work with files on the local disk of the compute node. These disks are mounted on each of the compute nodes as /state/partition1. Hence, your submission script will need to transfer files from your home directory on the head node to a temporary directory on the compute nodes. Before finishing, your script should transfer any necessary files back to your home directory and remove all files from the temporary directory of the compute node.
There are essentially two ways to achieve this: (1) use the PBS stagein and stageout utilities, or (2) manually copy the files with commands in your submission script. The stagein and stageout features of TORQUE are somewhat awkward, especially since wildcards and macros cannot be used in the file lists. This method also has some timing issues. Hence, we ask you to use the second method, and to use secure copy (scp) to do the file transfers to avoid NFS bottlenecks. An example of how the second method might be done is given below in the serial example.
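As a brief illustration ahead of that fuller example, a minimal sketch of the second method might look like the following. Here my_prog, the file names, and the Input and Output directories are placeholders, and the script assumes passwordless scp between the compute nodes and the head node (beach.colorado.edu).
#!/bin/sh
#PBS -l nodes=1:ppn=1
# Work on the compute node's local disk; $TMPDIR is created by TORQUE on the
# node and removed automatically when the job finishes.
cd $TMPDIR
# Stage the input in from the head node (placeholder file and directory names).
scp beach.colorado.edu:Input/my_input_file.txt .
# Run the program on the local copy of the input.
my_prog my_input_file.txt
# Stage the results back to the head node before the job exits.
scp my_output_file.txt beach.colorado.edu:Output/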
TORQUE Commands
To use TORQUE, you will probably want to add the location of its commands to your path. For the CSDMS HPCC, this directory is:
- TORQUE: /opt/torque/bin
If you are using modules, load the torque module,
> module load torque
This will set up your environment to use TORQUE.
Frequently Used Commands
Command | Description |
---|---|
qsub [script] | Submit a PBS job |
qstat [job_id] | Show status of PBS batch jobs |
qdel [job_id] | Delete a PBS batch job |
qhold [job_id] | Hold PBS batch jobs |
qrls [job_id] | Release hold on PBS batch jobs |
Check Queue and Job Status
Command | Description |
---|---|
qstat -q | List all queues |
qstat -a | List all jobs |
qstat -au <userid> | List jobs owned by <userid> |
qstat -r | List running jobs |
qstat -f <job_id> | List full information about job_id |
qstat -Qf <queue> | List full information about queue |
qstat -B | List summary status of the job server |
pbsnodes | List status of all compute nodes |
Job Submission Options for qsub
When submitting a job to the queue with qsub, you can specify options either within your script or as command line options. If given within the script, they must be at the beginning of the script and preceded by #PBS (as shown in the following table). If given on the command line, drop the #PBS and just use the option as usual, as in the example following the table.
Command | Description |
---|---|
#PBS -N myjob | Set the job name |
#PBS -m ae | Send mail when the job aborts or ends |
#PBS -M your@email.address | Mail to this address |
#PBS -l nodes=4 | Allocate specified number of nodes |
#PBS -l file=150gb | Allocate disk space on nodes |
#PBS -l walltime=1:00:00 | Inform the PBS scheduler of the expected runtime |
#PBS -t 0-5 | Start a job array with IDs that range from 0 to 5 |
#PBS -l host=<hostname> | Run your job on a specific host (cl1n0[1-64]-ib) |
#PBS -V | Export all environment variables to the batch job |
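For example, instead of embedding directives in the script, you could give the equivalent options on the command line:
> qsub -N myjob -l nodes=4 -l walltime=1:00:00 run_my_prog.sh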
Basic Usage
TORQUE dynamically allocates resources for your job. All you need to do is submit it to the queue (with qsub) and it will find the resources for you. Note, though, that TORQUE is not aware of the details of the program you want to run, so you may need to tell it what resources you require (memory, nodes, CPUs, etc.).
Submitting a job
To submit a job to the queue you must write a shell script that TORQUE will use to run your program. In its simplest form, a TORQUE command file would look like the following:
#!/bin/sh
my_prog
This shell script simply runs the program, my_prog. To submit this job to the queue, use the qsub command,
> qsub run_my_prog.sh
where the contents of the file run_my_prog.sh are the code snippet above. TORQUE will respond with the job number and location,
45.beach.colorado.edu
In this case TORQUE has identified your job with job number 45. You have now submitted your job to the default queue, and it will be run as soon as resources are available for it. By default, the standard output and error of your script are redirected to files in your home directory. They will have the names <job_name>.o<job_no> and <job_name>.e<job_no> for standard output and error, respectively. Thus, for our example, standard output will be written to run_my_prog.sh.o45, and standard error will be written to run_my_prog.sh.e45.
Deleting a job
If you want to delete a job that you already submitted, use the qdel command. This immediately removes your job from the queue and kills it if it is already running. To delete the job from the previous example (job number 45),
> qdel 45
Check the status of a job
Use qstat to check the status of a job. This returns a brief status report of all your jobs that are either queued or running. For example,
> qstat
Job id Name User Time Use S Queue
------------------------- ---------------- --------------- -------- - -----
45.beach.colorado.edu STDIN username 0 R workq
46.beach.colorado.edu STDIN username 0 Q workq
In this case, job number 45 is running ('R'), and job number 46 is queued ('Q'). Both have been submitted to the workq.
Advanced Usage
As mentioned before, TORQUE is not aware of what resources your program will need, so you may need to give it some hints. This can be done on the command line when calling qsub or within your TORQUE command file. TORQUE will parse comments within your command file of the form #PBS. Text that follows this is interpreted as if it were given on the command line with the qsub command. Please see the qsub man page for a full list of options (man qsub).
Job Submission: Options in the shell script can be used to customize your job. Continuing with the example of the previous section, the command script could be customized as follows:
#!/bin/sh
#PBS -N example_job
#PBS -l mem=2gb
#PBS -o my_job.out
#PBS -e my_job.err
my_prog
Here we rename the job to example_job, tell TORQUE that the job will use 2 GB of memory, and redirect standard output and error to the files my_job.out and my_job.err, respectively. TORQUE looks for lines that begin with #PBS at the beginning of your command file (ignoring a first line starting with #!). Once it encounters a non-blank line that is not a comment, it ignores any further directives. Some commonly used directives are listed below:
#PBS -r n # The job is not rerunnable
#PBS -r y # The job is rerunnable
#PBS -q testq # The queue to submit to
#PBS -N testjob # The name of the job
#PBS -o testjob.out # The file to print the output to
#PBS -e testjob.err # The file to print the error to
#PBS -m abe # Send email at these points in the execution
#PBS -M me@colorado.edu # Whom to email
#PBS -l walltime=01:00:00 # Specify the walltime
#PBS -l pmem=100mb # Memory allocation for the Job
#PBS -l nodes=4 # Number of nodes to allocate
#PBS -l nodes=4:ppn=3 # Number of nodes and the number processors per node
You can use any of the above options in the script to customize your job. If all of the above options were used, the job would be named testjob and placed in the testq queue. It would be allowed to run for at most 1 hour, and mail would be sent to me@colorado.edu when the job begins, ends, or is aborted. It would use 4 nodes with 3 processors per node, for a total of 12 processors, with 100 MB of memory per process.
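For example, a command file combining a consistent subset of these directives (my_prog is a placeholder for your own program) might look like:
#!/bin/sh
#PBS -N testjob
#PBS -q testq
#PBS -o testjob.out
#PBS -e testjob.err
#PBS -m abe
#PBS -M me@colorado.edu
#PBS -l walltime=01:00:00
#PBS -l pmem=100mb
#PBS -l nodes=4:ppn=3
my_prog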
Job Arrays
Sometimes you may want to submit a large number of jobs based on the same script. An example might be a Monte Carlo simulation where each simulation uses a different input file or set of input files. TORQUE uses job arrays to handle this situation. Job arrays allow the user to submit a large number of jobs with a single qsub command. For example,
> qsub -t 10-23 my_job_script.sh
would submit 14 jobs to the queue, with each job sharing the same script and running in a similar environment. When the script is run for each job, TORQUE defines the environment variable PBS_ARRAYID, which is set to the array index of the job. For the above example, the array indices would range from 10 to 23. The script is then able to use the PBS_ARRAYID variable to take particular action depending on its ID. For instance, it could select the input files identified by that ID, as in the sketch below.
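A minimal sketch of a script that uses PBS_ARRAYID to select its input file (my_prog and the input file names are placeholders):
#!/bin/sh
#PBS -t 10-23
# Each job in the array picks the input file that matches its array index,
# e.g. input_10.txt for the job with PBS_ARRAYID=10.
IN_FILE=${PBS_O_HOME}/Input/input_${PBS_ARRAYID}.txt
my_prog ${IN_FILE}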
TORQUE references the set of jobs generated by such a command with a slightly different naming convention,
> qsub -t 100,102-105 my_job_script.sh
45.beach.colorado.edu
> qstat
45-100.beach.colorado.edu ...
45-102.beach.colorado.edu ...
45-103.beach.colorado.edu ...
45-104.beach.colorado.edu ...
45-105.beach.colorado.edu ...
You can now refer to all of the jobs as a group or individual jobs. For example, if you would like to stop all of the jobs
> qdel 45
If you would like to stop a single job of the group
> qdel 45-103
Environment variables
The qsub command passes a limited set of environment variables from your shell environment to the job. They include:
Variable Name | Description |
---|---|
HOME | The path to your home directory. |
LANG | Sets the locale; e.g., en_US.UTF-8. |
LOGNAME | Your login name. |
PATH | A list of directories to be searched for executables. |
SHELL | The shell that you're using; e.g., bash, csh, tcsh. |
Torque also defines a set of environment variables. You can use these environment variables in PBS directives or in commands. For example,
#!/bin/sh
#PBS -N example_job
#PBS -l mem=2gb
#PBS -o $PBS_JOBNAME.out
#PBS -e $PBS_JOBNAME.err
IN_FILE=${PBS_O_HOME}/my_input_file.txt
my_prog ${IN_FILE}
The following environment variables relate to the machine on which qsub was executed:
Variable Name | Description |
---|---|
PBS_O_HOST | The name of the host machine. |
PBS_O_LOGNAME | The login name of the user running qsub. |
PBS_O_HOME | Home directory of the user running qsub. |
PBS_O_WORKDIR | The working directory (the directory where qsub was executed). |
PBS_O_QUEUE | The original queue to which the job was submitted. |
The following variables relate to the environment on the machine where the job is to be run:
Variable Name | Description |
---|---|
PBS_ENVIRONMENT | Evaluates to PBS_BATCH for batch jobs and to PBS_INTERACTIVE for interactive jobs. |
PBS_JOBID | The identifier that PBS assigns to the job. |
PBS_JOBNAME | The name of the job. |
PBS_NODEFILE | The file containing the list of nodes assigned to a parallel job. |
PBS_ARRAYID | The ID assigned to a job within a job array. |
Check the status of a job
TORQUE provides the qstat command to check the status of your jobs. Please see the qstat man page for a full list of options (man qstat). Some useful options that were not listed above include:
Option | Description |
---|---|
qstat -n | Show which nodes are allocated to each job. |
qstat -f | Show a full status display. |
qstat -u <userid> | Show status for jobs owned by the specified user. |
qstat -q <queue> | Show status for a particular queue. |
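These options can be combined. For example, to list your own jobs together with the nodes allocated to each:
> qstat -n -u <userid>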
Example TORQUE Scripts
Serial job with lots of I/O
Because /home and /scratch are NFS mounted on the compute nodes through the head node, file I/O to these disks can be slow. Furthermore, excessive I/O to these disks can cause the head node to become completely unresponsive. This is bad. Each compute node has a locally mounted disk (/state/partition1), so if your job does a lot of I/O, please use these local disks rather than /scratch or /home.
All TORQUE jobs define the environment variable TMPDIR that contains the path to a temporary directory that was created on each of your job's nodes. After your job completes, this directory (along with everything underneath it) is automatically removed. Use this environment variable to control where your job writes data (don't forget to move any data you want to save to a permanent location!). To make sure you have enough disk space, you can ask TORQUE to allocate a certain amount of space in the same way that you ask for other computational resources. This is done with the file keyword.
The following is an example of a script that uses the TMPDIR variable and requests disk space.
#! /bin/sh
# Request nodes and processors per node
#PBS -l nodes=1:ppn=1
# Request disk space on compute nodes
#PBS -l file=150gb
# Move to the temporary directory and run job
cd $TMPDIR && my_serial_prog
# Copy output back to home dir before exiting (TMPDIR will be automatically removed when this script exits)
cp $TMPDIR/* $HOME/Output
Serial MATLAB job
Running a MATLAB script through TORQUE is easy; you just have to use the proper options when running MATLAB. Note that you need to have an exit call at the end of your MATLAB function. If you forget this, your job will never complete.
#! /bin/sh
#PBS -l nodes=1:ppn=1
RUNDIR=$HOME/my_simulation_dir
MATLAB_FUNCTION=hello_world
cd $RUNDIR && matlab -r $MATLAB_FUNCTION -nodesktop -nosplash
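If you would rather not edit the MATLAB function itself, one variation is to append the exit call to the command string passed to -r, for example:
cd $RUNDIR && matlab -nodesktop -nosplash -r "${MATLAB_FUNCTION}; exit"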
Array of serial jobs
An example script for an array of serial jobs.
#! /bin/sh
## Create a job array of two jobs with IDs 0 and 5
#PBS -t 0,5
## The maximum amount of memory required for the job
#PBS -l mem=30gb
## Send email when the job is aborted, started, or stopped
#PBS -m abe
## Send email here
#PBS -M myname@gmail.com
# This is the sedflux version to run.
SEDFLUX=/data/progs/sedflux/mars/bin/sedflux
# Get input files from here.
INPUT_DIR=${PBS_O_HOME}/Job_Input/
# Put output files here.
OUTPUT_DIR=${PBS_O_HOME}/Job_Output/
# The base work directory. This is the local disk for each node.
WORK_DIR=/data2/
# This simulation number provides a key to a particular set of input files.
SIM_NO=${PBS_ARRAYID}
# Run the simulation here.
SIM_DIR=myname${SIM_NO}
# The input files for this particular simulation are here.
INPUT_FILES=${INPUT_DIR}/sim${SIM_NO}/
## Set up a simulation.
# Create a simulation directory, and copy input files into it.
setup()
{
echo "Transferring input to compute node..."
echo "${INPUT_FILES} -> ${SIM_DIR}"
cd ${WORK_DIR} && \
mkdir -p ${SIM_DIR} && \
cp ${INPUT_FILES}/* ${SIM_DIR}
}
## Cleanup after a simulation.
# Create an output directory, tar the simulation directory, and remove
# the simulation directory (and everything within it).
teardown()
{
echo "Transferring output to server and cleaning up..."
mkdir -p ${OUTPUT_DIR} && \
cd ${WORK_DIR} && \
tar --create --gzip --file ${OUTPUT_DIR}/${SIM_DIR}.tar.gz ${SIM_DIR} && \
rm -r ${SIM_DIR}
}
## Run the simulation
# Move to the simulation directory, run sedflux, and move back to the work
# directory.
run()
{
echo "Running program in ${SIM_DIR} on node ${PBS_NODENUM}..."
cd ${SIM_DIR} && \
${SEDFLUX} -3 -i mars_init.kvf --msg="A test run using PBS"
}
setup
run
teardown
Parallel openmpi job
An example script for submitting a parallel openmpi job to the queue using qsub.
#!/bin/sh
## Specify the number of nodes and the number of processors
## per node to allocate for this job.
#PBS -l nodes=4:ppn=8
NCPU=`wc -l < $PBS_NODEFILE`
NNODES=`uniq $PBS_NODEFILE | wc -l`
MPIRUN=/usr/local/openmpi/bin/mpirun
CMD="$MPIRUN -n $NCPU"
echo "--> Running on nodes " `uniq $PBS_NODEFILE`
echo "--> Number of available cpus " $NCPU
echo "--> Number of available nodes " $NNODES
echo "--> Launch command is " $CMD
$CMD my_mpi_prog
Parallel mpich2 job
An example script for submitting a parallel MPICH2 job. Note that if you are using MPICH2, you should have a file called .mpd.conf in your home directory.
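If you do not already have this file, it typically contains a single secret-word line and must be readable and writable only by you. A sketch of creating it (the secret word is an arbitrary string of your choosing):
> echo "MPD_SECRETWORD=choose_a_secret" > ~/.mpd.conf
> chmod 600 ~/.mpd.conf
An example submission script: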
#!/bin/sh
#PBS -l nodes=4:ppn=8
NCPU=`wc -l < $PBS_NODEFILE`
NNODES=`uniq $PBS_NODEFILE | wc -l`
MPICHPREFIX=/usr/local/mpich
MPIRUN=$MPICHPREFIX/bin/mpirun
MPICHCMD="$MPIRUN -np $NCPU"
echo "Running on nodes " `uniq $PBS_NODEFILE`
echo "Number of available cpus " $NCPU
echo "Number of available nodes " $NNODES
echo "Launch command " $CMD
start_mpd ()
{
MPDBOOT=$MPICHPREFIX/bin/mpdboot
MPDTRACE=$MPICHPREFIX/bin/mpdtrace
MPDRINGTEST=$MPICHPREFIX/bin/mpdringtest
echo '--> Starting up mpd daemons '
export MPD_CON_EXT=${PBS_JOBID}
$MPDBOOT -n ${NNODES} -f ${PBS_NODEFILE} -v --remcons && \
$MPDTRACE -l && \
$MPDRINGTEST 100
}
start_mpd
$MPICHCMD my_mpich_prog
Parallel mvapich2 job
An example script for submitting an MVAPICH2 program with qsub.
#!/bin/sh
#PBS -l nodes=12:ppn=7
NCPU=`wc -l < $PBS_NODEFILE`
NNODES=`uniq $PBS_NODEFILE | wc -l`
MPIPREFIX=/usr/local/mvapich2
MPIRUN=$MPIPREFIX/bin/mpirun_rsh
echo "Running on nodes " `uniq $PBS_NODEFILE`
echo "Number of available cpus " $NCPU
echo "Number of available nodes " $NNODES
echo "Launch command " $CMD
$MPIRUN -np $NCPU -hostfile $PBS_NODEFILE ~/mpi_test/trap
Monitoring the CSDMS HPCC (Beach)
Beach is equipped with a monitoring system named Ganglia. Ganglia reports real-time information on how heavily the Beach cluster is being used, both overall and on a per-node basis. You can see the activity on Beach by going to the following site: http://csdms.colorado.edu/ganglia