Programs are scheduled to run on Orbital using the sbatch command, a component of Slurm. You should not specify a queue directly; your job will be placed in the appropriate queue based on the requirements you describe. For further information, see the Usage Guidelines section and the sbatch man page.
The Intel and GNU compilers are installed on Orbital. The standard MPI implementation on Orbital is Open MPI, an open-source implementation of the MPI standard that supports the InfiniBand interconnect.
To set up your environment, it is highly recommended to use the module facility, a utility that configures your environment correctly without requiring you to know the paths to the executables. Different environments can be set quickly, allowing useful comparisons of code compiled with different compilers. In most cases a simple module load openmpi command is all that is needed; it sets up your environment to use the latest Open MPI as well as the Intel compilers.
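A typical module session looks like the following (a sketch; the exact module names available on Orbital may differ):

module avail            # list the modules available on the system
module load openmpi     # load Open MPI and the Intel compilers
module list             # confirm which modules are currently loaded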
Compiling Parallel MPI programs
module load openmpi (loads the openmpi environment as well as the Intel compilers)
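Loading the module puts the Open MPI wrapper compilers (mpicc for C, mpif90 for Fortran) on your PATH; the wrappers supply the MPI include and library flags automatically. As a minimal sketch, assuming a hypothetical source file hello_mpi.c:

/* hello_mpi.c - minimal MPI program: each task reports its rank */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);                 /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this task's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of tasks */
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();                         /* shut down cleanly */
    return 0;
}

Compile it with the wrapper:

mpicc hello_mpi.c -o hello_mpi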
Once the executable is compiled, a job script needs to be created for the scheduler. On this machine there are 12 processors per node. Here is a sample script which uses 32 processors, allocated as 1 processor on each of 32 nodes:
#!/bin/bash
#SBATCH -N 32 # node count
#SBATCH --ntasks-per-node=1
#SBATCH -t 1:00:00
#SBATCH --mail-type=begin
#SBATCH --mail-type=end
#SBATCH --mail-user=yourNetID@princeton.edu

module load openmpi
cd /home/yourNetID/mpi_directory
srun ./a.out
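Because each node has 12 processors, the same tasks can also be packed onto fewer nodes. As a sketch (adjust the counts to your job's needs), replacing the two allocation lines with

#SBATCH -N 3
#SBATCH --ntasks-per-node=12

requests 36 tasks on 3 fully occupied nodes, while a single #SBATCH -n 32 line requests 32 tasks and lets Slurm choose the node layout.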
To submit the job to the batch queuing system, pass the script to sbatch:
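Assuming the script above was saved as my_mpi_job.cmd (a hypothetical name; any file name will do):

sbatch my_mpi_job.cmd

On success, sbatch prints the ID assigned to the job.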
To monitor jobs, use showq, slurmtop, or any of the utilities below:
| Command | Purpose |
| --- | --- |
| sinfo | Shows how nodes are being used. |
| sshare/sprio | Shows the priority assigned to queued jobs. |
| squeue or qstat | Shows jobs in the queues. |
| smap/sview | A graphical display of the queues. |
| slurmtop | A text-based view of the cluster nodes. |
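For example, to list only your own jobs (substitute your NetID):

squeue -u yourNetID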