The head node, della, should be used only for interactive work such as compiling programs and submitting jobs as described below. No jobs should be run on the head node other than brief tests that last no more than a few minutes. Where practical, we ask that you fill nodes completely so that CPU core fragmentation is minimized.
Access from the campus wireless network is restricted.
All jobs must be run through the Slurm scheduler on Della. If a job would exceed any of the limits below, it will be held until it is eligible to run. Jobs should not specify the QOS in which they should run; this allows the Slurm scheduler to distribute jobs appropriately (see the example script after the limits below).
Jobs are assigned a quality of service (QOS) according to the run time they request:
1 hour limit: maximum of 30 nodes (360 cores) per allocation and 2 jobs per user; NOT for production runs
24 hour limit: maximum of 40 jobs and 128 cores per user; no limit on total cores
72 hour limit: maximum of 16 jobs and 128 cores per user; 432 total cores
6 day limit: maximum of 10 jobs and 160 cores per user; 400 total cores
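For example, a minimal batch script (the job name and executable below are placeholders) that lets Slurm pick the QOS from the requested walltime might look like this:

    #!/bin/bash
    #SBATCH --job-name=myjob          # name shown by squeue
    #SBATCH --nodes=2                 # number of nodes
    #SBATCH --ntasks-per-node=12      # fill each 12-core node to avoid fragmentation
    #SBATCH --time=23:59:00           # requested walltime; under 24 hours, so the 24-hour limits apply
    # note: no --qos line; the scheduler assigns the QOS from the requested time

    srun ./myprogram                  # myprogram stands in for your executable

Submit the script with 'sbatch' and check its state with 'squeue -u myusername'.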
Jobs are further prioritized by the Slurm scheduler based on a number of factors: job size, requested run time, node availability, wait time, and percentage of usage over a 30-day period (fairshare).
Distribution of CPU and memory
There are 2816 processor cores available:
della-r1c1n1 through della-r1c4n16: 64 nodes, 12 cores per node, 48 GB per node (4 GB per core)
della-r2c1n1 through della-r2c4n16: 64 nodes, 12 cores per node, 96 GB per node (8 GB per core)
della-r3c1n1 through della-r3c4n16: 64 nodes, 20 cores per node, 128 GB per node (6.4 GB per core)
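As a sketch (the usable memory per node is somewhat less than the totals above because some is reserved for the operating system), you can steer a job toward the higher-memory nodes by requesting memory per core; Slurm will then only place the job on nodes that can satisfy the request:

    #SBATCH --ntasks-per-node=12
    #SBATCH --mem-per-cpu=7G    # 12 x 7 GB = 84 GB per node, so only the 96 GB and 128 GB nodes qualify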
The nodes are connected with QDR InfiniBand, except for the Ivy Bridge nodes (20 cores per node), which are connected with FDR InfiniBand.
/home (shared via NFS to all the compute nodes) is intended for scripts, source code, executables and small static data sets that may be needed as standard input/configuration for codes.
/scratch/network (shared via NFS to all the compute nodes) is intended for dynamic data that does not require high-bandwidth I/O, such as final output from a compute job. You may create a directory /scratch/network/myusername and use it for your temporary files. Files are NOT backed up, so this data should be moved to persistent storage once it is no longer needed for continued computation. Any files left here will be removed after 60 days.
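A minimal sketch of that workflow (myusername and the file name are placeholders):

    mkdir -p /scratch/network/myusername                                    # one-time setup of your own directory
    cp final_output.dat /scratch/network/myusername/                        # park job output here temporarily
    cp /scratch/network/myusername/final_output.dat /tigress/myusername/   # move data you want to keep to persistent storage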
/scratch/gpfs (shared via GPFS to all the compute nodes, 260 TB) is intended for dynamic data that requires higher-bandwidth I/O. Files are NOT backed up, so this data should be moved to persistent storage as soon as it is no longer needed for computations. Any files left here will be removed after 180 days.
/tigress (shared via GPFS to all TIGRESS resources, 2.5 PB) is intended for more persistent storage and provides high-bandwidth I/O (10 GB/s aggregate bandwidth for jobs spanning 16 or more nodes). Users are given a default quota of 512 GB when they request a directory on this filesystem, and the default can be increased by request. We ask that you consider what you really need and regularly clean out data that is no longer needed, since this filesystem is shared by the users of all our systems.
/scratch (local to each compute node) is intended for data local to each task of a job, and it should be cleaned out at the end of each job. Nodes della-r1c1n1 through della-r1c4n16 have about 130 GB available, while the others have about 1400 GB.
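One way to use node-local /scratch from inside a single-node job script (paths and file names here are hypothetical) is to stage input onto the local disk, run there, copy the results to persistent storage, and clean up:

    WORKDIR=/scratch/$SLURM_JOB_ID                 # per-job directory on the node-local disk
    mkdir -p $WORKDIR
    cp /tigress/myusername/input.dat $WORKDIR/     # stage input onto the local disk
    cd $WORKDIR
    srun ./myprogram input.dat                     # high-I/O work happens on local /scratch
    cp output.dat /tigress/myusername/             # copy results back to persistent storage
    rm -rf $WORKDIR                                # clean out local /scratch at the end of the job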
Running 3rd-Party Software
If you are running 3rd-party software whose characteristics (e.g., memory usage) you are unfamiliar with, please check your job after 5-15 minutes using 'top' or 'ps -ef' on the compute nodes being used. If the memory usage is growing rapidly or is close to exceeding the per-processor memory limit, you should terminate your job before it causes the system to hang or crash. You can determine which node(s) your job is running on using the 'scontrol show job <jobnumber>' command.
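For example, to find the node(s) a job is running on and then watch its memory use (the job number and node name below are placeholders, and this assumes you can SSH to nodes where your jobs are running):

    scontrol show job 12345 | grep -i nodelist   # shows which node(s) the job was allocated
    ssh della-r1c2n5                             # log in to one of the listed compute nodes
    top -u myusername                            # watch the RES (memory) and %CPU columns for your processes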
Please remember that these are shared resources for all users.