SthenoQueues

From HPC Wiki


The Stheno queues will be re-structured on 24-Jun-2015. The new strategy introduces queues that segregate jobs by anticipated wall clock time (as opposed to CPU time) limits. Jobs that run past their wall clock limits are automatically terminated. Each queue will be allocated an initial number of nodes, but the node counts will be adjusted depending on demand, possibly even dynamically.

ib-short

All users have access to the "ib-short" queue. It is the default queue: if no queue is specified in the job submission script, the job runs in "ib-short". It has a 48-hour wall time limit: jobs running in this queue are terminated after 48 hours.

ib-medium

All users have access to the "ib-medium" queue. It has a 168-hour (7-day) wall time limit: jobs running in this queue are terminated after 168 hours. Users must specify this queue with "-q ib-medium" on a qsub or qlogin command, or in the qsub submit script by including the following line:

#$ -q ib-medium
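For context, a minimal submit script using this directive might look like the following sketch. The job name, options other than "-q", and the payload command are illustrative placeholders, not Stheno-specific requirements:

```shell
#!/bin/bash
# Hypothetical SGE submit script targeting the ib-medium queue.
#$ -q ib-medium          # run in the 168-hour queue
#$ -N medium_job         # job name (illustrative)
#$ -cwd                  # run from the submission directory
#$ -j y                  # merge stdout and stderr into one file

# Payload: replace with the real application command.
result="running on queue ib-medium"
echo "$result"
```

The script would then be submitted with "qsub", e.g. "qsub medium_job.sh"; the "#$" lines are comments to the shell but directives to the scheduler.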

ib-long

The "ib-long" queue has no wall clock limit, but jobs running in it are impacted by the monthly maintenance cycle.

Access to this queue is by request: send a message to arcs@njit.edu.

Users must specify this queue with "-q ib-long" on a qsub or qlogin command, or in the qsub submit script by including the following line:

#$ -q ib-long
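A submit script for this queue might look like the sketch below. Since ib-long jobs can outlive the monthly maintenance cycle, the payload here illustrates planning around it with periodic checkpointing; the interval and all options other than "-q" are assumptions:

```shell
#!/bin/bash
# Hypothetical submit script for the unlimited ib-long queue
# (access must first be granted via arcs@njit.edu).
#$ -q ib-long
#$ -cwd

# Long-running payload goes here; because of the monthly
# maintenance cycle, checkpointing periodically is prudent.
checkpoint_interval_hours=24    # illustrative value
echo "checkpointing every ${checkpoint_interval_hours}h"
```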

ib-gpu

This queue has a 168-hour (7-day) wall time limit: jobs running in this queue are terminated after 168 hours.

Three of Stheno's nodes contain twin GPUs and 12 or 20 CPU cores each. These nodes are in contention for both GPU and SMP jobs, so we are still observing their usage in order to devise a fair-use policy.

  • Access to this queue is by request to arcs@njit.edu
  • You may use at most two of the six GPUs simultaneously (intended for GPU jobs).
  • You may use at most 10 CPU cores simultaneously (intended for SMP jobs).
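A submit script for this queue might resemble the sketch below. Note that the GPU resource name ("gpu") and the parallel environment name ("smp") are assumptions for illustration; the actual complex and PE names are site-specific and should be confirmed with ARCS:

```shell
#!/bin/bash
# Hypothetical submit script for the ib-gpu queue
# (access is by request to arcs@njit.edu).
#$ -q ib-gpu
#$ -l gpu=1        # request one GPU -- "gpu" is an assumed complex name
#$ -pe smp 4       # example core count, within the 10-core cap; "smp" is an assumed PE name
#$ -cwd

# Payload: replace with the real CUDA application.
requested_gpus=1
echo "would run a CUDA binary here with ${requested_gpus} GPU(s)"
```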

Please refer to this page for updates on the GPU queue policy.

Please refer to Running CUDA Samples on Kong for examples of how to specify the GPU queue and number of GPUs desired.