SchedulerIntro

From HPC Wiki

Scheduler/Resource Manager Introduction

A high performance computing (HPC) cluster is composed of many compute nodes, with many users submitting many (often very large) jobs. The cluster must have a mechanism for distributing jobs across the nodes in a way that satisfies many competing constraints - this is the responsibility of a program called a scheduler.

The scheduler has a complicated task: different jobs can have very different requirements (e.g., number of CPU cores, GPUs, amount of memory, parallel execution), as well as differing priorities.
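As a concrete illustration, such requirements are typically expressed as directives in a batch script, in the style of the SLURM workload manager mentioned below. The job name and resource values here are hypothetical:

```shell
#!/bin/bash
#SBATCH --job-name=example   # hypothetical job name
#SBATCH --nodes=1            # number of compute nodes
#SBATCH --ntasks=4           # number of parallel tasks
#SBATCH --mem=8G             # memory per node
#SBATCH --time=01:00:00      # wall-clock time limit

# The payload below is ordinary shell; the #SBATCH lines above are
# comments to the shell and are read only by the scheduler at
# submission time.
msg="Job payload running on $(hostname)"
echo "$msg"
```

The script is submitted with `sbatch`, at which point the scheduler reads the directives and queues the job until matching resources are available.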

Since large parallel jobs must be able to run, the scheduler needs to be able to reserve nodes for them. For example, if a user submits a job requiring 100 nodes and only 90 nodes are currently free, the scheduler may need to keep other jobs off those 90 free nodes so that the 100-node job can eventually start.
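The reservation decision in that example can be sketched in a toy way (the node counts are the ones from the example; real schedulers use far more sophisticated logic, such as backfill):

```shell
# Toy sketch of the reservation decision described above
free=90     # nodes currently idle
needed=100  # nodes the pending large job requires

if [ "$free" -lt "$needed" ]; then
  shortfall=$(( needed - free ))
  # Hold the idle nodes back from smaller jobs until enough
  # running jobs finish to cover the shortfall
  echo "Holding $free idle nodes; waiting for $shortfall more to free up"
fi
```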

The scheduler must also account for nodes that are down, or that lack sufficient resources for a particular job. For this reason a resource manager is also needed, which can either be integrated with the scheduler or run as a separate program.

The scheduler also needs to interface with an accounting system (which can likewise be integrated into the scheduler) to charge allocations for time used on the cluster.
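Charging is commonly done in units such as core-hours: the number of CPU cores a job occupied multiplied by how long it ran. The exact unit and rates vary by site; the numbers below are made up for illustration:

```shell
# Hypothetical core-hour charge for one completed job
nodes=4
cores_per_node=32
hours=2
core_hours=$(( nodes * cores_per_node * hours ))
echo "Job charged ${core_hours} core-hours against the allocation"
```

Here a job that held 4 nodes of 32 cores each for 2 hours is charged 256 core-hours, whether or not the job actually kept every core busy.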

Users interact with the scheduler and/or resource manager whenever they submit a job, query the status of their jobs or of the cluster as a whole, or manage their jobs.

The SLURM workload manager incorporates the scheduler, resource manager, and accounting system into a single program.