
Slurm Workload Manager

From Wikipedia, the free encyclopedia


Slurm
Stable release: 18.08.1, 17.11.10
Written in: C
Operating system: Linux, BSDs
Type: Job scheduler for clusters and supercomputers
License: GNU General Public License
Website: slurm.schedmd.com

The Slurm Workload Manager, formerly known as the Simple Linux Utility for Resource Management (SLURM), is a free and open-source job scheduler for Linux and Unix-like kernels, used by many of the world's supercomputers and computer clusters.

It provides three key functions (a minimal example script follows the list):

  • allocating exclusive and/or non-exclusive access to resources (computer nodes) to users for some duration of time so they can perform work,
  • providing a framework for starting, executing, and monitoring work (typically a parallel job such as MPI) on a set of allocated nodes, and
  • arbitrating contention for resources by managing a queue of pending jobs.
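
A minimal batch script illustrates all three functions: the #SBATCH directives request an allocation of resources, srun starts and monitors the parallel work on it, and the job waits in the pending queue until resources are free. The options shown are standard sbatch options; my_mpi_program is a placeholder for an actual MPI executable.

  #!/bin/bash
  #SBATCH --job-name=example      # name shown in the queue
  #SBATCH --nodes=2               # request access to two compute nodes
  #SBATCH --ntasks=8              # total number of tasks (e.g. MPI ranks) to launch
  #SBATCH --time=00:10:00         # time limit on the allocation

  srun ./my_mpi_program           # launch the parallel work on the allocated nodes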

Slurm is the workload manager on about 60% of the TOP500 supercomputers.[citation needed]

Slurm uses a best-fit algorithm based on Hilbert curve scheduling or fat-tree network topology in order to optimize locality of task assignments on parallel computers.[1]

History

Slurm began development as a collaborative effort primarily by Lawrence Livermore National Laboratory, SchedMD,[2] Linux NetworX, Hewlett-Packard, and Groupe Bull as a free-software resource manager. It was inspired by the closed-source Quadrics RMS and shares a similar syntax. The name is a reference to the soda in Futurama.[3] Over 100 people around the world have contributed to the project. It has since evolved into a sophisticated batch scheduler capable of satisfying the requirements of many large computer centers.

As of November 2017, the TOP500 list of the most powerful computers in the world indicates that Slurm is the workload manager on six of the top ten systems, including the number one system, Sunway TaihuLight, with 10,649,600 computing cores.

Structure

Slurm's design is very modular with about 100 optional plugins. In its simplest configuration, it can be installed and configured in a couple of minutes. More sophisticated configurations provide database integration for accounting, management of resource limits and workload prioritization.
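
To give a flavor of such a configuration, the sketch below shows a minimal hypothetical slurm.conf that selects a few of these plugins. All host and node names are invented, and a real site would tune many more parameters.

  ClusterName=mycluster
  SlurmctldHost=head01                     # machine running the slurmctld control daemon
  AuthType=auth/munge                      # authentication plugin
  SchedulerType=sched/backfill             # backfill scheduling plugin
  SelectType=select/cons_res               # consumable-resource selection plugin
  AccountingStorageType=accounting_storage/slurmdbd  # database accounting via slurmdbd
  NodeName=node[01-04] CPUs=16 State=UNKNOWN
  PartitionName=debug Nodes=node[01-04] Default=YES MaxTime=01:00:00 State=UP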

Notable features

Notable Slurm features include the following:[citation needed]

  • No single point of failure, backup daemons, fault-tolerant job options
  • Highly scalable (schedules up to 100,000 independent jobs on the 100,000 sockets of IBM Sequoia)
  • High performance (up to 1000 job submissions per second and 600 job executions per second)
  • Free and open-source software (GNU General Public License)
  • Highly configurable with about 100 plugins
  • Fair-share scheduling with hierarchical bank accounts
  • Preemptive and gang scheduling (time-slicing of parallel jobs)
  • Integrated with database for accounting and configuration
  • Resource allocations optimized for network topology and on-node topology (sockets, cores and hyperthreads)
  • Advanced reservation
  • Idle nodes can be powered down
  • Different operating systems can be booted for each job
  • Scheduling for generic resources (e.g. graphics processing units)
  • Real-time accounting down to the task level (identify specific tasks with high CPU or memory usage)
  • Resource limits by user or bank account
  • Accounting for power usage by job
  • Support for IBM Parallel Environment (PE/POE)
  • Support for job arrays (see the example script after this list)
  • Job profiling (periodic sampling of each task's CPU use, memory use, power consumption, network and file system use)
  • Sophisticated multifactor job prioritization algorithms
  • Support for MapReduce+
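
As a rough sketch of how several of these features combine in practice, the hypothetical script below submits a job array in which each element requests one GPU as a generic resource and may be automatically requeued on failure; process_chunk is a placeholder program.

  #!/bin/bash
  #SBATCH --array=0-9             # job array: ten elements indexed 0..9
  #SBATCH --gres=gpu:1            # generic resource scheduling: one GPU per element
  #SBATCH --requeue               # permit automatic requeue, a fault-tolerant job option

  srun ./process_chunk ${SLURM_ARRAY_TASK_ID}   # Slurm sets SLURM_ARRAY_TASK_ID per element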

The following features were announced for version 14.11 of Slurm, which was released in November 2014:[4]

  • Improved job array data structure and scalability
  • Support for heterogeneous generic resources
  • User options to set the CPU governor
  • Automatic job requeue policy based on exit value
  • Reporting of API use by user, type, count, and time consumed
  • Communication gateway nodes to improve scalability

Supported platforms

Slurm is primarily developed to work alongside Linux distributions, although there is also support for a few other POSIX-based operating systems, including BSDs (FreeBSD, NetBSD and OpenBSD).[5] Slurm also supports several unique computer architectures.

License

Slurm is available under the GNU General Public License V2.

Commercial support

In 2010, the developers of Slurm founded SchedMD, which maintains the canonical source and provides development, level-3 commercial support, and training services. Commercial support is also available from Bright Computing, Bull, Cray, and Science + Computing.

References

  1. ^ Pascual, Jose Antonio; Navaridas, Javier; Miguel-Alonso, Jose (2009). Effects of Topology-Aware Allocation Policies on Scheduling Performance. Job Scheduling Strategies for Parallel Processing. Lecture Notes in Computer Science. Vol. 5798. pp. 138–144. doi:10.1007/978-3-642-04633-9_8. ISBN 978-3-642-04632-2.
  2. ^ "Slurm Commercial Support, Development, and Installation". SchedMD. Retrieved 2014-02-23.
  3. ^ "SLURM: Simple Linux Utility for Resource Management" (PDF). 23 June 2003. Retrieved 11 January 2016.
  4. ^ "Slurm - What's New". SchedMD. Retrieved 2014-08-29.
  5. ^ Slurm Platforms


SLURM Commands

The following is a list of useful commands available for SLURM. Some of these were built by CCR to allow easier reporting for users.

For usage information for these commands, use --help (example: sinfo --help)

Use the Linux command 'man' for more information about most of these commands (example: man sinfo)

In the commands below, placeholder names such as jobid and username stand for user-supplied information; brackets indicate optional flags.

  • List SLURM commands: slurmhelp
  • View information about SLURM nodes & partitions: sinfo [-p partition_name or -M cluster_name]
  • List example SLURM scripts: ls -p /util/slurm-scripts | less
  • Submit a job script for later execution: sbatch script-file
  • Cancel a pending or running job: scancel jobid
  • Check the state of a user's jobs: squeue --user=username
  • Allocate compute nodes for interactive use: salloc
  • Run a command on allocated compute nodes: srun
  • Display node information: snodes [node cluster/partition state]
  • Launch an interactive job: fisbatch [various sbatch options]
  • List priorities of queued jobs: sranks
  • Get the efficiency of a running job: sueff user-name
  • Get SLURM accounting information for a user's jobs from start date to now: suacct start-date user-name
  • Get SLURM accounting and node information for a job: slist jobid
  • Get resource usage and accounting information for a user's jobs from start date to now: slogs start-date user-list
  • Get estimated starting times for queued jobs: stimes [various squeue options]
  • Monitor performance of a SLURM job: /util/ccrjobvis/slurmjobvis jobid
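
As a brief illustration of the basic submit/monitor/cancel cycle, a session might look like the following; the job ID is made up, and myjob.sh is a placeholder script name.

  sbatch myjob.sh                 # submit; Slurm replies "Submitted batch job 123456"
  squeue --user=$USER             # check the job's state and position in the queue
  stimes                          # estimate when queued jobs will start
  scancel 123456                  # cancel the job by its ID if something is wrong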