Revision as of 22:47, 27 June 2014

SLURM
Stable release: 2.6
Written in: C
Operating system: Linux
Type: Job scheduler for clusters and supercomputers
License: GNU General Public License
Website: slurm.schedmd.com

Simple Linux Utility for Resource Management (SLURM) is a free and open-source job scheduler for the Linux kernel used by many of the world's supercomputers and computer clusters. It provides three key functions. First, it allocates exclusive and/or non-exclusive access to resources (computer nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (typically a parallel job such as MPI) on a set of allocated nodes. Finally, it arbitrates contention for resources by managing a queue of pending jobs.
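The three functions above map directly onto SLURM's user commands. A minimal sketch, assuming a working SLURM installation (node counts and the program name are placeholders):

```shell
# 1. Allocate resources: request 4 nodes for 30 minutes
salloc --nodes=4 --time=30:00

# 2. Start, execute, and monitor work: launch 16 tasks
#    (e.g. an MPI program) on the allocated nodes
srun --ntasks=16 ./my_mpi_program

# 3. Arbitrate contention: inspect the queue of pending
#    and running jobs
squeue --user=$USER
```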

SLURM is the batch system on many of the TOP500 supercomputers, including Tianhe-2, which is currently the world's fastest computer.

SLURM uses a best fit algorithm based on Hilbert curve scheduling or fat tree network topology in order to optimize locality of task assignments on parallel computers.[1]

History

SLURM began development as a collaborative effort primarily by Lawrence Livermore National Laboratory, SchedMD,[2] Linux NetworX, Hewlett-Packard, and Groupe Bull as a Free Software resource manager. It was inspired by the closed source Quadrics RMS and shares a similar syntax. Over 100 people around the world have contributed to the project. It has since evolved into a sophisticated batch scheduler capable of satisfying the requirements of many large computer centers.

As of the June 2013 list of the TOP500 most powerful computers in the world, SLURM is the workload manager on 5 of the top 10 systems, including the most powerful computer of all, Tianhe-2, with 3.1 million cores and 33.9 petaflops at the NUDT. Other systems in the top 10 running SLURM include IBM Sequoia, an IBM Blue Gene/Q with 1.57 million cores and 17.2 petaflops at Lawrence Livermore National Laboratory; Stampede, a 5.17-petaflop Dell computer at the Texas Advanced Computing Center;[3] Vulcan, a 4.29-petaflop IBM Blue Gene/Q at Lawrence Livermore National Laboratory;[4] and Tianhe-I, a 2.56-petaflop system at NUDT.

Structure

SLURM's design is very modular with dozens of optional plugins. In its simplest configuration, it can be installed and configured in a couple of minutes. More sophisticated configurations provide database integration for accounting, management of resource limits and workload prioritization. SLURM also works with several meta-schedulers such as Moab Cluster Suite, Maui Cluster Scheduler, and Platform LSF.
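A simple configuration lives in a single slurm.conf file. The fragment below is an illustrative sketch only — the hostnames, node names, and CPU counts are placeholders, not from the source:

```
# Minimal illustrative slurm.conf (all names and counts hypothetical)
ControlMachine=head-node
AuthType=auth/munge
SchedulerType=sched/backfill
SelectType=select/linear
NodeName=compute[01-04] CPUs=8 State=UNKNOWN
PartitionName=batch Nodes=compute[01-04] Default=YES MaxTime=INFINITE State=UP
```

More elaborate setups add plugin and accounting-database settings to this same file.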

Notable features

  • No single point of failure, backup daemons, fault-tolerant job options
  • Highly scalable (schedules up to 100,000 independent jobs on the 100,000 sockets of IBM Sequoia)
  • High performance (up to 1000 job submissions per second and 600 job executions per second)
  • Free and open-source software (GNU General Public License)
  • Highly configurable with about 100 plugins
  • Fair-share scheduling with hierarchical bank accounts
  • Preemptive and gang scheduling (time-slicing of parallel jobs)
  • Integrated with database for accounting and configuration
  • Resource allocations optimized for network topology and on-node topology (sockets, cores and hyperthreads)
  • Advanced reservation
  • Idle nodes can be powered down
  • Different operating systems can be booted for each job
  • Scheduling for generic resources (e.g. Graphics processing unit)
  • Real-time accounting down to the task level (identify specific tasks with high CPU or memory usage)
  • Accounting for power usage by job
  • Support of IBM Parallel Environment (PE/POE)
  • Support for job arrays
  • Job profiling (periodic sampling of each task's CPU use, memory use, power consumption, network and file system use)
  • Support for MapReduce+
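Several of these features are exercised directly from a batch script. The hypothetical example below (the script contents and program name are placeholders, not from the source) shows a job array combined with a generic-resource (GRES) request for a GPU:

```shell
#!/bin/bash
# Hypothetical batch script illustrating job arrays and
# generic resources; submit with: sbatch this_script.sh
#SBATCH --job-name=demo
#SBATCH --array=1-10          # job array: ten independent array tasks
#SBATCH --gres=gpu:1          # one GPU per array task (generic resource)
#SBATCH --time=00:10:00

# Each array task processes its own chunk, identified by
# the SLURM_ARRAY_TASK_ID environment variable
srun ./process_chunk "$SLURM_ARRAY_TASK_ID"
```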

Coming in SLURM version 13.12 (4Q 2013)

  • Integration with Apache Hadoop + Open MPI based job launch
  • Hot spare nodes and other fault tolerance enhancements for long running jobs
  • Energy efficient scheduling (including job specification of CPU frequency)
  • Integration with FlexNet Publisher (FlexLM License Manager)

Supported platforms

While SLURM was originally written for the Linux kernel, the latest version supports many other operating systems.[5]

SLURM also supports several unique computer architectures.

License

SLURM is available under the GNU General Public License V2.

Commercial support

In 2010, the developers of SLURM founded SchedMD, which maintains the canonical source and provides development, level-3 commercial support, and training services. Commercial support is also available from Bright Computing, Bull, Cray, and Science + Computing.

References

  1. ^ Pascual, Jose Antonio; Navaridas, Javier; Miguel-Alonso, Jose (2009). "Effects of Topology-Aware Allocation Policies on Scheduling Performance". Job Scheduling Strategies for Parallel Processing. Lecture Notes in Computer Science. 5798: 138–144. doi:10.1007/978-3-642-04633-9_8. ISBN 978-3-642-04632-2.
  2. ^ "Slurm Commercial Support, Development, and Installation". SchedMD. Retrieved 2014-02-23.
  3. ^ "Texas Advanced Computing Center - Home". Tacc.utexas.edu. Retrieved 2014-02-23.
  4. ^ Donald B Johnston (2010-10-01). "Lawrence Livermore's Vulcan brings 5 petaflops computing power to collaborations with industry and academia to advance science and technology". Llnl.gov. Retrieved 2014-02-23.
  5. ^ SLURM Platforms