Slurm Workload Manager

From Wikipedia, the free encyclopedia

Revision as of 22:27, 30 January 2013

Simple Linux Utility for Resource Management (SLURM) is an open-source job scheduler used by many of the world's supercomputers and computer clusters. It provides three key functions. First, it allocates exclusive or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (typically a parallel job, such as an MPI application) on the set of allocated nodes. Finally, it arbitrates contention for resources by managing a queue of pending jobs.
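In practice, users exercise these three functions through a batch script submitted to the scheduler. The following is a minimal, hypothetical job script; the job name, node counts, time limit, and program name are illustrative assumptions, not taken from the article:

```shell
#!/bin/bash
# Hypothetical SLURM batch script -- all resource requests are illustrative.
#SBATCH --job-name=mpi_demo      # name shown in the queue
#SBATCH --nodes=4                # request an allocation of 4 compute nodes
#SBATCH --ntasks-per-node=16     # 16 MPI ranks per node
#SBATCH --time=00:30:00          # wall-clock limit

# srun launches and monitors the tasks on the allocated nodes
srun ./my_mpi_program
```

Submitted with `sbatch job.sh`, the job waits in the pending queue (the third function), eventually receives its node allocation (the first), and `srun` then starts and monitors the tasks on those nodes (the second). Running this requires a Slurm cluster, so it is shown as a fragment only.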

SLURM is the batch system on many of the TOP500 supercomputers, including IBM Sequoia, the second-fastest computer in the world, a 20-petaflop IBM BlueGene/Q with 1.6 million cores. Using emulation, SLURM has scheduled machines more than an order of magnitude larger than this.

SLURM uses a best-fit algorithm based on Hilbert curve scheduling or fat tree network topology to optimize locality of task assignments on parallel computers.[1]

History

SLURM began development as a collaborative effort primarily by Lawrence Livermore National Laboratory, SchedMD, Linux NetworX, Hewlett-Packard, and Groupe Bull as a Free Software resource manager. It has since evolved into a sophisticated batch scheduler capable of satisfying the requirements of many large computer centers. SLURM is currently used on many of the largest computers in the world.

Structure

SLURM's design is very modular with dozens of optional plugins. In its simplest configuration, it can be installed and configured in a couple of minutes. More sophisticated configurations provide database integration for accounting, management of resource limits and workload prioritization. SLURM also works with several meta-schedulers such as Moab Cluster Suite, Maui Cluster Scheduler, and Platform LSF.
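As a sketch of what such a configuration looks like, the excerpt below shows how plugins are selected in slurm.conf; the cluster name and the particular plugin choices are illustrative assumptions, not a recommended setup:

```
# Hypothetical slurm.conf excerpt -- values are illustrative assumptions.
ClusterName=demo
SchedulerType=sched/backfill                      # backfill scheduling plugin
SelectType=select/cons_res                        # consumable-resource node selection
PriorityType=priority/multifactor                 # fair-share / multifactor prioritization
AccountingStorageType=accounting_storage/slurmdbd # database-backed accounting
```

Swapping any of these values for a different plugin changes the corresponding subsystem without modifying the rest of the configuration, which is the sense in which the design is modular.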

Notable features

  • No single point of failure, backup daemons, fault-tolerant job options
  • Highly scalable (schedules up to 100,000 independent jobs on the 100,000 sockets of IBM Sequoia)
  • High performance (up to 1000 job submissions per second and 600 job executions per second)
  • Free Software (GNU General Public License)
  • Highly configurable with about 100 plugins
  • Fair-share scheduling with hierarchical bank accounts
  • Preemptive and gang scheduling (time-slicing of parallel jobs)
  • Integrated with database for accounting and configuration
  • Resource allocations optimized for network topology and on-node topology (sockets, cores and hyperthreads)
  • Advanced reservation
  • Idle nodes can be powered down
  • Different operating systems can be booted for each job
  • Scheduling for generic resources (e.g. Graphics processing unit)
  • Real-time accounting down to the task level (identify specific tasks with high CPU or memory usage)
  • Accounting for power usage by job
  • Support of IBM Parallel Environment (PE/POE)
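Several of the features above are visible directly on the command line. For example, generic-resource scheduling and per-job accounting can be exercised as follows; the GPU count, script name, and job ID are illustrative assumptions:

```shell
# Request two GPUs on one node via the generic-resource (GRES) mechanism
sbatch --gres=gpu:2 --nodes=1 job.sh

# Inspect accounting data for a finished job (requires the accounting database)
sacct -j 12345 --format=JobID,MaxRSS,Elapsed
```

Both commands require a running Slurm cluster, so they are shown as a fragment only.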

Coming in SLURM version 2.6 (2Q 2013)

  • Support for job arrays
  • Higher scalability
  • Energy efficient scheduling (including job specification of CPU frequency)
  • Integration with Apache Hadoop + Open MPI based job launch
  • Integration with FlexNet Publisher (FlexLM License Manager)
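Job arrays, the first feature listed, let one submission describe many similar jobs. A hypothetical example (the index range and file names are assumptions):

```shell
# Submit 32 array tasks, indices 0..31, from a single script
sbatch --array=0-31 job.sh

# Inside job.sh, each task selects its input by its array index:
#   ./process input_${SLURM_ARRAY_TASK_ID}.dat
```

Each array task receives its own SLURM_ARRAY_TASK_ID, so one script can fan out over many inputs without 32 separate submissions. As with the other examples, this requires a Slurm cluster and is shown as a fragment only.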

Supported platforms

While SLURM was originally written for Linux, the latest version supports many other operating systems.[2]

SLURM also supports several unique computer architectures.

License

SLURM is available under the GNU General Public License V2.

Commercial support

In 2010, the developers of SLURM founded SchedMD, which provides development, level 3 commercial support, and training services. Commercial support is also available from Bright Computing.

References

  1. ^ Frachtenberg, Eitan; Schwiegelshohn, Uwe (2010). Job Scheduling Strategies for Parallel Processing. ISBN 3-642-04632-0. pp. 138–144.
  2. ^ Platforms
  • Balle, Susanne M.; Palermo, Daniel J. (2008). "Enhancing an Open Source Resource Manager with Multi-core/Multi-threaded Support". Job Scheduling Strategies for Parallel Processing. Lecture Notes in Computer Science. 4942: 37. doi:10.1007/978-3-540-78699-3_3. ISBN 978-3-540-78698-6.
  • Jette, M.; Grondona, M. (June 2003). "SLURM: Simple Linux Utility for Resource Management". Proceedings of ClusterWorld Conference and Expo. San Jose, California.
  • Layton, Jeffrey B. (5 February 2009). "Caos NSA and Perceus: All-in-one Cluster Software Stack". Linux Magazine.
  • Yoo, Andy B.; Jette, Morris A.; Grondona, Mark (2003). "SLURM: Simple Linux Utility for Resource Management". Job Scheduling Strategies for Parallel Processing. Lecture Notes in Computer Science. 2862: 44–60. doi:10.1007/10968987_3. ISBN 978-3-540-20405-3.

External links