Slurm Workload Manager

From Wikipedia, the free encyclopedia
Free and open-source job scheduler for Linux and similar operating systems
Slurm
Developer: SchedMD
Stable release: 25.05.2[1] / 7 August 2025
Written in: C
Operating system: Linux
Type: Job scheduler for clusters and supercomputers
License: GNU General Public License
Website: slurm.schedmd.com

The Slurm Workload Manager, formerly known as the Simple Linux Utility for Resource Management (SLURM), or simply Slurm, is a free and open-source job scheduler for Linux and Unix-like kernels, used by many of the world's supercomputers and computer clusters.

It provides three key functions:

  • allocating exclusive and/or non-exclusive access to resources (computer nodes) to users for some duration of time so they can perform work,
  • providing a framework for starting, executing, and monitoring work, typically a parallel job such as Message Passing Interface (MPI) on a set of allocated nodes, and
  • arbitrating contention for resources by managing a queue of pending jobs.

Slurm is the workload manager on about 60% of theTOP500 supercomputers.[2]

Slurm uses a best-fit algorithm based on Hilbert curve scheduling or fat tree network topology to optimize the locality of task assignments on parallel computers.[3]

History


Slurm began development as a collaborative effort primarily by Lawrence Livermore National Laboratory, SchedMD,[4] Linux NetworX, Hewlett-Packard, and Groupe Bull as a free-software resource manager. The first release happened in 2002.[5] It was inspired by the closed-source Quadrics RMS and shares a similar syntax. The name is a reference to the soda in Futurama.[6] Over 100 people around the world have contributed to the project. It has since evolved into a sophisticated batch scheduler capable of satisfying the requirements of many large computer centers.

As of November 2021, the TOP500 list of the most powerful computers in the world indicates that Slurm is the workload manager on more than half of the top ten systems.

Structure


Slurm's design is highly modular, with about 100 optional plugins. In its simplest configuration, it can be installed and configured in a couple of minutes. More sophisticated configurations provide database integration for accounting, management of resource limits, and workload prioritization.
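A simple configuration of this kind is expressed in the slurm.conf file read by the daemons. The sketch below is illustrative only: the cluster name, hostnames, node counts, and hardware values are placeholders, and a real deployment sets many more options.

```
# slurm.conf (minimal sketch; all names and sizes are placeholders)
ClusterName=mycluster
SlurmctldHost=head01                 # node running the slurmctld control daemon

# Compute nodes and their hardware as seen by the scheduler
NodeName=node[01-04] CPUs=8 RealMemory=16000 State=UNKNOWN

# A single default partition (job queue) spanning all nodes
PartitionName=debug Nodes=node[01-04] Default=YES MaxTime=INFINITE State=UP
```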

Features


Slurm features include:[7]

  • No single point of failure, backup daemons, fault-tolerant job options
  • Highly scalable (schedules up to 100,000 independent jobs on the 100,000 sockets of IBM Sequoia)
  • High performance (up to 1000 job submissions per second and 600 job executions per second)
  • Free and open-source software (GNU General Public License)
  • Highly configurable with about 100 plugins
  • Fair-share scheduling with hierarchical bank accounts
  • Preemptive and gang scheduling (time-slicing of parallel jobs)
  • Integrated with database for accounting and configuration
  • Resource allocations optimized for network topology and on-node topology (sockets, cores and hyperthreads)
  • Advanced reservation
  • Idle nodes can be powered down
  • Different operating systems can be booted for each job
  • Scheduling for generic resources (e.g. graphics processing units)
  • Real-time accounting down to the task level (identify specific tasks with high CPU or memory usage)
  • Resource limits by user or bank account
  • Accounting for power consumption by job
  • Support of IBM Parallel Environment (PE/POE)
  • Support for job arrays
  • Job profiling (periodic sampling of each task's CPU use, memory use, power consumption, network and file system use)
  • Sophisticated multifactor job prioritization algorithms
  • Support for MapReduce+
  • Support for burst buffers that accelerate scientific data movement
  • Support for heterogeneous generic resources
  • Automatic job requeue policy based on exit value

Supported platforms


Recent Slurm releases run only on Linux. Older versions had been ported to a few other POSIX-based operating systems, including the BSDs (FreeBSD, NetBSD and OpenBSD),[8] but this is no longer feasible, as Slurm now requires cgroups for core operations. Clusters running operating systems other than Linux need to use a different batch system, such as LPJS.[9] Slurm also supports several unique computer architectures.

License


Slurm is available under the GNU General Public License v2.

Commercial support


In 2010, the developers of Slurm founded SchedMD, which maintains the canonical source and provides development, level-3 commercial support, and training services. Commercial support is also available from Bull, Cray, and Science + Computing (a subsidiary of Atos).

Usage

[Figure: Slurm distinguishes several stages for a job.]

The Slurm system has three main parts:

  • slurmctld, a central control daemon running on a single control node (optionally with failover backups);
  • many compute nodes, each running one or more slurmd daemons;
  • clients that connect to the control node, often via ssh.

The clients issue commands to the control daemon, which accepts them and distributes the workload to the compute-node daemons.

For clients, the main commands are srun (run an interactive job), sbatch (submit a batch job), squeue (print the job queue), and scancel (remove a job from the queue).
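On a cluster where these commands are available, a typical session might look like the following sketch. The script name, username variable, and job ID are placeholders; the flags shown (--pty for srun, -u for squeue) are common but only a small subset of each command's options.

```
# Submit a batch job script; Slurm prints the assigned job ID.
sbatch job.sh

# Run a shell interactively on an allocated compute node.
srun --pty bash

# Show the queue, filtered to the current user's jobs.
squeue -u $USER

# Remove a pending or running job by its ID (12345 is a placeholder).
scancel 12345
```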

Jobs can be run in batch mode or interactive mode. In interactive mode, a compute node starts a shell, connects the client to it, and runs the job there; the user can then observe and interact with the job while it runs. Interactive jobs are typically used for initial debugging; once debugged, the same job can be submitted with sbatch. In batch mode, a job's stdout and stderr output is typically redirected to text files for later inspection.
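A batch job is usually described by a shell script whose #SBATCH comment lines carry the job's options, including where stdout and stderr should be written. A minimal sketch, in which the job name, resource values, and program name are all placeholders:

```
#!/bin/bash
#SBATCH --job-name=demo          # name shown in the squeue listing
#SBATCH --output=demo-%j.out     # stdout file; %j expands to the job ID
#SBATCH --error=demo-%j.err      # stderr file
#SBATCH --ntasks=4               # number of tasks (e.g. MPI ranks)
#SBATCH --time=00:10:00          # wall-clock time limit

# Launch the tasks on the allocated nodes.
srun ./my_program
```

Submitting the script with sbatch queues it; the scheduler later allocates nodes and runs it without further interaction from the user.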

References

  1. ^ "Release slurm-25-05-2-1". 7 August 2025. Retrieved 11 August 2025.
  2. ^ "Running a Job on HPC using Slurm". hpcc.usc.edu. Center for High-Performance Computing, University of Southern California. Archived from the original on 2019-03-06. Retrieved 2019-03-05.
  3. ^ Pascual, Jose Antonio; Navaridas, Javier; Miguel-Alonso, Jose (2009). "Effects of Topology-Aware Allocation Policies on Scheduling Performance". Job Scheduling Strategies for Parallel Processing. Lecture Notes in Computer Science. Vol. 5798. pp. 138–144. doi:10.1007/978-3-642-04633-9_8. ISBN 978-3-642-04632-2.
  4. ^ "Slurm Commercial Support, Development, and Installation". SchedMD. Retrieved 2014-02-23.
  5. ^ "Slurm History". SchedMD. Archived from the original on 2025-07-18. Retrieved 2025-11-10.
  6. ^ "SLURM: Simple Linux Utility for Resource Management" (PDF). 23 June 2003. Retrieved 11 January 2016.
  7. ^ "Slurm Workload Manager - Overview". slurm.schedmd.com. Retrieved 2025-10-10.
  8. ^ "Slurm Platforms".
  9. ^ Bacon, Jason (2025-08-26). outpaddling/LPJS. Retrieved 2025-10-10.
