.TH "slurm.conf" "5" "Slurm Configuration File" "June 2016" "Slurm Configuration File"
.SH "NAME"
slurm.conf \- Slurm configuration file
.SH "DESCRIPTION"
\fBslurm.conf\fP is an ASCII file which describes general Slurm
configuration information, the nodes to be managed, information about
how those nodes are grouped into partitions, and various scheduling
parameters associated with those partitions. This file should be
consistent across all nodes in the cluster.
.LP
The file location can be modified at system build time using the
DEFAULT_SLURM_CONF parameter or at execution time by setting the SLURM_CONF
environment variable. The Slurm daemons also allow you to override
both the built\-in and environment\-provided location using the "\-f"
option on the command line.
.LP
The contents of the file are case insensitive except for the names of nodes
and partitions. Any text following a "#" in the configuration file is treated
as a comment through the end of that line.
Changes to the configuration file take effect upon restart of
Slurm daemons, daemon receipt of the SIGHUP signal, or execution
of the command "scontrol reconfigure" unless otherwise noted.
.LP
If a line begins with the word "Include" followed by whitespace
and then a file name, that file will be included inline with the current
configuration file. For large or complex systems, multiple configuration files
may prove easier to manage and enable reuse of some files (See INCLUDE
MODIFIERS for more details).
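For example, a hypothetical line pulling node definitions out of a separate
file (the path is illustrative):
.nf
Include /etc/slurm/nodes.conf
.fi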
.LP
Note on file permissions:
.LP
The \fIslurm.conf\fR file must be readable by all users of Slurm, since it
is used by many of the Slurm commands. Other files that are defined
in the \fIslurm.conf\fR file, such as log files and job accounting files,
may need to be created/owned by the user "SlurmUser" to be successfully
accessed. Use the "chown" and "chmod" commands to set the ownership
and permissions appropriately.
See the section \fBFILE AND DIRECTORY PERMISSIONS\fR for information
about the various files and directories used by Slurm.
.SH "PARAMETERS"
.LP
The overall configuration parameters available include:
.TP
\fBAccountingStorageBackupHost\fR
The name of the backup machine hosting the accounting storage database.
If used with the accounting_storage/slurmdbd plugin, this is where the backup
slurmdbd would be running.
Only used for database type storage plugins, ignored otherwise.
.TP
\fBAccountingStorageEnforce\fR
This controls what level of association\-based enforcement to impose
on job submissions. Valid options are any combination of
\fIassociations\fR, \fIlimits\fR, \fInojobs\fR, \fInosteps\fR, \fIqos\fR, \fIsafe\fR, and \fIwckeys\fR, or
\fIall\fR for all things (except nojobs and nosteps, which must be requested as well).
If limits, qos, or wckeys are set, associations will automatically be set.
If wckeys is set, TrackWCKey will automatically be set.
If safe is set, limits and associations will automatically be set.
If nojobs is set, nosteps will automatically be set.
By enforcing associations no new job is allowed to run unless a corresponding
association exists in the system. If limits are enforced, users can be
limited by association to whatever job size or run time limits are defined.
If nojobs is set, Slurm will not account for any jobs or steps on the system;
likewise, if nosteps is set, Slurm will not account for any steps that have
run, although limits will still be enforced.
If safe is enforced, a job will only be launched against an association or qos
that has a GrpCPUMins limit set if the job will be able to run to completion.
Without this option set, jobs will be launched as long as their usage
hasn't reached the cpu-minutes limit, which can lead to jobs being
launched but then killed when the limit is reached.
With qos and/or wckeys enforced jobs will not be scheduled unless a valid qos
and/or workload characterization key is specified.
When \fBAccountingStorageEnforce\fR is changed, a restart of the slurmctld
daemon is required (not just a "scontrol reconfig").
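For example, a minimal sketch enforcing limits and QOS (per the rules above,
\fIassociations\fR will automatically be set as well):
.nf
AccountingStorageEnforce=limits,qos
.fi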
.TP
\fBAccountingStorageHost\fR
The name of the machine hosting the accounting storage database.
Only used for database type storage plugins, ignored otherwise.
Also see \fBDefaultStorageHost\fR.
.TP
\fBAccountingStorageLoc\fR
The fully qualified file name where accounting records are written
when the \fBAccountingStorageType\fR is "accounting_storage/filetxt"
or else the name of the database where accounting records are stored when the
\fBAccountingStorageType\fR is a database.
Also see \fBDefaultStorageLoc\fR.
.TP
\fBAccountingStoragePass\fR
The password used to gain access to the database to store the
accounting data. Only used for database type storage plugins, ignored
otherwise. In the case of Slurm DBD (Database Daemon) with MUNGE
authentication this can be configured to use a MUNGE daemon
specifically configured to provide authentication between clusters
while the default MUNGE daemon provides authentication within a
cluster. In that case, \fBAccountingStoragePass\fR should specify the
named port to be used for communications with the alternate MUNGE
daemon (e.g. "/var/run/munge/global.socket.2"). The default value is
NULL. Also see \fBDefaultStoragePass\fR.
.TP
\fBAccountingStoragePort\fR
The listening port of the accounting storage database server.
Only used for database type storage plugins, ignored otherwise.
Also see \fBDefaultStoragePort\fR.
.TP
\fBAccountingStorageTRES\fR
Comma separated list of resources you wish to track on the cluster.
These are the resources requested by the sbatch/srun job when it
is submitted. Currently this consists of any GRES, BB (burst buffer) or
license along with CPU, Memory, Node, and Energy.
By default CPU, Energy, Memory, and Node are tracked.
For example:
.br
AccountingStorageTRES=gres/craynetwork,license/iop1
.br
will track cpu, energy, memory, and nodes, along with a gres called craynetwork
and a license called iop1. Whenever these resources are used on the
cluster they are recorded. The TRES are automatically set up in the database
on the start of the slurmctld.
.TP
\fBAccountingStorageType\fR
The accounting storage mechanism type. Acceptable values at
present include "accounting_storage/filetxt",
"accounting_storage/mysql", "accounting_storage/none"
and "accounting_storage/slurmdbd". The
"accounting_storage/filetxt" value indicates that accounting records
will be written to the file specified by the
\fBAccountingStorageLoc\fR parameter. The "accounting_storage/mysql"
value indicates that accounting records will be written to a MySQL or
MariaDB database specified by the \fBAccountingStorageLoc\fR parameter.
The "accounting_storage/slurmdbd" value indicates that accounting records
will be written to the Slurm DBD, which manages an underlying MySQL
database. See "man slurmdbd" for more information. The
default value is "accounting_storage/none" and indicates that account
records are not maintained.
Note: The filetxt plugin records only a limited subset of accounting
information and will prevent some sacct options from working properly.
Also see \fBDefaultStorageType\fR.
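For example, a minimal sketch of SlurmDBD based accounting, assuming a
slurmdbd server on an illustrative host "dbhost" at its default port:
.nf
AccountingStorageType=accounting_storage/slurmdbd
AccountingStorageHost=dbhost
AccountingStoragePort=6819
.fi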
.TP
\fBAccountingStorageUser\fR
The user account for accessing the accounting storage database.
Only used for database type storage plugins, ignored otherwise.
Also see \fBDefaultStorageUser\fR.
.TP
\fBAccountingStoreJobComment\fR
If set to "YES" then include the job's comment field in the job
complete message sent to the Accounting Storage database. The default
is "YES".
.TP
\fBAcctGatherNodeFreq\fR
The AcctGather plugins' sampling interval for node accounting.
For AcctGather plugin values of none, this parameter is ignored.
For all other values this parameter is the number
of seconds between node accounting samples. For the
acct_gather_energy/rapl plugin, set a value less
than 300 because the counters may overflow beyond this rate.
The default value is zero, which disables accounting sampling
for nodes. Note: The accounting sampling interval for jobs is
determined by the value of \fBJobAcctGatherFrequency\fR.
.TP
\fBAcctGatherEnergyType\fR
Identifies the plugin to be used for energy consumption accounting.
The jobacct_gather plugin and slurmd daemon call this plugin to collect
energy consumption data for jobs and nodes. Energy consumption data is
collected at the node level, so the measurements will reflect a job's real
consumption only when the job has an exclusive allocation of its nodes.
When nodes are shared between jobs, the energy consumed per job as reported
(through sstat or sacct) will not reflect the energy actually consumed by
each job.
Configurable values at present are:
.RS
.TP 20
\fBacct_gather_energy/none\fR
No energy consumption data is collected.
.TP
\fBacct_gather_energy/ipmi\fR
Energy consumption data is collected from the Baseboard Management Controller
(BMC) using the Intelligent Platform Management Interface (IPMI).
.TP
\fBacct_gather_energy/rapl\fR
Energy consumption data is collected from hardware sensors using the Running
Average Power Limit (RAPL) mechanism. Note that enabling RAPL may require the
execution of the command "sudo modprobe msr".
.RE
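For example, a sketch pairing RAPL energy collection with a 30 second node
sampling interval (see \fBAcctGatherNodeFreq\fR above):
.nf
AcctGatherEnergyType=acct_gather_energy/rapl
AcctGatherNodeFreq=30
.fi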
.TP
\fBAcctGatherInfinibandType\fR
Identifies the plugin to be used for InfiniBand network traffic accounting.
The plugin is activated only when hdf5 profiling is enabled and the user
requests network data collection for jobs through \-\-profile=Network
(or =All). Network traffic data is collected at the node level, so the
collected values will reflect a job's real traffic only when the job has an
exclusive allocation of the node. All network traffic data is logged to hdf5
files per job on each node. Nothing is stored in the Slurm database.
Configurable values at present are:
.RS
.TP 20
\fBacct_gather_infiniband/none\fR
No infiniband network data are collected.
.TP
\fBacct_gather_infiniband/ofed\fR
Infiniband network traffic data are collected from the hardware monitoring
counters of Infiniband devices through the OFED library.
.RE
.TP
\fBAcctGatherFilesystemType\fR
Identifies the plugin to be used for filesystem traffic accounting.
The plugin is activated only when hdf5 profiling is enabled and the user
requests filesystem data collection for jobs through \-\-profile=Lustre
(or =All). Filesystem traffic data is collected at the node level, so the
collected values will reflect a job's real traffic only when the job has an
exclusive allocation of the node. All filesystem traffic data is logged to
hdf5 files per job on each node. Nothing is stored in the Slurm database.
Configurable values at present are:
.RS
.TP 20
\fBacct_gather_filesystem/none\fR
No filesystem data are collected.
.TP
\fBacct_gather_filesystem/lustre\fR
Lustre filesystem traffic data are collected from the counters found in
/proc/fs/lustre/.
.RE
.TP
\fBAcctGatherProfileType\fR
Identifies the plugin to be used for detailed job profiling.
The jobacct_gather plugin and slurmd daemon call this plugin to collect
detailed data such as I/O counts, memory usage, or energy consumption for jobs
and nodes. There are interfaces in this plugin to collect data at step start
and completion, task start and completion, and at the account gather
frequency. The data collected at the node level can be attributed to a job
only in the case of an exclusive job allocation.
Configurable values at present are:
.RS
.TP 20
\fBacct_gather_profile/none\fR
No profile data is collected.
.TP
\fBacct_gather_profile/hdf5\fR
This enables the HDF5 plugin. The directory where the profile files
are stored and which values are collected are configured in the
acct_gather.conf file.
.RE
.TP
\fBAllowSpecResourcesUsage\fR
If set to 1, Slurm allows individual jobs to override a node's configured
CoreSpecCount value. For a job to take advantage of this feature,
a command line option of \-\-core\-spec must be specified. The default
value for this option is 1 for Cray systems and 0 for other system types.
.TP
\fBAuthInfo\fR
Additional information to be used for authentication of communications
between the Slurm daemons (slurmctld and slurmd) and the Slurm
clients. The interpretation of this option is specific to the
configured \fBAuthType\fR.
Multiple options may be specified in a comma delimited list.
If not specified, the default authentication information will be used.
.RS
.TP 14
\fBcred_expire\fR
Default job step credential lifetime, in seconds (e.g. "cred_expire=1200").
It must be long enough to load the user environment, run the prolog,
deal with the slurmd getting paged out of memory, etc.
This also controls how long a requeued job must wait before starting again.
The default value is 120 seconds.
.TP
\fBsocket\fR
Path name to a MUNGE daemon socket to use
(e.g. "socket=/var/run/munge/munge.socket.2").
The default value is "/var/run/munge/munge.socket.2".
Used by \fIauth/munge\fR and \fIcrypto/munge\fR.
.TP
\fBttl\fR
Credential lifetime, in seconds (e.g. "ttl=300").
The default value is dependent upon the Munge installation, but is typically
300 seconds.
.RE
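For example, a sketch combining two of the options above (the values are
illustrative):
.nf
AuthInfo=cred_expire=300,socket=/var/run/munge/munge.socket.2
.fi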
.TP
\fBAuthType\fR
The authentication method for communications between Slurm
components.
Acceptable values at present include "auth/none" and "auth/munge".
The default value is "auth/munge".
"auth/none" includes the UID in each communication, but it is not verified.
This may be fine for testing purposes, but
\fBdo not use "auth/none" if you desire any security\fR.
"auth/munge" indicates that LLNL's MUNGE is to be used
(this is the best supported authentication mechanism for Slurm,
see "http://munge.googlecode.com/" for more information).
All Slurm daemons and commands must be terminated prior to changing
the value of \fBAuthType\fR and later restarted (Slurm jobs can be
preserved).
.TP
\fBBackupAddr\fR
The name by which \fBBackupController\fR should be referred to in
establishing a communications path. This name will
be used as an argument to the gethostbyname() function for
identification. For example, "elx0000" might be used to designate
the Ethernet address for node "lx0000".
By default the \fBBackupAddr\fR will be identical in value to
\fBBackupController\fR.
.TP
\fBBackupController\fR
The short, or long, name of the machine where Slurm control functions are to be
executed in the event that \fBControlMachine\fR fails (i.e. the name returned by
the command "hostname \-s"). This node may also be used as a compute server if
so desired. It will come into service as a controller only upon the failure of
ControlMachine and will revert to a "standby" mode when the ControlMachine
becomes available once again.
The backup controller recovers state information from the
\fBStateSaveLocation\fR directory, which must be readable and writable from both
the primary and backup controllers.
While not essential, it is recommended that you specify a backup controller.
See the \fBRELOCATING CONTROLLERS\fR section if you change this.
.TP
\fBBatchStartTimeout\fR
The maximum time (in seconds) that a batch job is permitted to take to
launch before being considered missing, at which point the allocation is
released. The default value is 10 (seconds). Larger values may be
required if more time is required to execute the \fBProlog\fR, load
user environment variables (for Moab spawned jobs), or if the slurmd
daemon gets paged from memory.
.br
.br
\fBNote\fR: The test for a job being successfully launched is only performed when
the Slurm daemon on the compute node registers state with the slurmctld daemon
on the head node, which happens fairly rarely.
Therefore a job will not necessarily be terminated if its start time exceeds
\fBBatchStartTimeout\fR.
This configuration parameter is also applied to task launch to avoid aborting
\fBsrun\fR commands due to long running \fBProlog\fR scripts.
.TP
\fBBurstBufferType\fR
The plugin used to manage burst buffers.
Acceptable values at present include "burst_buffer/none".
More information later...
.TP
\fBCheckpointType\fR
The system\-initiated checkpoint method to be used for user jobs.
The slurmctld daemon must be restarted for a change in \fBCheckpointType\fR
to take effect. Supported values presently include:
.RS
.TP 18
\fBcheckpoint/aix\fR
for IBM AIX systems only
.TP
\fBcheckpoint/blcr\fR
Berkeley Lab Checkpoint Restart (BLCR).
NOTE: If a file is found at sbin/scch (relative to the Slurm installation
location), it will be executed upon completion of the checkpoint. This can
be a script used for managing the checkpoint files.
NOTE: Slurm's BLCR logic only supports batch jobs.
.TP
\fBcheckpoint/none\fR
no checkpoint support (default)
.TP
\fBcheckpoint/ompi\fR
OpenMPI (version 1.3 or higher)
.TP
\fBcheckpoint/poe\fR
for use with IBM POE (Parallel Operating Environment) only
.RE
.TP
\fBChosLoc\fR
If configured, then any processes invoked on the user's behalf (namely the
SPANK prolog/epilog scripts and the slurmstepd processes, which in turn spawn
the user batch script and applications) are not directly executed by the slurmd
daemon; instead the \fBChosLoc\fR program is executed.
Both are spawned with the same user ID as the configured SlurmdUser
(typically user root).
That program's arguments are the program and arguments that would otherwise be
invoked directly by the slurmd daemon.
The intent of this feature is to be able to run a user application in some
sort of container.
This option specifies the fully qualified pathname of the chos command
(see https://github.com/scanon/chos for details).
.TP
\fBClusterName\fR
The name by which this Slurm managed cluster is known in the
accounting database. This is needed to distinguish accounting records
when multiple clusters report to the same database. Because of limitations
in some databases, any upper case letters in the name will be silently mapped
to lower case. In order to avoid confusion, it is recommended that the name
be lower case.
.TP
\fBCompleteWait\fR
The time, in seconds, given for a job to remain in COMPLETING state
before any additional jobs are scheduled.
If set to zero, pending jobs will be started as soon as possible.
Since a COMPLETING job's resources are released for use by other
jobs as soon as the \fBEpilog\fR completes on each individual node,
this can result in very fragmented resource allocations.
To provide jobs with the minimum response time, a value of zero is
recommended (no waiting).
To minimize fragmentation of resources, a value equal to \fBKillWait\fR
plus two is recommended.
In that case, setting \fBKillWait\fR to a small value may be beneficial.
The default value of \fBCompleteWait\fR is zero seconds.
The value may not exceed 65533.
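For example, a sketch following the recommendation above, with \fBKillWait\fR
lowered to ten seconds:
.nf
KillWait=10
CompleteWait=12
.fi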
.TP
\fBControlAddr\fR
The name by which \fBControlMachine\fR should be referred to in
establishing a communications path. This name will
be used as an argument to the gethostbyname() function for
identification. For example, "elx0000" might be used to designate
the Ethernet address for node "lx0000".
By default the \fBControlAddr\fR will be identical in value to
\fBControlMachine\fR.
.TP
\fBControlMachine\fR
The short, or long, hostname of the machine where Slurm control functions are
executed (i.e. the name returned by the command "hostname \-s").
This value must be specified.
In order to support some high availability architectures, multiple
hostnames may be listed with comma separators and one \fBControlAddr\fR
must be specified. The high availability system must ensure that the
slurmctld daemon is running on only one of these hosts at a time.
See the \fBRELOCATING CONTROLLERS\fR section if you change this.
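For example, a sketch of a primary/backup controller pair (the host names and
the shared state directory are illustrative):
.nf
# host names and path are illustrative
ControlMachine=ctld1
BackupController=ctld2
StateSaveLocation=/shared/slurm/state
.fi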
.TP
\fBCoreSpecPlugin\fR
Identifies the plugins to be used for enforcement of core specialization.
The slurmd daemon must be restarted for a change in CoreSpecPlugin
to take effect.
Acceptable values at present include:
.RS
.TP 20
\fBcore_spec/cray\fR
used only for Cray systems
.TP
\fBcore_spec/none\fR
used for all other system types
.RE
.TP
\fBCpuFreqDef\fR
Default CPU frequency governor to use when running a job step if it
has not been explicitly set with the \-\-cpu\-freq option.
Acceptable values at present include:
.RS
.TP 14
\fBConservative\fR
attempts to use the Conservative CPU governor
.TP
\fBOnDemand\fR
attempts to use the OnDemand CPU governor
.TP
\fBPerformance\fR
attempts to use the Performance CPU governor
.TP
\fBPowerSave\fR
attempts to use the PowerSave CPU governor
.RE
There is no default value. If unset and the \-\-cpu\-freq option has not been
specified, no attempt is made to set the governor.
.TP
\fBCpuFreqGovernors\fR
List of CPU frequency governors allowed to be set with the salloc, sbatch, or
srun option \-\-cpu\-freq.
Acceptable values at present include:
.RS
.TP 14
\fBConservative\fR
attempts to use the Conservative CPU governor
.TP
\fBOnDemand\fR
attempts to use the OnDemand CPU governor (the default value)
.TP
\fBPerformance\fR
attempts to use the Performance CPU governor (the default value)
.TP
\fBPowerSave\fR
attempts to use the PowerSave CPU governor
.TP
\fBUserSpace\fR
attempts to use the UserSpace CPU governor
.RE
The default is OnDemand, Performance.
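For example, a sketch defaulting to OnDemand while also permitting the
UserSpace governor:
.nf
CpuFreqDef=OnDemand
CpuFreqGovernors=OnDemand,Performance,UserSpace
.fi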
.TP
\fBCryptoType\fR
The cryptographic signature tool to be used in the creation of
job step credentials.
The slurmctld daemon must be restarted for a change in \fBCryptoType\fR
to take effect.
Acceptable values at present include "crypto/munge" and "crypto/openssl".
The default value is "crypto/munge".
.TP
\fBDebugFlags\fR
Defines specific subsystems which should provide more detailed event logging.
Multiple subsystems can be specified with comma separators.
Most DebugFlags will result in verbose logging for the identified subsystems
and could impact performance. The DB_* flags below are only useful when
writing directly to the database. If using the DBD, put these debug flags in
slurmdbd.conf.
Valid subsystems available today (with more to come) include:
.RS
.TP 17
\fBBackfill\fR
Backfill scheduler details
.TP
\fBBackfillMap\fR
Backfill scheduler to log a very verbose map of reserved resources through
time. Combine with \fBBackfill\fR for a verbose and complete view of the
backfill scheduler's work.
.TP
\fBBGBlockAlgo\fR
BlueGene block selection details
.TP
\fBBGBlockAlgoDeep\fR
BlueGene block selection, more details
.TP
\fBBGBlockPick\fR
BlueGene block selection for jobs
.TP
\fBBGBlockWires\fR
BlueGene block wiring (switch state details)
.TP
\fBBurstBuffer\fR
Burst Buffer plugin
.TP
\fBCPU_Bind\fR
CPU binding details for jobs and steps
.TP
\fBCpuFrequency\fR
Cpu frequency details for jobs and steps using the \-\-cpu\-freq option.
.TP
\fBDB_ASSOC\fR
SQL statements/queries when dealing with associations in the database.
.TP
\fBDB_EVENT\fR
SQL statements/queries when dealing with (node) events in the database.
.TP
\fBDB_JOB\fR
SQL statements/queries when dealing with jobs in the database.
.TP
\fBDB_QOS\fR
SQL statements/queries when dealing with QOS in the database.
.TP
\fBDB_QUERY\fR
SQL statements/queries when dealing with transactions and such in the database.
.TP
\fBDB_RESERVATION\fR
SQL statements/queries when dealing with reservations in the database.
.TP
\fBDB_RESOURCE\fR
SQL statements/queries when dealing with resources like licenses in the
database.
.TP
\fBDB_STEP\fR
SQL statements/queries when dealing with steps in the database.
.TP
\fBDB_USAGE\fR
SQL statements/queries when dealing with usage queries and inserts
in the database.
.TP
\fBDB_WCKEY\fR
SQL statements/queries when dealing with wckeys in the database.
.TP
\fBElasticsearch\fR
Elasticsearch debug info
.TP
\fBEnergy\fR
AcctGatherEnergy debug info
.TP
\fBExtSensors\fR
External Sensors debug info
.TP
\fBFrontEnd\fR
Front end node details
.TP
\fBGres\fR
Generic resource details
.TP
\fBGang\fR
Gang scheduling details
.TP
\fBJobContainer\fR
Job container plugin details
.TP
\fBLicense\fR
License management details
.TP
\fBNodeFeatures\fR
Node Features plugin debug info
.TP
\fBNO_CONF_HASH\fR
Do not log when the slurm.conf files differ between Slurm daemons
.TP
\fBPower\fR
Power management plugin
.TP
\fBPriority\fR
Job prioritization
.TP
\fBProtocol\fR
Communication protocol details
.TP
\fBReservation\fR
Advanced reservations
.TP
\fBSelectType\fR
Resource selection plugin
.TP
\fBSteps\fR
Slurmctld resource allocation for job steps
.TP
\fBSwitch\fR
Switch plugin
.TP
\fBTraceJobs\fR
Trace jobs in slurmctld. It will print detailed job information
including state, job ids and allocated node counts.
.TP
\fBTriggers\fR
Slurmctld triggers
.RE
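For example, a sketch enabling verbose backfill and priority logging:
.nf
DebugFlags=Backfill,BackfillMap,Priority
.fi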
.TP
\fBDefMemPerCPU\fR
Default real memory size available per allocated CPU in megabytes.
Used to avoid over\-subscribing memory and causing paging.
\fBDefMemPerCPU\fR would generally be used if individual processors
are allocated to jobs (\fBSelectType=select/cons_res\fR).
The default value is 0 (unlimited).
Also see \fBDefMemPerNode\fR and \fBMaxMemPerCPU\fR.
\fBDefMemPerCPU\fR and \fBDefMemPerNode\fR are mutually exclusive.
NOTE: Enforcement of memory limits currently requires enabling of
accounting, which samples memory use on a periodic basis (data need
not be stored, just collected).
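For example, a sketch defaulting each allocated CPU to 2 gigabytes when
individual processors are allocated to jobs:
.nf
SelectType=select/cons_res
DefMemPerCPU=2048
.fi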
.TP
\fBDefMemPerNode\fR
Default real memory size available per allocated node in megabytes.
Used to avoid over\-subscribing memory and causing paging.
\fBDefMemPerNode\fR would generally be used if whole nodes
are allocated to jobs (\fBSelectType=select/linear\fR) and
resources are over\-subscribed (\fBOverSubscribe=yes\fR or
\fBOverSubscribe=force\fR).
The default value is 0 (unlimited).
Also see \fBDefMemPerCPU\fR and \fBMaxMemPerNode\fR.
\fBDefMemPerCPU\fR and \fBDefMemPerNode\fR are mutually exclusive.
NOTE: Enforcement of memory limits currently requires enabling of
accounting, which samples memory use on a periodic basis (data need
not be stored, just collected).
.TP
\fBDefaultStorageHost\fR
The default name of the machine hosting the accounting storage and
job completion databases.
Only used for database type storage plugins and when the
\fBAccountingStorageHost\fR and \fBJobCompHost\fR have not been
defined.
.TP
\fBDefaultStorageLoc\fR
The fully qualified file name where accounting records and/or job
completion records are written when the \fBDefaultStorageType\fR is
"filetxt" or the name of the database where accounting records and/or job
completion records are stored when the \fBDefaultStorageType\fR is a
database.
Also see \fBAccountingStorageLoc\fR and \fBJobCompLoc\fR.
.TP
\fBDefaultStoragePass\fR
The password used to gain access to the database to store the
accounting and job completion data.
Only used for database type storage plugins, ignored otherwise.
Also see \fBAccountingStoragePass\fR and \fBJobCompPass\fR.
.TP
\fBDefaultStoragePort\fR
The listening port of the accounting storage and/or job completion
database server.
Only used for database type storage plugins, ignored otherwise.
Also see \fBAccountingStoragePort\fR and \fBJobCompPort\fR.
.TP
\fBDefaultStorageType\fR
The accounting and job completion storage mechanism type. Acceptable
values at present include "filetxt", "mysql" and "none".
The value "filetxt" indicates that records will be written to a file.
The value "mysql" indicates that accounting records will be written to a MySQL
or MariaDB database.
The default value is "none", which means that records are not maintained.
Also see \fBAccountingStorageType\fR and \fBJobCompType\fR.
.TP
\fBDefaultStorageUser\fR
The user account for accessing the accounting storage and/or job
completion database.
Only used for database type storage plugins, ignored otherwise.
Also see \fBAccountingStorageUser\fR and \fBJobCompUser\fR.
.TP
\fBDisableRootJobs\fR
If set to "YES" then user root will be prevented from running any jobs.
The default value is "NO", meaning user root will be able to execute jobs.
\fBDisableRootJobs\fR may also be set by partition.
.TP
\fBEioTimeout\fR
The number of seconds srun waits for slurmstepd to close the TCP/IP
connection used to relay data between the user application and srun
when the user application terminates. The default value is 60 seconds.
May not exceed 65533.
.TP
\fBEnforcePartLimits\fR
If set to "ALL" then jobs which exceed a partition's size and/or
time limits will be rejected at submission time. If a job is submitted to
multiple partitions, the job must satisfy the limits on all the requested
partitions. If set to "NO" then the job will be accepted and remain queued
until the partition limits (time and node limits) are altered.
If set to "ANY" or "YES", a job need only satisfy the limits of one of the
requested partitions to be submitted. The default value is "NO".
NOTE: If set, then a job's QOS cannot be used to exceed partition limits.
NOTE: The partition limits being considered are its configured MaxMemPerCPU,
MaxMemPerNode, MinNodes, MaxNodes, MaxTime, AllocNodes, AllowAccounts,
AllowGroups, AllowQOS, and QOS usage threshold.
.TP
\fBEpilog\fR
Fully qualified pathname of a script to execute as user root on every
node when a user's job completes (e.g. "/usr/local/slurm/epilog"). A
glob pattern (See \fBglob\fR (7)) may also be used to run more than
one epilog script (e.g. "/etc/slurm/epilog.d/*"). The Epilog script
or scripts may be used to purge files, disable user login, etc.
By default there is no epilog.
See \fBProlog and Epilog Scripts\fR for more information.
.TP
\fBEpilogMsgTime\fR
The number of microseconds that the slurmctld daemon requires to process
an epilog completion message from the slurmd daemons. This parameter can
be used to prevent a burst of epilog completion messages from being sent
at the same time which should help prevent lost messages and improve
throughput for large jobs.
The default value is 2000 microseconds.
For a 1000 node job, this spreads the epilog completion messages out over
two seconds.
.TP
\fBEpilogSlurmctld\fR
Fully qualified pathname of a program for the slurmctld to execute
upon termination of a job allocation (e.g.
"/usr/local/slurm/epilog_controller").
The program executes as SlurmUser, which gives it permission to drain
nodes and requeue the job if a failure occurs (See scontrol(1)).
Exactly what the program does and how it accomplishes this is completely at
the discretion of the system administrator.
Information about the job being initiated, its allocated nodes, etc. is
passed to the program using environment variables.
See \fBProlog and Epilog Scripts\fR for more information.
.TP
\fBExtSensorsFreq\fR
The external sensors plugin sampling interval.
If \fBExtSensorsType=ext_sensors/none\fR, this parameter is ignored.
For all other values of \fBExtSensorsType\fR, this parameter is the number
of seconds between external sensors samples for hardware components (nodes,
switches, etc.). The default value is zero, which disables external
sensors sampling. Note: This parameter does not affect external sensors
data collection for jobs/steps.
.TP
\fBExtSensorsType\fR
Identifies the plugin to be used for external sensors data collection.
Slurmctld calls this plugin to collect external sensors data for jobs/steps
and hardware components. In case of node sharing between jobs the reported
values per job/step (through sstat or sacct) may not be accurate. See also
"man ext_sensors.conf".
Configurable values at present are:
.RS
.TP 20
\fBext_sensors/none\fR
No external sensors data is collected.
.TP
\fBext_sensors/rrd\fR
External sensors data is collected from the RRD database.
.RE
.TP
\fBFairShareDampeningFactor\fR
Dampen the effect of exceeding a user or group's fair share of allocated
resources. Higher values provide greater ability to differentiate
between exceeding the fair share at high levels (e.g. a value of 1 results
in almost no difference between overconsumption by a factor of 10 and 100,
while a value of 5 will result in a significant difference in priority).
The default value is 1.
.TP
\fBFastSchedule\fR
Controls how a node's configuration specifications in slurm.conf are used.
If the number of node configuration entries in the configuration file
is significantly lower than the number of nodes, setting FastSchedule to
1 will permit much faster scheduling decisions to be made.
(The scheduler can just check the values in a few configuration records
instead of possibly thousands of node records.)
Note that on systems with hyper\-threading, the processor count
reported by the node will be twice the actual processor count.
Consider which value you want to be used for scheduling purposes.
.RS
.TP 5
\fB0\fR
Base scheduling decisions upon the actual configuration of each individual
node except that the node's processor count in Slurm's configuration must
match the actual hardware configuration if \fBPreemptMode=suspend,gang\fR
or \fBSelectType=select/cons_res\fR are configured (both of those plugins
maintain resource allocation information using bitmaps for the cores in the
system and must remain static, while the node's memory and disk space can
be established later).
.TP
\fB1\fR (default)
Consider the configuration of each node to be that specified in the
slurm.conf configuration file and any node with less than the
configured resources will be set to DRAIN.
.TP
\fB2\fR
Consider the configuration of each node to be that specified in the
slurm.conf configuration file and any node with less than the
configured resources will \fBnot\fR be set DRAIN.
This option is generally only useful for testing purposes.
.RE
.TP
\fBFirstJobId\fR
The job id to be used for the first job submitted to Slurm without a
specific requested value. Job id values generated will be incremented by 1
for each subsequent job. This may be used to provide a meta\-scheduler
with a job id space which is disjoint from the interactive jobs.
The default value is 1.
Also see \fBMaxJobId\fR.
.TP
\fBGetEnvTimeout\fR
Used for Moab scheduled jobs only. Controls how long a job should wait
in seconds for loading the user's environment before attempting to
load it from a cache file. Applies when the srun or sbatch
\fI\-\-get\-user\-env\fR option is used. If set to 0 then always load
the user's environment from the cache file.
The default value is 2 seconds.
.TP
\fBGresTypes\fR
A comma delimited list of generic resources to be managed.
These generic resources may have an associated plugin available to provide
additional functionality.
No generic resources are managed by default.
Ensure this parameter is consistent across all nodes in the cluster for
proper operation.
The slurmctld daemon must be restarted for changes to this parameter to become
effective.
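For example, a sketch managing GPU and Intel MIC generic resources:
.nf
GresTypes=gpu,mic
.fi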
.TP
\fBGroupUpdateForce\fR
If set to a non\-zero value, then information about which users are members
of groups allowed to use a partition will be updated periodically, even when
there have been no changes to the /etc/group file.
Otherwise group member information will be updated periodically only after the
/etc/group file is updated.
The default value is 0.
Also see the \fBGroupUpdateTime\fR parameter.
.TP
\fBGroupUpdateTime\fR
Controls how frequently information about which users are members of
groups allowed to use a partition will be updated, and how long user
group membership lists will be cached.
The time interval is given in seconds with a default value of 600 seconds and
a maximum value of 4095 seconds.
A value of zero will prevent periodic updating of group membership information.
Also see the \fBGroupUpdateForce\fR parameter.
.TP
\fBHealthCheckInterval\fR
The interval in seconds between executions of \fBHealthCheckProgram\fR.
The default value is zero, which disables execution.
.TP
\fBHealthCheckNodeState\fR
Identify what node states should execute the \fBHealthCheckProgram\fR.
Multiple state values may be specified with a comma separator.
The default value is ANY to execute on nodes in any state.
.RS
.TP 12
\fBALLOC\fR
Run on nodes in the ALLOC state (all CPUs allocated).
.TP
\fBANY\fR
Run on nodes in any state.
.TP
\fBCYCLE\fR
Rather than running the health check program on all nodes at the same time,
cycle through running on all compute nodes through the course of the
\fBHealthCheckInterval\fR. May be combined with the various node state
options.
.TP
\fBIDLE\fR
Run on nodes in the IDLE state.
.TP
\fBMIXED\fR
Run on nodes in the MIXED state (some CPUs idle and other CPUs allocated).
.RE
.TP
\fBHealthCheckProgram\fR
Fully qualified pathname of a script to execute as user root periodically
on all compute nodes that are \fBnot\fR in the NOT_RESPONDING state. This
program may be used to verify the node is fully operational and DRAIN the node
or send email if a problem is detected.
Any action to be taken must be explicitly performed by the program
(e.g. execute
"scontrol update NodeName=foo State=drain Reason=tmp_file_system_full"
to drain a node).
The execution interval is controlled using the \fBHealthCheckInterval\fR
parameter.
Note that the \fBHealthCheckProgram\fR will be executed at the same time
on all nodes to minimize its impact upon parallel programs.
This program will be killed if it does not terminate normally within
60 seconds.
When slurmd is first started, this program will be run repeatedly until
it returns an exit code of 0, and will block slurmd from registering with
slurmctld until this has been satisfied.
The return code is otherwise ignored.
By default, no program will be executed.
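For example, a sketch staggering a hypothetical check script across idle
nodes over each five minute interval:
.nf
# the program path is illustrative
HealthCheckProgram=/usr/local/sbin/node_check.sh
HealthCheckInterval=300
HealthCheckNodeState=CYCLE,IDLE
.fi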
.TP
\fBInactiveLimit\fR
The interval, in seconds, after which a non\-responsive job allocation
command (e.g. \fBsrun\fR or \fBsalloc\fR) will result in the job being
terminated. If the node on which the command is executed fails or the
command abnormally terminates, this will terminate its job allocation.
This option has no effect upon batch jobs.
When setting a value, take into consideration that a debugger using \fBsrun\fR
to launch an application may leave the \fBsrun\fR command in a stopped state
for extended periods of time.
This limit is ignored for jobs running in partitions with the
\fBRootOnly\fR flag set (the scheduler running as root will be
responsible for the job).
The default value is unlimited (zero) and may not exceed 65533 seconds.
.TP
\fBJobAcctGatherType\fR
The job accounting mechanism type.
Acceptable values at present include "jobacct_gather/aix" (for AIX operating
system), "jobacct_gather/linux" (for Linux operating system),
"jobacct_gather/cgroup" and "jobacct_gather/none"
(no accounting data collected).
The default value is "jobacct_gather/none".
"jobacct_gather/cgroup" is a plugin for the Linux operating system
that uses cgroups to collect accounting statistics. The plugin collects the
following statistics: From the cgroup memory subsystem: memory.usage_in_bytes
(reported as 'pages') and rss from memory.stat (reported as 'rss'). From the
cgroup cpuacct subsystem: user cpu time and system cpu time. No value
is provided by cgroups for virtual memory size ('vsize').
In order to use the \fBsstat\fR tool, "jobacct_gather/aix", "jobacct_gather/linux",
or "jobacct_gather/cgroup" must be configured.
.br
\fBNOTE:\fR Changing this configuration parameter changes the contents of
the messages between Slurm daemons. Any previously running job steps are
managed by a slurmstepd daemon that will persist through the lifetime of
that job step and not change its communication protocol. Only change this
configuration parameter when there are no running job steps.
.TP
\fBJobAcctGatherFrequency\fR
The job accounting and profiling sampling intervals.
The supported format is as follows:
.RS
.TP 12
\fBJobAcctGatherFrequency=\fR\fI<datatype>\fR\fB=\fR\fI<interval>\fR
where \fI<datatype>\fR=\fI<interval>\fR specifies the task sampling
interval for the jobacct_gather plugin or a
sampling interval for a profiling type by the
acct_gather_profile plugin. Multiple,
comma-separated \fI<datatype>\fR=\fI<interval>\fR intervals
may be specified. Supported datatypes are as follows:
.RS
.TP
\fBtask=\fI<interval>\fR
where \fI<interval>\fR is the task sampling interval in seconds
for the jobacct_gather plugins and for task
profiling by the acct_gather_profile plugin.
.TP
\fBenergy=\fI<interval>\fR
where \fI<interval>\fR is the sampling interval in seconds
for energy profiling using the acct_gather_energy plugin.
.TP
\fBnetwork=\fI<interval>\fR
where \fI<interval>\fR is the sampling interval in seconds
for infiniband profiling using the acct_gather_infiniband
plugin.
.TP
\fBfilesystem=\fI<interval>\fR
where \fI<interval>\fR is the sampling interval in seconds
for filesystem profiling using the acct_gather_filesystem
plugin.
.RE
.RE
The default value for task sampling interval
is 30 seconds. The default value for all other intervals is 0.
An interval of 0 disables sampling of the specified type.
If the task sampling interval is 0, accounting
information is collected only at job termination (reducing Slurm
interference with the job).
.br
.br
Smaller (non\-zero) values have a greater impact upon job performance,
but a value of 30 seconds is not likely to be noticeable for
applications having less than 10,000 tasks.
.br
.br
Users can independently override each interval on a per job basis using the
\fB\-\-acctg\-freq\fR option when submitting the job.
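For example, a sketch sampling task data every 30 seconds and energy data
every 60 seconds, with other sampling disabled:
.nf
JobAcctGatherFrequency=task=30,energy=60
.fi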
.TP
\fBJobAcctGatherParams\fR
Arbitrary parameters for the job account gather plugin.
Acceptable values at present include:
.RS
.TP 20
\fBNoShared\fR
Exclude shared memory from accounting.
.TP
\fBUsePss\fR
Use PSS value instead of RSS to calculate real usage of memory.
The PSS value will be saved as RSS.
.TP
\fBNoOverMemoryKill\fR
Do not kill processes that use more than the requested memory.
This parameter should be used with caution because if a job exceeds
its memory allocation it may affect other processes and/or machine
health.
.RE
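For example, a sketch using PSS based memory accounting without killing jobs
that exceed their requested memory:
.nf
JobAcctGatherParams=UsePss,NoOverMemoryKill
.fi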
.TP
\fBJobCheckpointDir\fR
Specifies the default directory for storing or reading job checkpoint
information. The data stored here is only a few thousand bytes per job
and includes information needed to resubmit the job request, not the job's
memory image. The directory must be readable and writable by
\fBSlurmUser\fR, but not writable by regular users. The job memory images
may be in a different location, as specified by the \fB\-\-checkpoint\-dir\fR
option at job submit time or scontrol's \fBImageDir\fR option.
.TP
\fBJobCompHost\fR
The name of the machine hosting the job completion database.
Only used for database type storage plugins, ignored otherwise.
Also see \fBDefaultStorageHost\fR.
.TP
\fBJobCompLoc\fR
The fully qualified file name where job completion records are written
when the \fBJobCompType\fR is "jobcomp/filetxt", the name of the database
where job completion records are stored when the \fBJobCompType\fR is a
database, or a URL of the form http://yourelasticserver:port where job
completion records are indexed when the \fBJobCompType\fR is
"jobcomp/elasticsearch".
Also see \fBDefaultStorageLoc\fR.
.TP
\fBJobCompPass\fR
The password used to gain access to the database to store the job
completion data.
Only used for database type storage plugins, ignored otherwise.
Also see \fBDefaultStoragePass\fR.
.TP
\fBJobCompPort\fR
The listening port of the job completion database server.
Only used for database type storage plugins, ignored otherwise.
Also see \fBDefaultStoragePort\fR.
.TP
\fBJobCompType\fR
The job completion logging mechanism type.
Acceptable values at present include "jobcomp/none", "jobcomp/elasticsearch",
"jobcomp/filetxt", "jobcomp/mysql" and "jobcomp/script"".
The default value is "jobcomp/none", which means that upon job completion
the record of the job is purged from the system. If using the accounting
infrastructure this plugin may not be of interest since the information
here is redundant.
The value "jobcomp/elasticsearch" indicates that a record of the job
should be written to an Elasticsearch server specified by the
\fBJobCompLoc\fR parameter.
The value "jobcomp/filetxt" indicates that a record of the job should be
written to a text file specified by the \fBJobCompLoc\fR parameter.
The value "jobcomp/mysql" indicates that a record of the job should be
written to a MySQL or MariaDB database specified by the \fBJobCompLoc\fR
parameter.
The value "jobcomp/script" indicates that a script specified by the
\fBJobCompLoc\fR parameter is to be executed with environment variables
indicating the job information.
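For example, a sketch logging job completion records to a text file (the path
is illustrative):
.nf
JobCompType=jobcomp/filetxt
JobCompLoc=/var/log/slurm/job_completions.log
.fi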
.TP
\fBJobCompUser\fR
The user account for accessing the job completion database.
Only used for database type storage plugins, ignored otherwise.
Also see \fBDefaultStorageUser\fR.
.TP
\fBJobContainerType\fR
Identifies the plugin to be used for job tracking.
The slurmd daemon must be restarted for a change in JobContainerType
to take effect.
NOTE: The \fBJobContainerType\fR applies to a job allocation, while
\fBProctrackType\fR applies to job steps.
Acceptable values at present include:
.RS
.TP 20
\fBjob_container/cncu\fR
used only for Cray systems (CNCU = Compute Node Clean Up)
.TP
\fBjob_container/none\fR
used for all other system types
.RE
.TP
\fBJobCredentialPrivateKey\fR
Fully qualified pathname of a file containing a private key used for
authentication by Slurm daemons.
This parameter is ignored if \fBCryptoType=crypto/munge\fR.
.TP
\fBJobCredentialPublicCertificate\fR
Fully qualified pathname of a file containing a public key used for
authentication by Slurm daemons.
This parameter is ignored if \fBCryptoType=crypto/munge\fR.
.TP
\fBJobFileAppend\fR
This option controls what to do if a job's output or error file
exists when the job is started.
If \fBJobFileAppend\fR is set to a value of 1, then append to
the existing file.
By default, any existing file is truncated.
.TP
\fBJobRequeue\fR
This option controls the default ability for batch jobs to be requeued.
Jobs may be requeued explicitly by a system administrator, after node
failure, or upon preemption by a higher priority job.
If \fBJobRequeue\fR is set to a value of 1, then batch jobs may be requeued
unless explicitly disabled by the user.
If \fBJobRequeue\fR is set to a value of 0, then batch jobs will not be requeued
unless explicitly enabled by the user.
Use the \fBsbatch\fR \fI\-\-no\-requeue\fR or \fI\-\-requeue\fR
option to change the default behavior for individual jobs.
The default value is 1.
.TP
\fBJobSubmitPlugins\fR
A comma delimited list of job submission plugins to be used.
The specified plugins will be executed in the order listed.
These are intended to be site\-specific plugins which can be used to set
default job parameters and/or logging events.
Sample plugins available in the distribution include "all_partitions",
"defaults", "logging", "lua", and "partition".
For examples of use, see the Slurm code in "src/plugins/job_submit" and
"contribs/lua/job_submit*.lua" then modify the code to satisfy your needs.
Slurm can be configured to use multiple job_submit plugins if desired,
however the lua plugin will only execute one lua script named "job_submit.lua"
located in the default script directory (typically the subdirectory "etc" of
the installation directory).
No job submission plugins are used by default.
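For example, a sketch enabling only the lua plugin, which will execute the
"job_submit.lua" script from the default script directory:
.nf
JobSubmitPlugins=lua
.fi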
.TP
\fBKeepAliveTime\fR
Specifies how long socket communications used between the srun command and its
slurmstepd process are kept alive after disconnect.
Longer values can be used to improve reliability of communications in the
event of network failures.
By default, the system default value is used.
The value may not exceed 65533.
.TP
\fBKillOnBadExit\fR
If set to 1, the job will be terminated immediately when one of its
processes crashes or is aborted. With the default value of 0, if one of
the processes crashes or is aborted the other processes will continue
to run. The user can override this configuration parameter by using srun's
\fB\-K\fR, \fB\-\-kill\-on\-bad\-exit\fR.
.TP
\fBKillWait\fR
The interval, in seconds, given to a job's processes between the
SIGTERM and SIGKILL signals upon reaching its time limit.
If the job fails to terminate gracefully in the interval specified,
it will be forcibly terminated.
The default value is 30 seconds.
The value may not exceed 65533.
.TP
\fBNodeFeaturesPlugins\fR
Identifies the plugins to be used for support of node features which can
change through time. For example, a node which might be booted with various
BIOS settings. This is supported through the use of a node's active_features
and available_features information.
Acceptable values at present include:
.RS
.TP 20
\fBnode_features/knl_cray\fR
used only for Intel Knights Landing processors (KNL) on Cray systems
.RE
.TP
\fBLaunchParameters\fR
Identifies options to the job launch plugin.
Acceptable values include:
.RS
.TP 12
\fBtest_exec\fR
Validate the executable command's existence prior to attempting launch on
the compute nodes.
.RE
.TP
\fBLaunchType\fR
Identifies the mechanism to be used to launch application tasks.
Acceptable values include:
.RS
.TP 15
\fBlaunch/aprun\fR
For use with Cray systems with ALPS and the default value for those systems
.TP
\fBlaunch/poe\fR
For use with IBM Parallel Environment (PE) and the default value for systems
with the IBM NRT library installed.
.TP
\fBlaunch/runjob\fR
For use with IBM BlueGene/Q systems and the default value for those systems
.TP
\fBlaunch/slurm\fR
For all other systems and the default value for those systems
.RE
.TP
\fBLicenses\fR
Specification of licenses (or other resources available on all
nodes of the cluster) which can be allocated to jobs.
License names can optionally be followed by a colon
and count with a default count of one.
Multiple license names should be comma separated (e.g.
"Licenses=foo:4,bar").
Note that Slurm prevents jobs from being scheduled if their
required license specification is not available.
Slurm does not prevent jobs from using licenses that are
not explicitly listed in the job submission specification.
.TP
\fBLogTimeFormat\fR
Format of the timestamp in slurmctld and slurmd log files. Accepted
values are "iso8601", "iso8601_ms", "rfc5424", "rfc5424_ms", "clock",
"short" and "thread_id". The values ending in "_ms" differ from the ones without
in that fractional seconds with millisecond precision are printed. The
default value is "iso8601_ms". The "rfc5424" formats are the same as
the "iso8601" formats except that the timezone value is also
shown. The "clock" format shows a timestamp in microseconds retrieved
with the C standard clock() function. The "short" format is a short
date and time format. The "thread_id" format shows the timestamp
in the C standard ctime() function form without the year but
including the microseconds, the daemon's process ID and the current thread name
and ID.
.TP
\fBMailProg\fR
Fully qualified pathname to the program used to send email per user request.
The default value is "/usr/bin/mail".
.TP
\fBMaxArraySize\fR
The maximum job array size.
The maximum job array task index value will be one less than MaxArraySize
to allow for an index value of zero.
Configure MaxArraySize to 0 in order to disable job array use.
The value may not exceed 4000001.
The value of \fBMaxJobCount\fR should be much larger than \fBMaxArraySize\fR.
The default value is 1001.
.TP
\fBMaxJobCount\fR
The maximum number of jobs Slurm can have in its active database
at one time. Set the values of \fBMaxJobCount\fR and \fBMinJobAge\fR
to ensure the slurmctld daemon does not exhaust its memory or other
resources. Once this limit is reached, requests to submit additional
jobs will fail. The default value is 10000 jobs.
NOTE: Each task of a job array counts as one job even though they will not
occupy separate job records until modified or initiated.
Performance can suffer with more than a few hundred thousand jobs.
Setting a MaxSubmitJobs limit per user is generally valuable to prevent a
single user from filling the system with jobs.
This is accomplished using Slurm's database and configuring enforcement of
resource limits.
This value may not be reset via "scontrol reconfig".
It only takes effect upon restart of the slurmctld daemon.
.TP
\fBMaxJobId\fR
The maximum job id to be used for jobs submitted to Slurm without a
specific requested value EXCEPT for jobs visible between clusters.
Job id values generated will be incremented by 1 for each subsequent job.
Once \fBMaxJobId\fR is reached, the next job will be assigned \fBFirstJobId\fR.
The default value is 2,147,418,112 (0x7fff0000).
Jobs visible across clusters will always have a job ID of 2,147,483,648 or higher.
Also see \fBFirstJobId\fR.
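For example, a sketch confining generated job ids to an illustrative range:
.nf
FirstJobId=1000
MaxJobId=99999
.fi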
.TP
\fBMaxMemPerCPU\fR
Maximum real memory size available per allocated CPU in megabytes.
Used to avoid over\-subscribing memory and causing paging.
\fBMaxMemPerCPU\fR would generally be used if individual processors
are allocated to jobs (\fBSelectType=select/cons_res\fR).
The default value is 0 (unlimited).
Also see \fBDefMemPerCPU\fR and \fBMaxMemPerNode\fR.
\fBMaxMemPerCPU\fR and \fBMaxMemPerNode\fR are mutually exclusive.
NOTE: Enforcement of memory limits currently requires enabling of
accounting, which samples memory use on a periodic basis (data need
not be stored, just collected).
NOTE: If a job specifies a memory per CPU limit that exceeds this system limit,
that job's count of CPUs per task will automatically be increased. This may
result in the job failing due to CPU count limits.
.TP
\fBMaxMemPerNode\fR
Maximum real memory size available per allocated node in megabytes.
Used to avoid over\-subscribing memory and causing paging.
\fBMaxMemPerNode\fR would generally be used if whole nodes
are allocated to jobs (\fBSelectType=select/linear\fR) and
resources are over\-subscribed (\fBOverSubscribe=yes\fR or
\fBOverSubscribe=force\fR).
The default value is 0 (unlimited).
Also see \fBDefMemPerNode\fR and \fBMaxMemPerCPU\fR.
\fBMaxMemPerCPU\fR and \fBMaxMemPerNode\fR are mutually exclusive.
NOTE: Enforcement of memory limits currently requires enabling of
accounting, which samples memory use on a periodic basis (data need
not be stored, just collected).
.TP
\fBMaxStepCount\fR
The maximum number of steps that any job can initiate. This parameter
is intended to limit the effect of bad batch scripts.
The default value is 40000 steps.
.TP
\fBMaxTasksPerNode\fR
Maximum number of tasks Slurm will allow a job step to spawn
on a single node. The default \fBMaxTasksPerNode\fR is 512.
May not exceed 65533.
.TP
\fBMCSParameters\fR
MCS (Multi\-Category Security) plugin parameters.
The supported parameters are specific to the \fBMCSPlugin\fR.
Changes to this value take effect when the Slurm daemons are reconfigured.
More information about MCS is available in the Slurm documentation.
.TP
\fBMCSPlugin\fR
MCS (Multi\-Category Security): associate a security label with jobs and ensure
that nodes can only be shared among jobs using the same security label.
Acceptable values include:
.RS
.TP 12
\fBmcs/none\fR
is the default value.
No security label associated with jobs,
no particular security restriction when sharing nodes among jobs.
.TP
\fBmcs/group\fR
only users with the same group can share the nodes.
.TP
\fBmcs/user\fR
a node cannot be shared with other users.
.RE
.TP
\fBMemLimitEnforce\fR
If set to "no" then Slurm will not terminate the job or the job step
if they exceeds the value requested using the \-\-mem\-per\-cpu option of
salloc/sbatch/srun. This is useful if jobs need to specify \-\-mem\-per\-cpu
for scheduling but they should not be terminate if they exceed the
estimated value. The default value is 'yes', terminate the job/step
if exceed the requested memory.
.TP
\fBMessageTimeout\fR
Time permitted for a round\-trip communication to complete
in seconds. Default value is 10 seconds. For systems with
shared nodes, the slurmd daemon could be paged out and
necessitate higher values.
.TP
\fBMinJobAge\fR
The minimum age of a completed job before its record is purged from
Slurm's active database. Set the values of \fBMaxJobCount\fR and
\fBMinJobAge\fR to ensure the slurmctld daemon does not exhaust
its memory or other resources. The default value is 300 seconds.
A value of zero prevents any job record purging.
In order to eliminate some possible race conditions, the minimum non\-zero
value for \fBMinJobAge\fR recommended is 2.
.TP
\fBMpiDefault\fR
Identifies the default type of MPI to be used.
Srun may override this configuration parameter in any case.
Currently supported versions include:
\fBlam\fR,
\fBmpich1_p4\fR,
\fBmpich1_shmem\fR,
\fBmpichgm\fR,
\fBmpichmx\fR,
\fBmvapich\fR,
\fBnone\fR (default, which works for many other versions of MPI),
\fBopenmpi\fR and
\fBpmi2\fR.
More information about MPI use is available in the Slurm documentation.
.TP
\fBMpiParams\fR
MPI parameters.
Used to identify ports used by OpenMPI only and the input format is
"ports=12000\-12999" to identify a range of communication ports to be used.
.TP
\fBMsgAggregationParams\fR
Message aggregation parameters. Message aggregation
is an optional feature that may improve system performance by reducing
the number of separate messages passed between nodes. The feature
works by routing messages through one or more message collector
nodes between their source and destination nodes. At each
collector node, messages with the same destination received
during a defined message collection window are packaged into a single
composite message. When the window expires, the composite message
is sent to the next collector node on
the route to its destination. The route between each source
and destination node is provided by the Route plugin. When a
composite message is received at its destination node, the
original messages are extracted and processed as if they
had been sent directly.
.br
.br
Currently, the only message types supported by message
aggregation are the node registration, batch script completion,
step completion, and epilog complete messages.
.br
.br
The format for this parameter is as follows:
.RS
.TP 12
\fBWindowMsgs=\fR\fI<count>\fR
Maximum number of messages in each message collection window.
.TP
\fBWindowTime=\fR\fI<time>\fR
Maximum elapsed time, in milliseconds, of each message collection window.
.RE
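Assuming those two sub\-parameters, a hypothetical setting that collects up to
25 messages per 50 millisecond window would be:
.br
MsgAggregationParams=WindowMsgs=25,WindowTime=50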
.TP
\fBFrontendAddr\fR
Name that a frontend node should be referred to in establishing
a communications path. This name will be used as an
argument to the gethostbyname() function for identification.
As with \fBFrontendName\fR, list the individual node addresses rather than
using a hostlist expression.
The number of \fBFrontendAddr\fR records per line must equal the number of
\fBFrontendName\fR records per line (i.e. you can't map two node names to
one address).
\fBFrontendAddr\fR may also contain IP addresses.
By default, the \fBFrontendAddr\fR will be identical in value to
\fBFrontendName\fR.
.TP
\fBPort\fR
The port number that the Slurm compute node daemon, \fBslurmd\fR, listens
to for work on this particular frontend node. By default there is a single port
number for all \fBslurmd\fR daemons on all frontend nodes as defined by the
\fBSlurmdPort\fR configuration parameter. Use of this option is not generally
recommended except for development or testing purposes.
\fBNote\fR: On Cray systems, Realm-Specific IP Addressing (RSIP) will
automatically try to interact with anything opened on ports 8192\-60000.
Configure Port to use a port outside of the configured SrunPortRange and
RSIP's port range.
.TP
\fBReason\fR
Identifies the reason for a frontend node being in state "DOWN", "DRAINED",
"DRAINING", "FAIL" or "FAILING".
Use quotes to enclose a reason having more than one word.
.TP
\fBState\fR
State of the frontend node with respect to the initiation of user jobs.
Acceptable values are "DOWN", "DRAIN", "FAIL", "FAILING" and "UNKNOWN".
"DOWN" indicates the frontend node has failed and is unavailable to be
allocated work.
"DRAIN" indicates the frontend node is unavailable to be allocated work.
"FAIL" indicates the frontend node is expected to fail soon, has
no jobs allocated to it, and will not be allocated to any new jobs.
"FAILING" indicates the frontend node is expected to fail soon, has
one or more jobs allocated to it, but will not be allocated to any new jobs.
"UNKNOWN" indicates the frontend node's state is undefined (BUSY or IDLE),
but will be established when the \fBslurmd\fR daemon on that node registers.
The default value is "UNKNOWN".
Also see the \fBDownNodes\fR parameter below.
For example: "FrontendName=frontend[00\-03] FrontendAddr=efrontend[00\-03]
State=UNKNOWN" is used to define four front end nodes for running slurmd
daemons.
.LP
The partition configuration permits you to establish different job
limits or access controls for various groups (or partitions) of nodes.
Nodes may be in more than one partition, making partitions serve
as general purpose queues.
For example one may put the same set of nodes into two different
partitions, each with different constraints (time limit, job sizes,
groups allowed to use the partition, etc.).
Jobs are allocated resources within a single partition.
Default values can be specified with a record in which
\fBPartitionName\fR is "DEFAULT".
The default entry values will apply only to lines following it in the
configuration file and the default values can be reset multiple times
in the configuration file with multiple entries where "PartitionName=DEFAULT".
The "PartitionName=" specification must be placed on every line
describing the configuration of partitions.
Each line where \fBPartitionName\fR is "DEFAULT" will replace or add to previous
default values and not reinitialize the default values.
A single partition name can not appear as a PartitionName value in more than
one line (duplicate partition name records will be ignored).
If a partition that is in use is deleted from the configuration and Slurm
is restarted or reconfigured (scontrol reconfigure), jobs using the partition
are canceled.
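For example, a hypothetical configuration (names and limits illustrative)
showing how "DEFAULT" records apply to, and may be reset between, subsequent
partition definitions:
.br
PartitionName=DEFAULT State=UP MaxTime=60
.br
PartitionName=short Nodes=tux[0\-15]
.br
PartitionName=DEFAULT MaxTime=UNLIMITED
.br
PartitionName=long Nodes=tux[16\-31]
.br
Here "short" inherits State=UP and MaxTime=60, while "long" inherits State=UP
and the reset MaxTime=UNLIMITED.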
\fBNOTE:\fR Put all parameters for each partition on a single line.
Each line of partition configuration information should
represent a different partition.
The partition configuration file contains the following information:
.TP
\fBAllocNodes\fR
Comma separated list of nodes from which users can submit jobs in the
partition.
Node names may be specified using the node range expression syntax
described above.
The default value is "ALL".
.TP
\fBAllowAccounts\fR
Comma separated list of accounts which may execute jobs in the partition.
The default value is "ALL".
\fBNOTE:\fR If AllowAccounts is used then DenyAccounts will not be enforced.
Also refer to DenyAccounts.
.TP
\fBAllowGroups\fR
Comma separated list of group names which may execute jobs in the partition.
If \fBat least\fR one group associated with the user attempting to execute the
job is in AllowGroups, the user will be permitted to use this partition.
Jobs executed as user root can use any partition without regard to
the value of AllowGroups.
If user root attempts to execute a job as another user (e.g. using
srun's \-\-uid option), this other user must be in one of groups
identified by AllowGroups for the job to successfully execute.
The default value is "ALL".
\fBNOTE:\fR For performance reasons, Slurm maintains a list of user IDs
allowed to use each partition and this is checked at job submission time.
This list of user IDs is updated when the \fBslurmctld\fR daemon is restarted,
reconfigured (e.g. "scontrol reconfig") or the partition's \fBAllowGroups\fR
value is reset, even if its value is unchanged
(e.g. "scontrol update PartitionName=name AllowGroups=group").
For a user's access to a partition to change, both the user's group membership
must change and Slurm's internal user ID list must be updated using one of the
methods described above.
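As a hypothetical illustration, restricting a partition to one group and then
forcing the user ID list to refresh after a group membership change:
.br
PartitionName=restricted Nodes=tux[0\-7] AllowGroups=hpcadmin
.br
# after changing group membership:
.br
scontrol update PartitionName=restricted AllowGroups=hpcadmin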
.TP
\fBAllowQos\fR
Comma separated list of Qos which may execute jobs in the partition.
Jobs executed as user root can use any partition without regard to
the value of AllowQos.
The default value is "ALL".
\fBNOTE:\fR If AllowQos is used then DenyQos will not be enforced.
Also refer to DenyQos.
.TP
\fBAlternate\fR
Partition name of alternate partition to be used if the state of this partition
is "DRAIN" or "INACTIVE."
.TP
\fBDefault\fR
If this keyword is set, jobs submitted without a partition
specification will utilize this partition.
Possible values are "YES" and "NO".
The default value is "NO".
.TP
\fBDefMemPerCPU\fR
Default real memory size available per allocated CPU in megabytes.
Used to avoid over\-subscribing memory and causing paging.
\fBDefMemPerCPU\fR would generally be used if individual processors
are allocated to jobs (\fBSelectType=select/cons_res\fR).
If not set, the \fBDefMemPerCPU\fR value for the entire cluster will be used.
Also see \fBDefMemPerNode\fR and \fBMaxMemPerCPU\fR.
\fBDefMemPerCPU\fR and \fBDefMemPerNode\fR are mutually exclusive.
NOTE: Enforcement of memory limits currently requires enabling of
accounting, which samples memory use on a periodic basis (data need
not be stored, just collected).
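A hypothetical per\-CPU memory policy (values illustrative) combining this
parameter with \fBMaxMemPerCPU\fR:
.br
SelectType=select/cons_res
.br
SelectTypeParameters=CR_Core_Memory
.br
PartitionName=batch Nodes=tux[0\-31] DefMemPerCPU=2048 MaxMemPerCPU=4096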
.TP
\fBDefMemPerNode\fR
Default real memory size available per allocated node in megabytes.
Used to avoid over\-subscribing memory and causing paging.
\fBDefMemPerNode\fR would generally be used if whole nodes
are allocated to jobs (\fBSelectType=select/linear\fR) and
resources are over\-subscribed (\fBOverSubscribe=yes\fR or
\fBOverSubscribe=force\fR).
If not set, the \fBDefMemPerNode\fR value for the entire cluster will be used.
Also see \fBDefMemPerCPU\fR and \fBMaxMemPerNode\fR.
\fBDefMemPerCPU\fR and \fBDefMemPerNode\fR are mutually exclusive.
NOTE: Enforcement of memory limits currently requires enabling of
accounting, which samples memory use on a periodic basis (data need
not be stored, just collected).
.TP
\fBDenyAccounts\fR
Comma separated list of accounts which may not execute jobs in the partition.
By default, no accounts are denied access.
\fBNOTE:\fR If AllowAccounts is used then DenyAccounts will not be enforced.
Also refer to AllowAccounts.
.TP
\fBDenyQos\fR
Comma separated list of Qos which may not execute jobs in the partition.
By default, no QOS are denied access.
\fBNOTE:\fR If AllowQos is used then DenyQos will not be enforced.
Also refer to AllowQos.
.TP
\fBDefaultTime\fR
Run time limit used for jobs that don't specify a value. If not set
then MaxTime will be used.
Format is the same as for MaxTime.
.TP
\fBDisableRootJobs\fR
If set to "YES" then user root will be prevented from running any jobs
on this partition.
The default value will be the value of \fBDisableRootJobs\fR set
outside of a partition specification (which is "NO", allowing user
root to execute jobs).
.TP
\fBExclusiveUser\fR
If set to "YES" then nodes will be exclusively allocated to users.
Multiple jobs may be run for the same user, but only one user can be active
on each node at a time.
This capability is also available on a per-job basis by using the
\fB\-\-exclusive=user\fR option.
.TP
\fBGraceTime\fR
Specifies, in units of seconds, the preemption grace time
to be extended to a job which has been selected for preemption.
The default value is zero, no preemption grace time is allowed on
this partition.
Once a job has been selected for preemption, its end time is set to the
current time plus GraceTime. The job is immediately sent SIGCONT and SIGTERM
signals in order to provide notification of its imminent termination.
This is followed by the SIGCONT, SIGTERM and SIGKILL signal sequence upon
reaching its new end time.
(Meaningful only for PreemptMode=CANCEL)
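For example, a hypothetical preemptable partition giving jobs two minutes to
save state before cancellation (assumes PreemptMode=CANCEL):
.br
PartitionName=scavenge Nodes=tux[0\-31] GraceTime=120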
.TP
\fBHidden\fR
Specifies if the partition and its jobs are to be hidden by default.
Hidden partitions will by default not be reported by the Slurm APIs or commands.
Possible values are "YES" and "NO".
The default value is "NO".
Note that partitions that a user lacks access to by virtue of the
\fBAllowGroups\fR parameter will also be hidden by default.
.TP
\fBLLN\fR
Schedule resources to jobs on the least loaded nodes (based upon the number
of idle CPUs). This is generally only recommended for an environment with
serial jobs as idle resources will tend to be highly fragmented, resulting
in parallel jobs being distributed across many nodes.
Note that node \fBWeight\fR takes precedence over how many idle resources are
on each node.
Also see the \fBSelectTypeParameters\fR configuration parameter \fBCR_LLN\fR to
use the least loaded nodes in every partition.
.TP
\fBMaxCPUsPerNode\fR
Maximum number of CPUs on any node available to all jobs from this partition.
This can be especially useful to schedule GPUs. For example a node can be
associated with two Slurm partitions (e.g. "cpu" and "gpu") and the
partition/queue "cpu" could be limited to only a subset of the node's CPUs,
ensuring that one or more CPUs would be available to jobs in the "gpu"
partition/queue.
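A hypothetical sketch of that layout, assuming node tux0 has 16 CPUs and
2 GPUs, reserving 2 CPUs for the "gpu" partition:
.br
NodeName=tux0 CPUs=16 Gres=gpu:2
.br
PartitionName=cpu Nodes=tux0 MaxCPUsPerNode=14
.br
PartitionName=gpu Nodes=tux0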
.TP
\fBMaxMemPerCPU\fR
Maximum real memory size available per allocated CPU in megabytes.
Used to avoid over\-subscribing memory and causing paging.
\fBMaxMemPerCPU\fR would generally be used if individual processors
are allocated to jobs (\fBSelectType=select/cons_res\fR).
If not set, the \fBMaxMemPerCPU\fR value for the entire cluster will be used.
Also see \fBDefMemPerCPU\fR and \fBMaxMemPerNode\fR.
\fBMaxMemPerCPU\fR and \fBMaxMemPerNode\fR are mutually exclusive.
NOTE: Enforcement of memory limits currently requires enabling of
accounting, which samples memory use on a periodic basis (data need
not be stored, just collected).
.TP
\fBMaxMemPerNode\fR
Maximum real memory size available per allocated node in megabytes.
Used to avoid over\-subscribing memory and causing paging.
\fBMaxMemPerNode\fR would generally be used if whole nodes
are allocated to jobs (\fBSelectType=select/linear\fR) and
resources are over\-subscribed (\fBOverSubscribe=yes\fR or
\fBOverSubscribe=force\fR).
If not set, the \fBMaxMemPerNode\fR value for the entire cluster will be used.
Also see \fBDefMemPerNode\fR and \fBMaxMemPerCPU\fR.
\fBMaxMemPerCPU\fR and \fBMaxMemPerNode\fR are mutually exclusive.
NOTE: Enforcement of memory limits currently requires enabling of
accounting, which samples memory use on a periodic basis (data need
not be stored, just collected).
.TP
\fBMaxNodes\fR
Maximum count of nodes which may be allocated to any single job.
For BlueGene systems this will be a c\-nodes count and will be converted
to a midplane count with a reduction in resolution.
The default value is "UNLIMITED", which is represented internally as \-1.
This limit does not apply to jobs executed by SlurmUser or user root.
.TP
\fBMaxTime\fR
Maximum run time limit for jobs.
Format is minutes, minutes:seconds, hours:minutes:seconds,
days\-hours, days\-hours:minutes, days\-hours:minutes:seconds or
"UNLIMITED".
Time resolution is one minute and second values are rounded up to
the next minute.
This limit does not apply to jobs executed by SlurmUser or user root.
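For example, a hypothetical limit of two days and twelve hours:
.br
PartitionName=long Nodes=tux[0\-31] MaxTime=2\-12:00:00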
.TP
\fBMinNodes\fR
Minimum count of nodes which may be allocated to any single job.
For BlueGene systems this will be a c\-nodes count and will be converted
to a midplane count with a reduction in resolution.
The default value is 1.
This limit does not apply to jobs executed by SlurmUser or user root.
.TP
\fBNodes\fR
Comma separated list of nodes (or base partitions for BlueGene systems)
which are associated with this partition.
Node names may be specified using the node range expression syntax
described above. A blank list of nodes
(i.e. "Nodes= ") can be used if one wants a partition to exist,
but have no resources (possibly on a temporary basis).
A value of "ALL" is mapped to all nodes configured in the cluster.
.TP
\fBOverSubscribe\fR
Controls the ability of the partition to execute more than one job at a
time on each resource (node, socket or core depending upon the value
of \fBSelectTypeParameters\fR).
If resources are to be over\-subscribed, avoiding memory over\-subscription
is very important.
\fBSelectTypeParameters\fR should be configured to treat
memory as a consumable resource and the \fB\-\-mem\fR option
should be used for job allocations.
Sharing of resources is typically useful only when using gang scheduling
(\fBPreemptMode=suspend,gang\fR).
Possible values for \fBOverSubscribe\fR are "EXCLUSIVE", "FORCE", "YES", and "NO".
Note that a value of "YES" or "FORCE" can negatively impact performance
for systems with many thousands of running jobs.
The default value is "NO".
For more information see the following web pages:
.br
.na
\fIhttps://slurm.schedmd.com/cons_res.html\fR,
.br
\fIhttps://slurm.schedmd.com/cons_res_share.html\fR,
.br
\fIhttps://slurm.schedmd.com/gang_scheduling.html\fR, and
.br
\fIhttps://slurm.schedmd.com/preempt.html\fR.
.ad
.RS
.TP 12
\fBEXCLUSIVE\fR
Allocates entire nodes to jobs even with select/cons_res configured.
Jobs that run in partitions with "OverSubscribe=EXCLUSIVE" will have
exclusive access to all allocated nodes.
.TP
\fBFORCE\fR
Makes all resources in the partition available for sharing
without any means for users to disable it.
May be followed with a colon and maximum number of jobs in
running or suspended state.
For example "OverSubscribe=FORCE:4" enables each node, socket or
core to execute up to four jobs at once.
Recommended only for BlueGene systems configured with
small blocks or for systems running
with gang scheduling (\fBPreemptMode=suspend,gang\fR).
NOTE: \fIPreemptType=QOS\fR will permit one additional job to be run
on the partition if started due to job preemption. For example, a configuration
of \fIOverSubscribe=FORCE:1\fR will only permit one job per resource normally,
but a second job can be started if done so through preemption based upon QOS.
The use of \fIPreemptType=QOS\fR and \fIPreemptType=Suspend\fR only applies
with \fISelectType=select/cons_res\fR.
.TP
\fBYES\fR
Makes all resources in the partition available for sharing upon request by
the job.
Resources will only be over\-subscribed when explicitly requested
by the user using the "\-\-share" option on job submission.
May be followed with a colon and maximum number of jobs in
running or suspended state.
For example "OverSubscribe=YES:4" enables each node, socket or
core to execute up to four jobs at once.
Recommended only for systems running with gang scheduling
(\fBPreemptMode=suspend,gang\fR).
.TP
\fBNO\fR
Selected resources are allocated to a single job. No resource will be
allocated to more than one job.
.RE
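For example, a hypothetical gang\-scheduling setup allowing two jobs to
time\-share each resource:
.br
PreemptMode=suspend,gang
.br
PartitionName=interactive Nodes=tux[0\-15] OverSubscribe=FORCE:2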
.TP
\fBPartitionName\fR
Name by which the partition may be referenced (e.g. "Interactive").
This name can be specified by users when submitting jobs.
If the \fBPartitionName\fR is "DEFAULT", the values specified
with that record will apply to subsequent partition specifications
unless explicitly set to other values in that partition record or
replaced with a different set of default values.
Each line where \fBPartitionName\fR is "DEFAULT" will replace or add to previous
default values and not reinitialize the default values.
.TP
\fBPreemptMode\fR
Mechanism used to preempt jobs from this partition when
\fBPreemptType=preempt/partition_prio\fR is configured.
This partition specific \fBPreemptMode\fR configuration parameter will override
the \fBPreemptMode\fR configuration parameter set for the cluster as a whole.
The cluster\-level \fBPreemptMode\fR must include the GANG option if
\fBPreemptMode\fR is configured to SUSPEND for any partition.
The cluster\-level \fBPreemptMode\fR must not be OFF if \fBPreemptMode\fR
is enabled for any partition.
See the description of the cluster\-level \fBPreemptMode\fR configuration
parameter above for further information.
.TP
\fBPriorityJobFactor\fR
Partition factor used by priority/multifactor plugin in calculating job priority.
The value may not exceed 65533.
Also see PriorityTier.
.TP
\fBPriorityTier\fR
Jobs submitted to a partition with a higher priority tier value will be
dispatched before pending jobs in partition with lower priority tier value and,
if possible, they will preempt running jobs from partitions with lower priority
tier values.
Note that a partition's priority tier takes precedence over a job's priority.
The value may not exceed 65533.
Also see PriorityJobFactor.
.TP
\fBQOS\fR
Used to extend the limits available to a QOS on a partition. Jobs will not be
associated to this QOS outside of being associated to the partition. They
will still be associated to their requested QOS.
By default, no QOS is used.
\fBNOTE:\fR If a limit is set in both the Partition's QOS and the Job's QOS,
the Partition QOS will be honored unless the Job's QOS has the
\fBOverPartQOS\fR flag set, in which case the Job's QOS will have priority.
.TP
\fBReqResv\fR
Specifies that users of this partition are required to designate a reservation
when submitting a job. This option can be useful in restricting usage
of a partition that may have higher priority or additional resources to be
allowed only within a reservation.
Possible values are "YES" and "NO".
The default value is "NO".
.TP
\fBRootOnly\fR
Specifies if only user ID zero (i.e. user \fIroot\fR) may allocate resources
in this partition. User root may allocate resources for any other user,
but the request must be initiated by user root.
This option can be useful for a partition to be managed by some
external entity (e.g. a higher\-level job manager) and prevents
users from directly using those resources.
Possible values are "YES" and "NO".
The default value is "NO".
.TP
\fBSelectTypeParameters\fR
Partition\-specific resource allocation type.
This option replaces the global \fBSelectTypeParameters\fR value.
Supported values are \fBCR_Core\fR, \fBCR_Core_Memory\fR, \fBCR_Socket\fR and
\fBCR_Socket_Memory\fR.
Use requires the system\-wide \fBSelectTypeParameters\fR value be set.
.TP
\fBShared\fR
The \fBShared\fR configuration parameter has been replaced by the
\fBOverSubscribe\fR parameter described above.
.TP
\fBState\fR
State of partition or availability for use. Possible values
are "UP", "DOWN", "DRAIN" and "INACTIVE". The default value is "UP".
See also the related "Alternate" keyword.
.RS
.TP 10
\fBUP\fP
Designates that new jobs may be queued on the partition, and that
jobs may be allocated nodes and run from the partition.
.TP
\fBDOWN\fP
Designates that new jobs may be queued on the partition, but
queued jobs may not be allocated nodes and run from the partition. Jobs
already running on the partition continue to run. The jobs
must be explicitly canceled to force their termination.
.TP
\fBDRAIN\fP
Designates that no new jobs may be queued on the partition (job
submission requests will be denied with an error message), but jobs
already queued on the partition may be allocated nodes and run.
See also the "Alternate" partition specification.
.TP
\fBINACTIVE\fP
Designates that no new jobs may be queued on the partition,
and jobs already queued may not be allocated nodes and run.
See also the "Alternate" partition specification.
.RE
.TP
\fBTRESBillingWeights\fR
TRESBillingWeights is used to define the billing weights of each TRES type that
will be used in calculating the usage of a job.
Billing weights are specified as a comma\-separated list of
\fI<TRES Type>\fR=\fI<TRES Billing Weight>\fR pairs.
Any TRES Type is available for billing. Note that the base unit for memory and
burst buffers is megabytes.
By default the billing of TRES is calculated as the sum of all TRES types
multiplied by their corresponding billing weight.
The weighted amount of a resource can be adjusted by adding a suffix of K,M,G,T
or P after the billing weight. For example, a memory weight of "mem=.25" on a
job allocated 8GB will be billed 2048 (8192MB *.25) units. A memory weight of
"mem=.25G" on the same job will be billed 2 (8192MB * (.25/1024)) units.
When a job is allocated 1 CPU and 8 GB of memory on a partition configured with
TRESBillingWeights="CPU=1.0,Mem=0.25G,GRES/gpu=2.0", the billable TRES will be:
(1*1.0) + (8*0.25) + (0*2.0) = 3.0.
If PriorityFlags=MAX_TRES is configured, the billable TRES is calculated as the
MAX of individual TRES' on a node (e.g. cpus, mem, gres) plus the sum of all
global TRES' (e.g. licenses). Using the same example above the billable TRES
will be MAX(1*1.0, 8*0.25) + (0*2.0) = 2.0.
If TRESBillingWeights is not defined then the job is billed against the total
number of allocated CPUs.
\fBNOTE:\fR TRESBillingWeights is only used when calculating fairshare and
doesn't affect job priority directly as it is currently not used for the size of
the job. If you want TRES' to play a role in the job's priority then refer to
the PriorityWeightTRES option.
.RE
.SH "Prolog and Epilog Scripts"
There are a variety of prolog and epilog program options that
execute with various permissions and at various times.
The four options most likely to be used are:
\fBProlog\fR and \fBEpilog\fR (executed once on each compute node
for each job) plus \fBPrologSlurmctld\fR and \fBEpilogSlurmctld\fR
(executed once on the \fBControlMachine\fR for each job).
NOTE: Standard output and error messages are normally not preserved.
Explicitly write output and error messages to an appropriate location
if you wish to preserve that information.
NOTE: By default the Prolog script is ONLY run on any individual
node when it first sees a job step from a new allocation; it does not
run the Prolog immediately when an allocation is granted. If no job steps
from an allocation are run on a node, it will never run the Prolog for that
allocation. This Prolog behavior can be changed by the
\fBPrologFlags\fR parameter. The Epilog, on the other hand, always
runs on every node of an allocation when the allocation is released.
If the Epilog fails (returns a non\-zero exit code), this will result in the
node being set to a DRAIN state.
If the EpilogSlurmctld fails (returns a non\-zero exit code), this will only
be logged.
If the Prolog fails (returns a non\-zero exit code), this will result in the
node being set to a DRAIN state and the job being requeued in a held state
unless \fBnohold_on_prolog_fail\fR is configured in
\fBSchedulerParameters\fR.
If the PrologSlurmctld fails (returns a non\-zero exit code), this will result
in the job being requeued to execute on another node if possible. Only batch jobs
can be requeued.
Interactive jobs (salloc and srun) will be cancelled if the
PrologSlurmctld fails.
Information about the job is passed to the script using environment
variables.
Unless otherwise specified, these environment variables are available
to all of the programs.
.TP
\fBBASIL_RESERVATION_ID\fR
Basil reservation ID.
Available on Cray systems with ALPS only.
.TP
\fBMPIRUN_PARTITION\fR
BlueGene partition name.
Available on BlueGene systems only.
.TP
\fBSLURM_ARRAY_JOB_ID\fR
If this job is part of a job array, this will be set to the job ID.
Otherwise it will not be set.
To reference this specific task of a job array, combine
SLURM_ARRAY_JOB_ID with SLURM_ARRAY_TASK_ID
(e.g. "scontrol update ${SLURM_ARRAY_JOB_ID}_{$SLURM_ARRAY_TASK_ID} ...");
Available in \fBPrologSlurmctld\fR and \fBEpilogSlurmctld\fR only.
.TP
\fBSLURM_ARRAY_TASK_ID\fR
If this job is part of a job array, this will be set to the task ID.
Otherwise it will not be set.
To reference this specific task of a job array, combine
SLURM_ARRAY_JOB_ID with SLURM_ARRAY_TASK_ID
(e.g. "scontrol update ${SLURM_ARRAY_JOB_ID}_{$SLURM_ARRAY_TASK_ID} ...");
Available in \fBPrologSlurmctld\fR and \fBEpilogSlurmctld\fR only.
.TP
\fBSLURM_ARRAY_TASK_MAX\fR
If this job is part of a job array, this will be set to the maximum
task ID.
Otherwise it will not be set.
Available in \fBPrologSlurmctld\fR and \fBEpilogSlurmctld\fR only.
.TP
\fBSLURM_ARRAY_TASK_MIN\fR
If this job is part of a job array, this will be set to the minimum
task ID.
Otherwise it will not be set.
Available in \fBPrologSlurmctld\fR and \fBEpilogSlurmctld\fR only.
.TP
\fBSLURM_ARRAY_TASK_STEP\fR
If this job is part of a job array, this will be set to the step
size of task IDs.
Otherwise it will not be set.
Available in \fBPrologSlurmctld\fR and \fBEpilogSlurmctld\fR only.
.TP
\fBSLURM_CLUSTER_NAME\fR
Name of the cluster executing the job.
.TP
\fBSLURM_JOB_ACCOUNT\fR
Account name used for the job.
Available in \fBPrologSlurmctld\fR and \fBEpilogSlurmctld\fR only.
.TP
\fBSLURM_JOB_CONSTRAINTS\fR
Features required to run the job.
Available in \fBProlog\fR, \fBPrologSlurmctld\fR and \fBEpilogSlurmctld\fR only.
.TP
\fBSLURM_JOB_DERIVED_EC\fR
The highest exit code of all of the job steps.
Available in \fBEpilogSlurmctld\fR only.
.TP
\fBSLURM_JOB_EXIT_CODE\fR
The exit code of the job script (or salloc). The value is the status
as returned by the wait() system call (see wait(2)).
Available in \fBEpilogSlurmctld\fR only.
.TP
\fBSLURM_JOB_EXIT_CODE2\fR
The exit code of the job script (or salloc). The value has the format
\fI<exit>\fR:\fI<signal>\fR. The first number is the exit code, typically as
set by the exit() function. The second number is the signal that caused the
process to terminate, if it was terminated by a signal.
Available in \fBEpilogSlurmctld\fR only.
.TP
\fBSLURM_JOB_GID\fR
Group ID of the job's owner.
Available in \fBPrologSlurmctld\fR and \fBEpilogSlurmctld\fR only.
.TP
\fBSLURM_JOB_GPUS\fR
GPU IDs allocated to the job (if any).
Available in the \fBProlog\fR only.
.TP
\fBSLURM_JOB_GROUP\fR
Group name of the job's owner.
Available in \fBPrologSlurmctld\fR and \fBEpilogSlurmctld\fR only.
.TP
\fBSLURM_JOB_ID\fR
Job ID.
CAUTION: If this job is the first task of a job array, then Slurm commands using
this job ID will refer to the entire job array rather than this specific task
of the job array.
.TP
\fBSLURM_JOB_NAME\fR
Name of the job.
Available in \fBPrologSlurmctld\fR and \fBEpilogSlurmctld\fR only.
.TP
\fBSLURM_JOB_NODELIST\fR
Nodes assigned to job. A Slurm hostlist expression.
"scontrol show hostnames" can be used to convert this to a
list of individual host names.
Available in \fBPrologSlurmctld\fR and \fBEpilogSlurmctld\fR only.
.TP
\fBSLURM_JOB_PARTITION\fR
Partition that job runs in.
Available in \fBProlog\fR, \fBPrologSlurmctld\fR and \fBEpilogSlurmctld\fR only.
.TP
\fBSLURM_JOB_UID\fR
User ID of the job's owner.
.TP
\fBSLURM_JOB_USER\fR
User name of the job's owner.
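.LP
As an illustration, a minimal hypothetical \fBEpilogSlurmctld\fR script (the
log path is an example only) using several of the variables above; note that
standard output is not preserved, so results are written to a file explicitly:
.br
#!/bin/sh
.br
# Runs once per job on the ControlMachine; append one summary line per job.
.br
echo "job=$SLURM_JOB_ID user=$SLURM_JOB_USER exit=$SLURM_JOB_EXIT_CODE" >> /var/log/slurm/job_summary.log
.br
exit 0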
.SH "NETWORK TOPOLOGY"
Slurm is able to optimize job allocations to minimize network contention.
Special Slurm logic is used to optimize allocations on systems with a
three\-dimensional interconnect (BlueGene, etc.),
and information about configuring those systems is available in the
system\-specific Slurm documentation.
For a hierarchical network, Slurm needs to have detailed information
about how nodes are configured on the network switches.
.LP
Given network topology information, Slurm allocates all of a job's
resources onto a single leaf of the network (if possible) using a best\-fit
algorithm.
Otherwise it will allocate a job's resources onto multiple leaf switches
so as to minimize the use of higher\-level switches.
The \fBTopologyPlugin\fR parameter controls which plugin is used to
collect network topology information.
The only values presently supported are
"topology/3d_torus" (default for IBM BlueGene and
Cray XT/XE systems, performs best\-fit logic over three\-dimensional topology),
"topology/none" (default for other systems,
best\-fit logic over one\-dimensional topology),
"topology/tree" (determine the network topology based
upon information contained in a topology.conf file,
see "man topology.conf" for more information).
Future plugins may gather topology information directly from the network.
The topology information is optional.
If not provided, Slurm will perform a best\-fit algorithm assuming the
nodes are in a one\-dimensional array as configured and the communications
cost is related to the node distance in this array.
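.LP
For example, a hypothetical two\-level tree (switch and node names are
illustrative); the plugin is selected in slurm.conf and the layout is
described in topology.conf (see \fBtopology.conf\fR(5)):
.br
# slurm.conf
.br
TopologyPlugin=topology/tree
.br
# topology.conf
.br
SwitchName=s0 Nodes=tux[0\-15]
.br
SwitchName=s1 Nodes=tux[16\-31]
.br
SwitchName=top Switches=s[0\-1]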
.SH "RELOCATING CONTROLLERS"
If the cluster's computers used for the primary or backup controller
will be out of service for an extended period of time, it may be
desirable to relocate them.
In order to do so, follow this procedure:
.LP
1. Stop the Slurm daemons
.br
2. Modify the slurm.conf file appropriately
.br
3. Distribute the updated slurm.conf file to all nodes
.br
4. Restart the Slurm daemons
.LP
There should be no loss of any running or pending jobs.
Ensure that any nodes added to the cluster have the current
slurm.conf file installed.
.LP
\fBCAUTION:\fR If two nodes are simultaneously configured as the
primary controller (two nodes on which \fBControlMachine\fR specifies
the local host and on which the \fBslurmctld\fR daemon is executing),
system behavior will be destructive.
If a compute node has an incorrect \fBControlMachine\fR or
\fBBackupController\fR parameter, that node may be rendered
unusable, but no other harm will result.
.SH "EXAMPLE"
.LP
#
.br
# Sample /etc/slurm.conf for dev[0\-25].llnl.gov
.br
# Author: John Doe
.br
# Date: 11/06/2001
.br
#
.br
ControlMachine=dev0
.br
ControlAddr=edev0
.br
BackupController=dev1
.br
BackupAddr=edev1
.br
#
.br
AuthType=auth/munge
.br
Epilog=/usr/local/slurm/epilog
.br
Prolog=/usr/local/slurm/prolog
.br
FastSchedule=1
.br
FirstJobId=65536
.br
InactiveLimit=120
.br
JobCompType=jobcomp/filetxt
.br
JobCompLoc=/var/log/slurm/jobcomp
.br
KillWait=30
.br
MaxJobCount=10000
.br
MinJobAge=3600
.br
PluginDir=/usr/local/lib:/usr/local/slurm/lib
.br
ReturnToService=0
.br
SchedulerType=sched/backfill
.br
SlurmctldLogFile=/var/log/slurm/slurmctld.log
.br
SlurmdLogFile=/var/log/slurm/slurmd.log
.br
SlurmctldPort=7002
.br
SlurmdPort=7003
.br
SlurmdSpoolDir=/usr/local/slurm/slurmd.spool
.br
StateSaveLocation=/usr/local/slurm/slurm.state
.br
SwitchType=switch/none
.br
TmpFS=/tmp
.br
WaitTime=30
.br
JobCredentialPrivateKey=/usr/local/slurm/private.key
.br
.na
JobCredentialPublicCertificate=/usr/local/slurm/public.cert
.ad
.br
#
.br
# Node Configurations
.br
#
.br
NodeName=DEFAULT CPUs=2 RealMemory=2000 TmpDisk=64000
.br
NodeName=DEFAULT State=UNKNOWN
.br
NodeName=dev[0\-25] NodeAddr=edev[0\-25] Weight=16
.br
# Update records for specific DOWN nodes
.br
DownNodes=dev20 State=DOWN Reason="power,ETA=Dec25"
.br
#
.br
# Partition Configurations
.br
#
.br
PartitionName=DEFAULT MaxTime=30 MaxNodes=10 State=UP
.br
PartitionName=debug Nodes=dev[0\-8,18\-25] Default=YES
.br
PartitionName=batch Nodes=dev[9\-17] MinNodes=4
.br
PartitionName=long Nodes=dev[9\-17] MaxTime=120 AllowGroups=admin
.SH "INCLUDE MODIFIERS"
The "include" key word can be used with modifiers within the specified
pathname. These modifiers will be replaced with the cluster name or other
information depending on which modifier is specified. If the included file
is not an absolute path name (i.e. it does not start with a slash), it will
be searched for in the same directory as the slurm.conf file.
.TP
\fB%c\fR
Cluster name specified in the slurm.conf will be used.
.TP
\fBEXAMPLE\fR
ClusterName=linux
.br
include /home/slurm/etc/%c_config
.br
# Above line interpreted as
.br
# "include /home/slurm/etc/linux_config"
.SH "FILE AND DIRECTORY PERMISSIONS"
There are three classes of files:
Files used by \fBslurmctld\fR must be accessible by user \fBSlurmUser\fR
and accessible by the primary and backup control machines.
Files used by \fBslurmd\fR must be accessible by user root and
accessible from every compute node.
A few files need to be accessible by normal users on all login and
compute nodes.
While many files and directories are listed below, most of them will
not be used with most configurations.
.TP
\fBAccountingStorageLoc\fR
If this specifies a file, it must be writable by user \fBSlurmUser\fR.
The file must be accessible by the primary and backup control machines.
It is recommended that the file be readable by all users from login and
compute nodes.
.TP
\fBEpilog\fR
Must be executable by user root.
It is recommended that the file be readable by all users.
The file must exist on every compute node.
.TP
\fBEpilogSlurmctld\fR
Must be executable by user \fBSlurmUser\fR.
It is recommended that the file be readable by all users.
The file must be accessible by the primary and backup control machines.
.TP
\fBHealthCheckProgram\fR
Must be executable by user root.
It is recommended that the file be readable by all users.
The file must exist on every compute node.
.TP
\fBJobCheckpointDir\fR
Must be writable by user \fBSlurmUser\fR and no other users.
The file must be accessible by the primary and backup control machines.
.TP
\fBJobCompLoc\fR
If this specifies a file, it must be writable by user \fBSlurmUser\fR.
The file must be accessible by the primary and backup control machines.
.TP
\fBJobCredentialPrivateKey\fR
Must be readable only by user \fBSlurmUser\fR and writable by no other users.
The file must be accessible by the primary and backup control machines.
.TP
\fBJobCredentialPublicCertificate\fR
Readable to all users on all nodes.
Must not be writable by regular users.
.TP
\fBMailProg\fR
Must be executable by user \fBSlurmUser\fR.
Must not be writable by regular users.
The file must be accessible by the primary and backup control machines.
.TP
\fBProlog\fR
Must be executable by user root.
It is recommended that the file be readable by all users.
The file must exist on every compute node.
.TP
\fBPrologSlurmctld\fR
Must be executable by user \fBSlurmUser\fR.
It is recommended that the file be readable by all users.
The file must be accessible by the primary and backup control machines.
.TP
\fBResumeProgram\fR
Must be executable by user \fBSlurmUser\fR.
The file must be accessible by the primary and backup control machines.
.TP
\fBSallocDefaultCommand\fR
Must be executable by all users.
The file must exist on every login and compute node.
.TP
\fBslurm.conf\fR
Readable to all users on all nodes.
Must not be writable by regular users.
.TP
\fBSlurmctldLogFile\fR
Must be writable by user \fBSlurmUser\fR.
The file must be accessible by the primary and backup control machines.
.TP
\fBSlurmctldPidFile\fR
Must be writable by user root.
Preferably writable and removable by \fBSlurmUser\fR.
The file must be accessible by the primary and backup control machines.
.TP
\fBSlurmdLogFile\fR
Must be writable by user root.
A distinct file must exist on each compute node.
.TP
\fBSlurmdPidFile\fR
Must be writable by user root.
A distinct file must exist on each compute node.
.TP
\fBSlurmdSpoolDir\fR
Must be writable by user root.
A distinct directory must exist on each compute node.
.TP
\fBSrunEpilog\fR
Must be executable by all users.
The file must exist on every login and compute node.
.TP
\fBSrunProlog\fR
Must be executable by all users.
The file must exist on every login and compute node.
.TP
\fBStateSaveLocation\fR
Must be writable by user \fBSlurmUser\fR.
The file must be accessible by the primary and backup control machines.
.TP
\fBSuspendProgram\fR
Must be executable by user \fBSlurmUser\fR.
The file must be accessible by the primary and backup control machines.
.TP
\fBTaskEpilog\fR
Must be executable by all users.
The file must exist on every compute node.
.TP
\fBTaskProlog\fR
Must be executable by all users.
The file must exist on every compute node.
.TP
\fBUnkillableStepProgram\fR
Must be executable by user \fBSlurmUser\fR.
The file must be accessible by the primary and backup control machines.
.SH "LOGGING"
.LP
Note that while Slurm daemons create log files and other files as needed,
they treat the lack of parent directories as a fatal error.
This prevents the daemons from running if critical file systems are
not mounted and will minimize the risk of cold\-starting (starting
without preserving jobs).
.LP
Log files and job accounting files
may need to be created/owned by the "SlurmUser" uid to be successfully
accessed. Use the "chown" and "chmod" commands to set the ownership
and permissions appropriately.
See the section \fBFILE AND DIRECTORY PERMISSIONS\fR for information
about the various files and directories used by Slurm.
.LP
It is recommended that the logrotate utility be used to ensure that
various log files do not become too large.
This also applies to text files used for accounting,
process tracking, and the slurmdbd log if they are used.
.LP
Here is a sample logrotate configuration. Make appropriate site modifications
and save as /etc/logrotate.d/slurm on all nodes.
See the \fBlogrotate\fR man page for more details.
.LP
##
.br
# Slurm Logrotate Configuration
.br
##
.br
/var/log/slurm/*log {
.br
compress
.br
missingok
.br
nocopytruncate
.br
nocreate
.br
nodelaycompress
.br
nomail
.br
notifempty
.br
noolddir
.br
rotate 5
.br
sharedscripts
.br
size=5M
.br
create 640 slurm root
.br
postrotate
.br
/etc/init.d/slurm reconfig
.br
endscript
.br
}
.br
.SH "COPYING"
Copyright (C) 2002\-2007 The Regents of the University of California.
Produced at Lawrence Livermore National Laboratory (cf, DISCLAIMER).
.br
Copyright (C) 2008\-2010 Lawrence Livermore National Security.
.br
Copyright (C) 2010-2016 SchedMD LLC.
.LP
This file is part of Slurm, a resource management program.
For details, see \fIhttps://slurm.schedmd.com/\fR.
.LP
Slurm is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free
Software Foundation; either version 2 of the License, or (at your option)
any later version.
.LP
Slurm is distributed in the hope that it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
details.
.SH "FILES"
/etc/slurm.conf
.SH "SEE ALSO"
.LP
\fBbluegene.conf\fR(5), \fBcgroup.conf\fR(5), \fBgethostbyname\fR(3),
\fBgetrlimit\fR(2), \fBgres.conf\fR(5), \fBgroup\fR(5), \fBhostname\fR(1),
\fBscontrol\fR(1), \fBslurmctld\fR(8), \fBslurmd\fR(8),
\fBslurmdbd\fR(8), \fBslurmdbd.conf\fR(5), \fBsrun\fR(1),
\fBspank\fR(8), \fBsyslog\fR(2), \fBtopology.conf\fR(5)