Configurations

Attention

The outputs below are shown as examples of format, not of content: the latter may change with the maintenance and evolution of the computing platform.

Users are invited to run the commands themselves on the interactive servers to obtain up-to-date information.

User information

There is a distinction between the notions of group and account. The former is the Unix group corresponding to the collaboration the user is a member of: it therefore corresponds to an experiment or a collaboration in which the user participates. The account corresponds to the entity that will be charged for the resources used by the job.

To display all the accounts a user is attached to and the QoS these accounts are allowed to use:

% sacctmgr show user withassoc <userid> format=Account,QOS%30

where <userid> is the user identifier.

Note

Generally, the sacctmgr command allows you to display and modify all the information related to the accounts. For more details on the command, please refer to its help: sacctmgr -h.

By default, the active account is set to the user's main group. For confirmation, or to switch from one default account to another, please refer to the syntax suggested in Account management to temporarily change the main group. To submit under a different account without modifying the main group, use the -A | --account= option, as shown below.
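
For example, assuming a hypothetical submission script job.sh and a hypothetical account named myexperiment, the account may be selected at submission time as follows:

% sbatch --account=myexperiment job.sh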

Partitions

A partition is a computing resource that groups nodes into a single logical entity defined by one or more given characteristics (whether physical or resource-related).

To get a quick overview of the different partitions, you may use the sinfo command:

% sinfo
PARTITION       AVAIL  TIMELIMIT  NODES  STATE NODELIST
htc*               up   infinite      1   drng ccwslurm0130
htc*               up   infinite    139    mix ccwslurm[...]
htc*               up   infinite     50  alloc ccwslurm[...]
htc_interactive    up   infinite      1    mix ccwislurm0001
htc_interactive    up   infinite      1   idle ccwislurm0002
htc_highmem        up   infinite      1    mix ccwmslurm0001
gpu                up   infinite      6    mix ccwgslurm[0002,0100-0104]
gpu_interactive    up   infinite      2    mix ccwgislurm[0001,0100]
hpc                up   infinite      2  alloc ccwpslurm[0001-0002]
flash              up   infinite      1    mix ccwslurm0001
htc_daemon         up   infinite      1    mix ccwslurm0001
dask               up   infinite    139    mix ccwslurm[...]

There are three main distinct partitions: htc, hpc and gpu, as well as their equivalents for interactive jobs: htc_interactive, hpc_interactive and gpu_interactive. Each of these partitions corresponds to one of the three computing platforms described on the page dedicated to the computing platforms.

The flash partition dedicates a whole node to job testing and debugging. It is limited to 1 hour by its QoS.

The htc_highmem partition is dedicated to jobs that need a large amount of memory and allows a higher memory limit per job.

The htc_daemon partition is generally used to run monitoring or orchestration jobs: very long-running, but limited in resources. It is limited by its QoS to 10 jobs per user.

The dask partition is dedicated to the Dask functionality on the Jupyter Notebook Platform. This partition shares the same compute servers as htc.

Note

Put simply, single-core and multi-core jobs are executed in the htc partition, parallel jobs using InfiniBand in the hpc partition, and access to the GPUs is provided through the gpu partition. Access to this last partition is restricted and depends on the resource request made by your computing group. Please contact user support for any additional information.

Details on submission resource limitations are described in the Required parameter limits paragraph.

The sinfo command also indicates the job execution time limit, the compute servers belonging to each of these partitions, and their states.

The main options of the sinfo command are:

-a

displays information about all partitions, including hidden ones

-d

displays only the off-line (non-responding) compute servers

-l

displays the output in a long format

-p <partition>

displays the information for a specific partition

-O "<output fields>"

displays the specified fields in the output. For the list of available fields, please run the command man sinfo
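
For example, to display the long-format output restricted to a single partition, or to select specific output fields (the gpu partition and the field names below are only illustrative; check man sinfo for the available fields):

% sinfo -l -p gpu
% sinfo -p gpu -O "PartitionName,NodeList,StateLong"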

To display and read a partition's detailed configuration, you may use scontrol:

% scontrol show partition
PartitionName=htc
   AllowGroups=ALL AllowAccounts=ALL AllowQos=ALL
   AllocNodes=ALL Default=YES QoS=N/A
   DefaultTime=NONE DisableRootJobs=YES ExclusiveUser=NO GraceTime=0 Hidden=NO
   MaxNodes=1 MaxTime=UNLIMITED MinNodes=0 LLN=YES MaxCPUsPerNode=UNLIMITED
   NodeSets=htc
   Nodes=ccwslurm[0002-0143]
   PriorityJobFactor=1 PriorityTier=1 RootOnly=NO ReqResv=NO OverSubscribe=NO
   OverTimeLimit=NONE PreemptMode=OFF
   State=UP TotalCPUs=120 TotalNodes=3 SelectTypeParameters=NONE
   JobDefaults=(null)
   DefMemPerCPU=1000 MaxMemPerNode=UNLIMITED
   TRES=cpu=21376,mem=64458872M,node=382,billing=21376

   [...]

The command gives the main characteristics of the partitions:
  • authorized groups and accounts,

  • the associated qualities of service (see below),

  • the available resources in the partition,

  • the limits in terms of partition resources.
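
To restrict the output to a single partition, append the partition name to the command (the htc partition is used here purely as an example):

% scontrol show partition htc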

Note

In practice, when submitting a job, we can specify the partition and the account to use with the options --partition and --account respectively. Without any specification, Slurm will opt for the default partition, i.e. htc, and the user’s main account.
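
For example, to submit a batch job to the hpc partition instead of the default htc partition (the script name my_job.sh is hypothetical):

% sbatch --partition=hpc my_job.sh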

Quality of service

The quality of service (QoS) is a rule associated with a partition or a job that alters its behaviour: it can, for example, modify the priority of a job or limit the allocated resources. The scontrol command shown in the partitions paragraph also allows you to view the QoS applied to a given partition.

In order to list the available QoS, you may use the command sacctmgr:

% sacctmgr show qos format=Name,Priority,MaxWall,MaxSubmitPU
     Name   Priority     MaxWall MaxSubmitPU
---------- ---------- ----------- -----------
    normal          0  7-00:00:00        3000
     flash          0    01:00:00          10
       gpu          0  7-00:00:00         100
    daemon          0 90-00:00:00          10
      dask       1000    08:00:00

Here, the format option restricts the output to the name, priority, maximum execution time and maximum number of submitted jobs per user.

The normal QoS is applied by default to all jobs; it limits the execution time to a maximum of 7 days. The gpu QoS has the same time limit but is restricted to 100 submitted jobs per user at a time. The flash QoS limits the execution time to 1 hour and the number of simultaneous jobs to 10 per user. The daemon QoS is used with the htc_daemon partition and is useful to execute light processes that need to run for a long time; it is also limited to 10 jobs per user. To summarize:

normal

is used with htc, htc_interactive and hpc partitions.

gpu

is used with gpu and gpu_interactive partitions.

flash

is used only with flash partition.

daemon

is used only with htc_daemon partition.

Note

As a result, upon submission you simply need to set the partition; the corresponding QoS will be applied automatically.
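
As an illustration, a job submitted to the flash partition (with a hypothetical script test_job.sh) is automatically placed under the flash QoS; the QoS applied to your jobs can then be checked with squeue:

% sbatch -p flash test_job.sh
% squeue -u $USER -o "%i %P %q"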

Nodes

Nodes are the physical machines hosting the computing resources such as CPU and memory. To obtain detailed information about a given node of the computing platform, use the command below (example with the node ccwslurm0002):

% scontrol show node ccwslurm0002
NodeName=ccwslurm0002 Arch=x86_64 CoresPerSocket=1
   CPUAlloc=64 CPUTot=64 CPULoad=40.44
   AvailableFeatures=htc
   ActiveFeatures=htc
   Gres=(null)
   NodeAddr=ccwslurm0002 NodeHostName=ccwslurm0002 Version=21.08.8-2
   OS=Linux 3.10.0-1160.80.1.el7.x86_64 #1 SMP Tue Nov 8 15:48:59 UTC 2022
   RealMemory=192932 AllocMem=143640 FreeMem=19263 Sockets=64 Boards=1
   State=ALLOCATED ThreadsPerCore=1 TmpDisk=0 Weight=1 Owner=N/A MCS_label=N/A
   Partitions=htc,dask
   BootTime=2022-11-21T10:23:40 SlurmdStartTime=2022-11-21T10:23:56
   LastBusyTime=2022-11-21T11:25:35
   CfgTRES=cpu=64,mem=192932M,billing=64
   AllocTRES=cpu=64,mem=143640M
   CapWatts=n/a
   CurrentWatts=0 AveWatts=0
   ExtSensorsJoules=n/s ExtSensorsWatts=0 ExtSensorsTemp=n/s
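
To obtain a node-oriented summary of a whole partition rather than a single node, the sinfo command can also be used (the htc partition is only illustrative):

% sinfo -N -l -p htc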