Running your code¶
SLURM commands guide¶
Basic Usage¶
The SLURM documentation provides extensive information on the available commands to query the cluster status or submit jobs.
Below are some basic examples of how to use SLURM.
Submitting jobs¶
Batch job¶
In order to submit a batch job, you have to create a script containing the main command(s) you would like to execute on the allocated resources/nodes.
Your job script is then submitted to SLURM with sbatch.
The working directory of the job will be the one where you executed sbatch.
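For example, assuming your script is saved as `job_script.sh` (a stand-in name):

```bash
sbatch job_script.sh
```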
Tip
Slurm directives can be specified on the command line alongside sbatch or
inside the job script with a line starting with #SBATCH.
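For example, a time limit can be set either way (`job_script.sh` stands in for your own script):

```bash
# Inside job_script.sh, as a directive line:
#SBATCH --time=1:00:00
```

```bash
# Or equivalently, on the command line:
sbatch --time=1:00:00 job_script.sh
```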
Interactive job¶
Workload managers usually run batch jobs so that you don't have to watch their progress; the scheduler runs them as soon as resources are available. If you want access to a shell while leveraging cluster resources, you can submit an interactive job, where the main executable is a shell, with the srun or salloc commands.
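For example, to get a shell with srun:

```bash
srun --pty bash
```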
This will start an interactive job on the first available node with the default
resources set in SLURM (1 task/1 CPU). srun accepts the same arguments as
sbatch, with the exception that the environment is not passed.
Tip
To pass your current environment to an interactive job, add
--preserve-env to srun.
salloc can also be used; invoked without additional arguments it is mostly a
wrapper around srun, but it gives more flexibility if, for example, you want to
get an allocation on multiple nodes.
Job submission arguments¶
In order to accurately select the resources for your job, several arguments are available. The most important ones are:

| Argument | Description |
|---|---|
| `-n, --ntasks=<number>` | The number of tasks in your script, usually =1 |
| `-c, --cpus-per-task=<ncpus>` | The number of cores for each task |
| `-t, --time=<time>` | Time requested for your job |
| `--mem=<size[units]>` | Memory requested for all your tasks |
| `--gres=<list>` | Select generic resources such as GPUs for your job: `--gres=gpu:GPU_MODEL` |
Tip
Always request an amount of resources adequate for your job; this improves scheduling overall, and smaller jobs generally start sooner.
Checking job status¶
To display jobs currently in the queue, use squeue; to list only your own jobs, filter by your username:
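```bash
# All queued jobs:
squeue

# Only your jobs:
squeue -u $USER
```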
Note
A user may have at most 1000 jobs submitted to the system at any given time (MaxSubmitJobs=1000) per association. If this limit is reached, new submission requests will be denied until existing jobs in this association complete.
Removing a job¶
To cancel your job, simply use scancel with the job ID: `scancel <job_id>`
Partitioning¶
Since we don't have many GPUs on the cluster, resources must be shared as fairly
as possible. The --partition (or -p) flag of SLURM allows you to set the
priority you need for a job. Each job assigned a priority can preempt jobs
with a lower priority: unkillable > main > long. Once preempted, your job is
killed without notice and is automatically re-queued on the same partition until
resources are available. (To leverage a different preemption mechanism, see
Handling preemption.)
| Flag | Max Resource Usage | Max Time | Note |
|---|---|---|---|
| `--partition=unkillable` | 6 CPUs, mem=32G, 1 GPU | 2 days | |
| `--partition=unkillable-cpu` | 2 CPUs, mem=16G | 2 days | CPU-only jobs |
| `--partition=short-unkillable` | mem=1000G, 4 GPUs | 3 hours (!) | Large but short jobs. Restricted to 4-GPU nodes only |
| `--partition=main` | 8 CPUs, mem=48G, 2 GPUs | 5 days | |
| `--partition=main-cpu` | 8 CPUs, mem=64G | 5 days | CPU-only jobs |
| `--partition=long` | no limit of resources | 7 days | |
| `--partition=long-cpu` | no limit of resources | 7 days | CPU-only jobs |
Important: H100 GPUs Partition Restrictions
H100 GPUs are ONLY available in the short-unkillable partition.
The short-unkillable partition is restricted to 4-GPU nodes only,
specifically:
- cn-g nodes: A100 80GB GPUs (4 GPUs per node)
- cn-l nodes: L40S GPUs (4 GPUs per node)
As an exception, it also contains the H100 nodes:
- cn-n nodes: H100 GPUs (8 GPUs per node, but only 4 can be used per job)
For a complete list of node specifications and GPU details, see Node profile description.
About outdated partitions (cpu_jobs, cpu_jobs_low, etc.)
Historically, before the 2022 introduction of CPU-only nodes (e.g. the cn-f
series), CPU jobs ran side by side with GPU jobs on GPU nodes. To prevent them
from obstructing any GPU job, they were always lowest-priority and preemptible.
This was implemented by automatically assigning them to one of the now-obsolete
partitions cpu_jobs, cpu_jobs_low or cpu_jobs_low-grace.
Do not use these partition names anymore. Prefer the *-cpu partition
names defined above.
For backwards-compatibility purposes, the legacy partition names are translated
to their effective equivalent, long-cpu, but they will eventually be removed
entirely.
Note
As a convenience, should you request the unkillable, main or long
partition for a CPU-only job, the partition will be translated to its -cpu
equivalent automatically.
For instance, to request an unkillable job with 1 GPU, 4 CPUs, 10G of RAM and 12h of computation, do:
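A minimal sketch, where `job.sh` stands in for your own script:

```bash
sbatch --partition=unkillable --gres=gpu:1 --cpus-per-task=4 --mem=10G --time=12:00:00 job.sh
```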
You can also make it an interactive job using salloc:
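```bash
salloc --partition=unkillable --gres=gpu:1 --cpus-per-task=4 --mem=10G --time=12:00:00
```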
The Mila cluster has many different types of nodes/GPUs. To request a specific type of node/GPU, you can add specific feature requirements to your job submission command.
To access those special nodes you need to request them explicitly by adding the
flag --constraint=<name>. The full list of nodes in the Mila Cluster can be
accessed at Node profile description.
Examples:
To request a machine with 2 GPUs using NVLink, you can use
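```bash
# nvlink is the feature name from the table below; job.sh stands in for your script
sbatch --gres=gpu:2 --constraint=nvlink job.sh
```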
To request a DGX system with 8 A100 GPUs, you can use
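One way to express this is to combine the `dgx` and `ampere` features from the table below (A100 GPUs use the Ampere architecture):

```bash
sbatch --gres=gpu:8 --constraint="dgx&ampere" job.sh
```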
| Feature | Particularities |
|---|---|
| 12gb/32gb/40gb/48gb/80gb | Request a specific amount of GPU memory |
| volta/turing/ampere | Request a specific GPU architecture |
| nvlink | Machine with GPUs using the NVLink interconnect technology |
| dgx | NVIDIA DGX system with DGX OS |
Partition details and GPU availability¶
The following table provides a quick reference guide for choosing partitions and understanding GPU availability:
| Partition | When to use | Available GPUs |
|---|---|---|
| `unkillable` | High-priority jobs that cannot be interrupted. Maximum 2 days runtime. | All GPU types |
| `short-unkillable` | Large, short jobs (3 hours max) that need high priority and cannot be interrupted. | Restricted to 4-GPU nodes only |
| `main` | Standard priority jobs with moderate runtime needs (5 days max). | All GPU types |
| `long` | Long-running jobs (7 days max) that can tolerate preemption. | All GPU types except H100 |
| `*-cpu` | CPU-only jobs (no GPU required). | N/A (CPU-only nodes) |
Information on partitions/nodes¶
sinfo provides most of the
information about available nodes and partitions/queues to submit jobs to.
Partitions are groups of nodes usually sharing similar features. On a partition, job limits can be applied that override those requested for a job (e.g. max time, max CPUs, etc.)
To display available partitions, simply use
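```bash
sinfo
```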
To display available nodes and their status, you can use
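```bash
sinfo -N -l
```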
And to get statistics on a running or terminated job, use sacct with the fields
you want to display:
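For example (pick any fields supported by sacct's `--format` option):

```bash
sacct -j <job_id> --format=JobID,JobName,Partition,State,Elapsed,MaxRSS
```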
Or to get the list of all your previous jobs, use the --start=YYYY-MM-DD flag. You can check sacct(1) for further information about additional time formats.
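```bash
sacct -u $USER --start=YYYY-MM-DD
```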
scontrol can be used to
provide specific information on a job (currently running or recently terminated):
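```bash
scontrol show job <job_id>
```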
Or for more info on a node and its resources:
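```bash
scontrol show node <node_name>
```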
Useful Commands¶
The commands below are typical invocations; values in angle brackets are placeholders for your own job or node IDs.

| Command | Description |
|---|---|
| `salloc` | Get an interactive job and give you a shell (SSH-like). CPU only |
| `salloc --gres=gpu:1 -c 2 --mem=12000` | Get an interactive job with one GPU, 2 CPUs and 12000 MB RAM |
| `sbatch job.sh` | Start a batch job (same options as salloc) |
| `srun --jobid <job_id> --pty bash` | Re-attach a dropped interactive job |
| `sinfo` | Status of all nodes |
| `sinfo -O nodelist,gres,features` | List GPU types and features that you can request |
| `savail` | (Custom) List available GPUs |
| `scancel <job_id>` | Cancel a job |
| `squeue` | Summary status of all active jobs |
| `squeue -u $USER` | Summary status of all YOUR active jobs |
| `squeue -j <job_id>` | Summary status of a specific job |
| `squeue -o "%i %u %j %t %C %m %b"` | Status of all jobs including requested resources (see the SLURM squeue doc for all output options) |
| `scontrol show job <job_id>` | Detailed status of a running job |
| `sacct -j <job_id> --format=nodelist` | Get the node where a finished job ran |
| `sacct -u $USER --start=YYYY-MM-DD` | Find info about old jobs |
| `sacct` | List of current and recent jobs |
Special GPU requirements¶
Specific GPU architecture and memory can be easily requested through the
--gres flag by using either
- `--gres=gpu:architecture:number`
- `--gres=gpu:memory:number`
- `--gres=gpu:model:number`
Example:
To request 1 GPU with at least 48GB of memory use
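```bash
# 48gb is one of the GPU-memory feature names listed above
sbatch --gres=gpu:48gb:1 job.sh
```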
The full list of GPUs and their features can be accessed here.
Example script¶
Here is an sbatch script that follows good practices on the Mila cluster:
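A sketch along these lines; names in angle brackets are placeholders, the module and environment names are assumptions for illustration, and `$SCRATCH` is assumed to point to your scratch space:

```bash
#!/bin/bash
#SBATCH --partition=unkillable            # Ask for an unkillable job
#SBATCH --cpus-per-task=2                 # Ask for 2 CPUs
#SBATCH --gres=gpu:1                      # Ask for 1 GPU
#SBATCH --mem=10G                         # Ask for 10 GB of RAM
#SBATCH --time=3:00:00                    # The job will run for 3 hours

# 1. Load your environment (<env_name> is a placeholder; the module name
#    may differ on your cluster)
module load anaconda/3
conda activate <env_name>

# 2. Copy your dataset to the fast local storage of the compute node
cp -r /network/datasets/<dataset> $SLURM_TMPDIR

# 3. Launch your job; read data from and write checkpoints to $SLURM_TMPDIR
python main.py --data_path "$SLURM_TMPDIR/<dataset>" --checkpoint_path "$SLURM_TMPDIR"

# 4. Copy whatever you want to keep back to persistent storage
cp -r "$SLURM_TMPDIR/<to_keep>" "$SCRATCH/"
```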
Note
This example is a bit outdated and uses Conda. In practice, we now recommend that you use uv to manage your Python environments. See the Minimal Examples Section for more information.