Computing infrastructure and policies

This section provides factual information about the Mila cluster computing environments and the policies that govern their use.

Roles and authorizations

There are two main researcher statuses at Mila:

  1. Core researchers

  2. Affiliated researchers

This is determined by Mila policy. Core researchers have access to the Mila computing cluster. Check your supervisor’s Mila status to determine your own status.

Overview of available computing resources at Mila

The Mila cluster is to be used for regular development and a relatively small number of jobs (< 5). It is a heterogeneous cluster and uses SLURM to schedule jobs.

Mila cluster versus Compute Canada clusters

There are many commonalities between the Mila cluster and the clusters from Compute Canada (CC). At this time, the CC clusters on which we have a large allocation of resources are beluga, cedar and graham. We also have comparable computational resources in the Mila cluster, with more to come.

The main distinguishing factor is that we have more control over our own cluster than we have over the ones at Compute Canada. Notably, the compute nodes in the Mila cluster all have unrestricted access to the Internet, which is not the case in general for CC clusters (although cedar does allow it).

At the time of this writing (June 2021), Mila students are advised to use a mix of Mila and CC clusters. This is especially useful when your favorite cluster is oversubscribed, because you can easily switch over to a different one you are already familiar with.

Guarantees about one GPU as absolute minimum

The Mila cluster tries to guarantee a minimum of one GPU per student, at all times, for use in interactive mode. This is strictly better than “one GPU per student on average” because it is a floor: at any time, you should be able to ask for your GPU and get it right away (although it might take a minute for the request to be processed by SLURM).
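As a sketch, a minimal interactive request for that single GPU might look like the command below. The CPU, memory and time values are illustrative assumptions, not prescribed values; adjust them to your needs.

```shell
# Request an interactive shell with one GPU (flag values are illustrative).
srun --gres=gpu:1 --cpus-per-task=2 --mem=12G --time=2:00:00 --pty bash
```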

Interactive sessions are possible on the CC clusters, and there are generally special rules that allow you to get resources more easily if you request them for a very short duration (for testing code before queueing long jobs). You do not get the same guarantee as on the Mila cluster, however.

Node profile description

| Name | GPU Model | GPU # | CPUs | Sockets | Cores/Socket | Threads/Core | Memory (GB) | TmpDisk (TB) | Arch | Slurm Features (GPU Arch and Memory) |
|---|---|---|---|---|---|---|---|---|---|---|
| GPU Compute Nodes | | | | | | | | | | |
| cn-a[001-011] | RTX8000 | 8 | 40 | 2 | 20 | 1 | 384 | 3.6 | x86_64 | turing,48gb |
| cn-b[001-005] | V100 | 8 | 40 | 2 | 20 | 1 | 384 | 3.6 | x86_64 | volta,nvlink,32gb |
| cn-c[001-040] | RTX8000 | 8 | 64 | 2 | 32 | 1 | 384 | 3 | x86_64 | turing,48gb |
| DGX Systems | | | | | | | | | | |
| cn-d[001-002] | A100 | 8 | 128 | 2 | 64 | 1 | 1024 | 14 | x86_64 | ampere,nvlink,40gb |
| cn-e001 | V100 | 8 | 40 | 2 | 20 | 1 | 512 | 7 | x86_64 | volta,16gb |
| cn-e[002-003] | V100 | 8 | 40 | 2 | 20 | 1 | 512 | 7 | x86_64 | volta,32gb |
| Legacy GPU Compute Nodes | | | | | | | | | | |
| kepler5 | V100 | 2 | 16 | 2 | 4 | 2 | 256 | 3.6 | x86_64 | volta,16gb |
| TITAN RTX | | | | | | | | | | |
| rtx[1,3-5,7] | titanrtx | 2 | 20 | 1 | 10 | 2 | 128 | 0.93 | x86_64 | turing,24gb |
| POWER9 | | | | | | | | | | |
| power9[1-2] | V100 | 4 | 128 | 2 | 16 | 4 | 586 | 0.88 | power9 | volta,nvlink,16gb |

Special nodes and outliers

DGX A100

DGX A100 nodes are NVIDIA appliances with 8 NVIDIA A100 Tensor Core GPUs. Each GPU has 40 GB of memory, for a total of 320 GB per appliance. The GPUs are interconnected via 6 NVSwitches, which allow 4.8 TB/s of bi-directional bandwidth.

In order to run jobs on a DGX A100, add the flags below to your Slurm commands:

--gres=gpu:a100:<number> --reservation=DGXA100
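Put together, an interactive job on a DGX node might be requested as in the sketch below; the GPU count is illustrative.

```shell
# Request two A100 GPUs on the DGX reservation (count is illustrative).
srun --gres=gpu:a100:2 --reservation=DGXA100 --pty bash
```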

Power9

Power9 nodes use a different processor instruction set than the Intel and AMD (x86_64) based nodes. As such, you need to set up your environment again specifically for those nodes.

  • Power9 nodes have 128 threads. (2 processors / 16 cores / 4 way SMT)

  • 4 x V100 SMX2 (16 GB) with NVLink

  • In a Power9 node, GPUs and CPUs communicate with each other using NVLink instead of PCIe, which allows for fast communication between them. More on Large Model Support (LMS)

Power9 nodes have the same software stack as the regular nodes, and you can deploy your environment on them in the same way as on a regular node.
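Since x86_64 builds cannot run on Power9 (ppc64le) and vice versa, one way to handle both architectures in a shared job script is to branch on the machine type. This is only a sketch; the environment names below are hypothetical placeholders.

```shell
# Pick an environment name based on the node architecture, since x86_64
# builds cannot run on Power9 (ppc64le) and vice versa.
arch="$(uname -m)"
case "$arch" in
    ppc64le) env_name="myenv-power9" ;;  # hypothetical Power9 build of your env
    x86_64)  env_name="myenv-x86" ;;     # hypothetical x86_64 build
    *)       env_name="myenv-$arch" ;;
esac
echo "would activate: $env_name"         # in practice: conda activate "$env_name"
```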

AMD

Warning

As of August 20, 2019, the GPUs had to be returned to AMD. Mila will receive more samples. You can join the AMD Slack channels to get the latest information.

Mila has a few nodes equipped with MI50 GPUs.

srun --gres=gpu -c 8 --reservation=AMD --pty bash

# First-time setup of the AMD ROCm stack
conda create -n rocm python=3.6
conda activate rocm

pip install tensorflow-rocm
pip install /wheels/pytorch/torch-1.1.0a0+d8b9d32-cp36-cp36m-linux_x86_64.whl

Data sharing policies

Note

/network/scratch aims to support Access Control Lists (ACLs) to allow collaborative work on rapidly changing data, e.g. work-in-progress datasets, model checkpoints, etc.

/network/projects aims to offer a collaborative space for long-term projects. Data that should be kept for a longer period than 90 days can be stored in that location, but a request must first be made to Mila’s helpdesk to create the project directory.

Monitoring

Every compute node on the Mila cluster has a Netdata monitoring daemon allowing you to get a sense of the state of the node. This information is exposed in two ways:

  • For every node, there is a web interface from Netdata itself at <node>.server.mila.quebec:19999. This is accessible only when using the Mila wifi or through SSH tunnelling.

    • SSH tunnelling: on your local machine, run

      • ssh -L 19999:<node>.server.mila.quebec:19999 -p 2222 login.server.mila.quebec

      • or ssh -L 19999:<node>.server.mila.quebec:19999 mila if you have already set up your SSH login,

    • then open http://localhost:19999 in your browser.

  • The Mila dashboard at dashboard.server.mila.quebec exposes aggregated statistics with the use of Grafana. These are collected internally by an instance of Prometheus.

In both cases, those graphs are not editable by individual users, but they provide valuable insight into the state of the whole cluster or the individual nodes. One of the important uses is to collect data about the health of the Mila cluster and to sound the alarm if outages occur (e.g. if the nodes crash or if GPUs mysteriously become unavailable for SLURM).

Example with Netdata on cn-c001

For example, if we have a job running on cn-c001, we can type cn-c001.server.mila.quebec:19999 in a browser address bar and the following page will appear.

monitoring.png

Example watching the CPU/RAM/GPU usage

Given that compute nodes are generally shared with other users who are also running jobs at the same time and consuming resources, this is not generally a good way to profile your code in fine detail. However, it can still be a very useful source of information for getting an idea of whether the machine that you requested is being used to its full capacity.

Given how expensive the GPUs are, it generally makes sense to try to make sure that this resource is always kept busy.

  • CPU
    • iowait (pink line): High values mean your model is waiting on IO a lot (disk or network).

monitoring_cpu.png
  • CPU RAM
    • You can see how much CPU RAM is being used by your script in practice, considering the amount that you requested (e.g. `sbatch --mem=8G ...`).

    • GPU usage is generally more important to monitor than CPU RAM. You should not cut it so close to the limit that your experiments randomly fail because they run out of RAM. However, you should not blindly request 32GB of RAM when you actually require only 8GB.

monitoring_ram.png
  • GPU
    • Monitors the GPU usage using an nvidia-smi plugin for Netdata.

    • Under the plugin interface, select the GPU number that was allocated to you. You can figure this out by running echo $SLURM_JOB_GPUS on the allocated node or, if you have the job ID, scontrol show -d job YOUR_JOB_ID | grep 'GRES' and checking IDX.

    • You should make sure you use the GPUs to their fullest capacity.

    • Select the biggest batch size if possible to increase GPU memory usage and the GPU computational load.

    • Spawn multiple experiments if you can fit many on a single GPU. Running 10 independent MNIST experiments on a single GPU will probably take less than 10x the time to run a single one. This assumes that you have more experiments to run, because nothing is gained by gratuitously running experiments.

    • You can request a less powerful GPU and leave the more powerful GPUs to other researchers who have experiments that can make best use of them. Sometimes you really just need a k80 and not a v100.

monitoring_gpu.png
  • Other users or jobs
    • If the node seems unresponsive or slow, it may be useful to check what other tasks are running at the same time on that node. This should not be an issue in general, but in practice it is useful to be able to inspect this to diagnose certain problems.

monitoring_users.png
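The advice above about packing several small experiments onto a single GPU can be sketched as a simple shell loop. The echo command below is only a stand-in for a real training command such as python train.py --seed $seed (hypothetical); the point is the background-launch-then-wait pattern.

```shell
# Launch several independent runs in the background on the same allocation,
# then wait for all of them to finish. The echo is a placeholder for a real
# training command (e.g. "python train.py --seed $seed", hypothetical).
for seed in 0 1 2 3; do
    sh -c "echo running experiment with seed=$seed" &
done
wait   # returns once every background experiment has finished
echo "all experiments finished"
```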

Example with Mila dashboard

mila_dashboard_2021-06-15.png

Storage

| Path | Performance | Usage | Quota (Space/Files) | Backup | Auto-cleanup |
|---|---|---|---|---|---|
| /network/datasets/ | High | Curated raw datasets (read only) | | | |
| $HOME or /home/mila/<u>/<username>/ | Low | Personal user space; specific libraries, code, binaries | 100GB/1000K | Daily | no |
| $SCRATCH or /network/scratch/<u>/<username>/ | High | Temporary job results; processed datasets; optimized for small files | no | no | 90 days |
| $SLURM_TMPDIR | Highest | High speed disk for temporary job results | 4TB/- | no | at job end |
| /network/projects/<groupname>/ | Fair | Shared space to facilitate collaboration between researchers; long-term project storage | 200GB/1000K | Daily | no |
| $ARCHIVE or /network/archive/<u>/<username>/ | Low | Long-term personal storage | 500GB | no | no |

Note

The $HOME file system is backed up once a day. For any file restoration request, file a request to Mila’s IT support with the path to the file or directory to restore, and the required date.

Warning

Currently there is no backup system for any other file system of the Mila cluster. Storage local to personal computers, Google Drive and other related solutions should be used to back up important data.

$HOME

$HOME is appropriate for code and libraries which are small and read once, as well as for experimental results that will be needed at a later time (e.g. the weights of a network referenced in a paper).

Quotas are enabled on $HOME for both disk capacity (blocks) and number of files (inodes). The limits for blocks and inodes are respectively 100GiB and 1 million per user. The command to check the quota usage from a login node is:

beegfs-ctl --cfgFile=/etc/beegfs/home.d/beegfs-client.conf --getquota --uid $USER

$SCRATCH

$SCRATCH can be used to store processed datasets, work-in-progress datasets or temporary job results. Its block size is optimized for small files, which minimizes the performance hit of working on extracted datasets.

Note

Auto-cleanup: this file system is cleaned on a weekly basis; files not used for more than 90 days will be deleted.

$SLURM_TMPDIR

$SLURM_TMPDIR points to the local disk of the node on which a job is running. It should be used to copy the data onto the node at the beginning of the job and to write intermediate checkpoints. This folder is cleared after each job.
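A typical job following this pattern might look like the job-script sketch below. The dataset path, training script and its flags are hypothetical; only the copy-in, work-locally, copy-out pattern around $SLURM_TMPDIR is the point.

```shell
#!/bin/bash
#SBATCH --gres=gpu:1
#SBATCH --mem=16G

# 1. Copy the dataset to the node-local disk (fast reads during training).
cp -r $SCRATCH/my_dataset $SLURM_TMPDIR/

# 2. Train, writing intermediate checkpoints to the local disk.
#    (train.py and its flags are hypothetical placeholders.)
python train.py --data $SLURM_TMPDIR/my_dataset --checkpoints $SLURM_TMPDIR/ckpt

# 3. Copy results back before the job ends, since $SLURM_TMPDIR is cleared.
cp -r $SLURM_TMPDIR/ckpt $SCRATCH/my_experiment/
```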

projects

projects can be used for collaborative projects. It aims to ease the sharing of data between users working on a long-term project.

Quotas are enabled on projects for both disk capacity (blocks) and number of files (inodes). The limits for blocks and inodes are respectively 200GiB and 1 million per user and per group.

Note

It is possible to request higher quota limits if the project requires it. File a request to Mila’s IT support.

$ARCHIVE

The purpose of $ARCHIVE is to store data other than datasets that has to be kept long-term (e.g. generated samples, logs, data relevant for paper submission).

$ARCHIVE is only available on the login nodes. Because this file system is tuned for large files, it is recommended to archive your directories. For example, to archive the results of an experiment in $SCRATCH/my_experiment_results/, run the commands below from a login node:

cd $SCRATCH
tar cJf $ARCHIVE/my_experiment_results.tar.xz --xattrs my_experiment_results
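To restore such an archive later, the inverse operation can be sketched as below, mirroring the paths from the archiving example; run it from a login node.

```shell
# Extract the archived results back into scratch; --xattrs restores the
# extended attributes saved at archive time.
cd $SCRATCH
tar xJf $ARCHIVE/my_experiment_results.tar.xz --xattrs
```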

Disk capacity quotas are enabled on $ARCHIVE. The soft limit per user is 500GB and the hard limit is 550GB. The grace time is 7 days. This means that one can use more than 500GB for 7 days before the file system enforces the quota. However, it is not possible to use more than 550GB. The command to check the quota usage from a login node is df:

df -h $ARCHIVE

Note

There is NO backup of this file system.

datasets

datasets contains curated datasets for the benefit of the Mila community. To request the addition of a dataset or a preprocessed dataset that you think could benefit the research of others, you can fill this form.

Datasets in datasets/restricted are restricted and require an explicit request to gain access. Please submit a support ticket mentioning the dataset’s access group (ex.: scannet_users), your cluster username and the approval of the group owner. You can find the dataset’s access group by listing the content of /network/datasets/restricted with the ls command.

Data Transmission

Multiple methods can be used to transfer data to/from the cluster:

  • rsync --bwlimit=10mb; this is the favored method, since the bandwidth can be limited to prevent impacting other usage of the cluster

  • Compute Canada: Globus