NVIDIA’s singularity containers
NVIDIA provides many different Docker images containing scientific software through their NGC registry. The software is free to use on NVIDIA GPUs, and one can register for a free account to get access to the images.

You can use these images as a starting point for your own GPU images, but do be mindful of NVIDIA’s terms and conditions. If you want to store your own images that are based on NGC images, either use NGC itself or our own Docker registry that is documented on the singularity containers page.

We have converted some of these images, with minimal changes, into Singularity images that are available on Triton.

Currently updated images are:

  • nvidia-tensorflow: Contains TensorFlow. Due to the major changes between TensorFlow v1 and v2, image versions include either tf1 or tf2 to designate the major version of TensorFlow.

  • nvidia-pytorch: Contains PyTorch.

Various other images are available and can be installed quickly on request.

Running a simple TensorFlow/Keras model with NVIDIA’s containers

Let’s run the MNIST example from TensorFlow’s tutorials:

model = tf.keras.models.Sequential([
  tf.keras.layers.Flatten(input_shape=(28, 28)),
  tf.keras.layers.Dense(512, activation=tf.nn.relu),
  tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
The full code for the example is in tensorflow_mnist.py. One can run this example with srun:

wget https://raw.githubusercontent.com/AaltoSciComp/scicomp-docs/master/triton/examples/tensorflow/tensorflow_mnist.py
module load nvidia-tensorflow/20.02-tf1-py3
srun --time=00:15:00 --gres=gpu:1 singularity_wrapper exec python tensorflow_mnist.py

or with sbatch by submitting tensorflow_singularity_mnist.sh:

#!/bin/bash
#SBATCH --gres=gpu:1
#SBATCH --time=00:15:00

module load nvidia-tensorflow/20.02-tf1-py3

singularity_wrapper exec python tensorflow_mnist.py

Do note that by default Keras downloads datasets to $HOME/.keras/datasets.
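If you need to verify or redirect that location, the path can be computed with the standard library alone. This is a sketch that assumes recent Keras behaviour, where the base directory `~/.keras` can be overridden with the `KERAS_HOME` environment variable:

```python
import os

def keras_datasets_dir():
    """Return the directory where Keras caches downloaded datasets."""
    # Keras uses $KERAS_HOME if set, otherwise ~/.keras (an assumption
    # about recent Keras versions; check your installed version).
    base = os.environ.get(
        "KERAS_HOME", os.path.join(os.path.expanduser("~"), ".keras")
    )
    return os.path.join(base, "datasets")

print(keras_datasets_dir())
```

On Triton it can be useful to point this at your work directory instead of $HOME, since home quotas are small.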

Running a simple PyTorch model with NVIDIA’s containers

Let’s run the MNIST example from PyTorch’s tutorials:

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 20, 5, 1)
        self.conv2 = nn.Conv2d(20, 50, 5, 1)
        self.fc1 = nn.Linear(4*4*50, 500)
        self.fc2 = nn.Linear(500, 10)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.max_pool2d(x, 2, 2)
        x = F.relu(self.conv2(x))
        x = F.max_pool2d(x, 2, 2)
        x = x.view(-1, 4*4*50)
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        return F.log_softmax(x, dim=1)
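
The input size 4*4*50 of fc1 comes from tracking the spatial dimensions of a 28x28 MNIST image through the two convolution and pooling steps. A stdlib-only sketch of that arithmetic, using the standard output-size formulas:

```python
def conv2d_out(size, kernel, stride=1, padding=0):
    # Standard convolution output-size formula
    return (size + 2 * padding - kernel) // stride + 1

def pool_out(size, kernel, stride):
    # Max-pooling output-size formula (no padding)
    return (size - kernel) // stride + 1

size = 28                    # MNIST images are 28x28
size = conv2d_out(size, 5)   # conv1, 5x5 kernel -> 24
size = pool_out(size, 2, 2)  # 2x2 max pool      -> 12
size = conv2d_out(size, 5)   # conv2, 5x5 kernel -> 8
size = pool_out(size, 2, 2)  # 2x2 max pool      -> 4
flat = size * size * 50      # 50 output channels from conv2
print(flat)                  # 800 == 4*4*50
```

This is why the forward pass reshapes with x.view(-1, 4*4*50) before the fully connected layers.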

The full code for the example is in pytorch_mnist.py. One can run this example with srun:

wget https://raw.githubusercontent.com/AaltoSciComp/scicomp-docs/master/triton/examples/pytorch/pytorch_mnist.py
module load nvidia-pytorch/20.02-py3
srun --time=00:15:00 --gres=gpu:1 singularity_wrapper exec python pytorch_mnist.py

or with sbatch by submitting pytorch_singularity_mnist.sh:

#!/bin/bash
#SBATCH --gres=gpu:1
#SBATCH --time=00:15:00

module load nvidia-pytorch/20.02-py3

singularity_wrapper exec python pytorch_mnist.py

The Python script will download the MNIST dataset to a folder called data in the working directory.