This page explains how to run Matlab jobs on Triton and introduces important details about Matlab on Triton. (Note: we used to have the Matlab Distributed Computing Server (MDCS), but because of low use we no longer have a license. You can still run in parallel on one node, with up to 40 cores.)

Important notes

Matlab writes session data, compiled code, and additional toolboxes to ~/.matlab. This can quickly fill up your $HOME quota. To avoid this, we recommend replacing the folder with a symlink that points to a directory in your working directory ($WRKDIR):

rsync -lrt ~/.matlab/ $WRKDIR/matlab-config/ && rm -r ~/.matlab
ln -sT $WRKDIR/matlab-config ~/.matlab
quotafix -gs --fix $WRKDIR/matlab-config
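The same pattern, demonstrated on throwaway paths so you can see the end state (the directory names here are hypothetical stand-ins for ~/.matlab and $WRKDIR/matlab-config, and cp -a stands in for the rsync step above):

```shell
# Stand-in directories, for illustration only
WRK=$(mktemp -d)                      # plays the role of $WRKDIR
HOME_CFG=$(mktemp -d)/.matlab         # plays the role of ~/.matlab
mkdir -p "$HOME_CFG" "$WRK/matlab-config"
touch "$HOME_CFG/prefs.ini"           # pretend there is session data

# Move the contents, then replace the directory with a symlink
cp -a "$HOME_CFG/." "$WRK/matlab-config/" && rm -r "$HOME_CFG"
ln -sT "$WRK/matlab-config" "$HOME_CFG"

ls -ld "$HOME_CFG"                    # now a symlink into the work dir
```

After this, Matlab still finds its configuration at ~/.matlab, but the data is counted against the work-directory quota instead of $HOME.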

If you run parallel code in Matlab, keep in mind that Matlab uses your home folder as storage for the worker files. If you run multiple jobs, you therefore have to keep the worker folders separate. To do this, set the worker storage location (the JobStorageLocation field of the parallel cluster) to a location unique to the job:

% Initialize the parallel cluster object
pc = parcluster('local');

% Create a temporary folder for the workers working on this job,
% in order not to conflict with other jobs.
tmpdir = fullfile(getenv('TMPDIR'), getenv('SLURM_JOB_ID'));
mkdir(tmpdir);

% set the worker storage location of the cluster
pc.JobStorageLocation = tmpdir;

In addition, the number of parallel workers needs to be provided explicitly when initializing the parallel pool:

% get the number of workers based on the CPUs allocated by Slurm
num_workers = str2double(getenv('SLURM_CPUS_PER_TASK'));

% start the parallel pool on the cluster object configured above
parpool(pc, num_workers);

Here we provide a small script that does all these steps for you.
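One possible version of such a script, assembled from the snippets above (the script name, the use of $TMPDIR, and the 'local' profile are assumptions, not verbatim from this page):

```matlab
% parpool_setup.m (hypothetical name): job-safe parpool initialization.
% Assumes a Slurm job environment providing $TMPDIR, $SLURM_JOB_ID,
% and $SLURM_CPUS_PER_TASK.
pc = parcluster('local');

% Per-job worker storage so concurrent jobs do not clash
tmpdir = fullfile(getenv('TMPDIR'), getenv('SLURM_JOB_ID'));
mkdir(tmpdir);
pc.JobStorageLocation = tmpdir;

% Match the pool size to the CPUs Slurm allocated
num_workers = str2double(getenv('SLURM_CPUS_PER_TASK'));
parpool(pc, num_workers);
```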

Interactive usage

Interactive usage is currently available via the sinteractive tool. Do not use the cluster front-end for this; connect to a node with sinteractive. The login node is only meant for submitting jobs and compiling. To run an interactive session with a user interface, run the following commands from a terminal:

ssh -X
sinteractive
module load matlab
matlab &

Simple serial script

Running a simple Matlab job is easy through the Slurm queue. A sample Slurm script is provided below:

#!/bin/bash -l
#SBATCH --time=00:05:00
#SBATCH --mem=100M
#SBATCH -o serial_Matlab.out
module load matlab
srun matlab -nojvm -nosplash -r "serial_Matlab($n,$m) ; exit(0)"

The above script can then be saved as a file and the job submitted with sbatch. The actual calculation is done in the serial_Matlab.m file:

function C = serial_Matlab(n,m)
        % Try-catch so that Matlab exits cleanly even if the code errors
        try
                A = rand(n,m);
                B = rand(n,m);
                C = rand(n,n);
                C = A*(B.') + 2.0*C
        catch error
                disp(getReport(error));
        end
end
Remember to always include exit in your Slurm script so that Matlab quits once the function serial_Matlab has finished. Using a try-catch statement allows your job to finish even if your code raises an error. If you don’t do this, Matlab drops into interactive mode and does nothing while your job wastes time.

NOTE: Starting from version R2019a, the launch options -r "...; exit(0)" can be replaced with the -batch option, which automatically exits Matlab at the end of the command that is passed (see Matlab’s documentation for details). The last command of the Slurm script above then becomes:

srun matlab -nojvm -nosplash -batch "serial_Matlab($n,$m);"

Running Matlab Array jobs

The most common way to utilize Matlab is to write a single .m file that can be used to run tasks as a non-interactive batch job. These jobs are submitted as independent tasks, and when the heavy part is done, the results are collected for analysis. For these kinds of jobs, Slurm array jobs are the best choice. For more information, see Array jobs in the Triton user guide.

Here is an example of testing multiple mutation rates for a genetic algorithm. First, the matlab code.

% set the mutation rate
mutationRate = str2double(getenv('SLURM_ARRAY_TASK_ID'))/100;
opts = optimoptions('ga','MutationFcn', {@mutationuniform, mutationRate});

% Set population size and end criteria
opts.PopulationSize = 100;
opts.MaxStallGenerations = 50;
opts.MaxGenerations = 200000;

%set the range for all genes
opts.InitialPopulationRange = [-20;20];

% define number of variables (genes)
numberOfVariables = 6;

[x,Fval,exitFlag,Output] = ga(@fitness,numberOfVariables,[],[],[], ...
    [],[],[],[],opts);

output = [4,-2,3.5,5,-11,-4.7] * x'

save(['MutationJob' getenv('SLURM_ARRAY_TASK_ID') '.mat'], 'output');


function fit = fitness(x)
    output = [4,-2,3.5,5,-11,-4.7] * x';
    fit = abs(output - 44);
end

We run this code with the following Slurm script, submitted with sbatch:

#!/bin/bash -l
#SBATCH --time=00:30:00
#SBATCH --array=1-100
#SBATCH --mem=500M
#SBATCH --output=r_array_%a.out

module load matlab

srun matlab -nodisplay -r serial

Collecting the results

Finally, a wrapper script that reads in the .mat files and plots the resulting values:

function collectResults(maxMutationRate)
   results = zeros(maxMutationRate, 1);
   for index = 1:maxMutationRate
      % read the output from the jobs
      filename = strcat( 'MutationJob', int2str( index ) );
      data = load( filename );
      results(index) = data.output;
   end
   % plot the collected fitness values against the mutation rate
   plot((1:maxMutationRate)/100, results);
end


Seeding the random number generator

Note that by default Matlab always initializes the random number generator with a constant value. If you launch several Matlab instances, e.g. to calculate distinct ensembles, you need to seed the random number generator so that it is distinct for each instance. You can do this by calling the rng() function with the value of $SLURM_ARRAY_TASK_ID.
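For example, in an array job each task can seed from its own task id (a sketch; the fallback to 'shuffle' outside Slurm is an addition, not from the original page):

```matlab
% Seed each array task differently; outside Slurm, fall back to 'shuffle'.
seed = str2double(getenv('SLURM_ARRAY_TASK_ID'));
if isnan(seed)
    rng('shuffle');
else
    rng(seed);
end
```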

Parallel Matlab with Matlab’s internal parallelization

Matlab has internal parallelization that can be activated by requesting more than one CPU per task in the Slurm script and using matlab_multithread to start the interpreter.

#SBATCH --time=00:15:00
#SBATCH --mem=500M
#SBATCH --cpus-per-task=4
#SBATCH --output=ParallelOut

module load matlab

srun matlab_multithread -nodisplay -r parallel_fun

No special code is needed in parallel_fun to benefit from this: many of Matlab's built-in operations (matrix products, decompositions, FFTs) are implicitly multithreaded and will use the CPUs allocated to the task.
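A minimal sketch of what parallel_fun.m could contain (the function body and the use of maxNumCompThreads are assumptions for illustration, not from the original page):

```matlab
% parallel_fun.m (hypothetical): rely on Matlab's implicit multithreading.
% Cap the thread count at the CPUs Slurm allocated to this job.
n = str2double(getenv('SLURM_CPUS_PER_TASK'));
if ~isnan(n)
    maxNumCompThreads(n);
end

% Large matrix multiplication uses multithreaded BLAS automatically.
A = rand(4000);
B = rand(4000);
C = A*B;
fprintf('norm = %g\n', norm(C, 'fro'));
exit(0);
```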

Parallel Matlab with parpool

Often one uses Matlab’s parallel pool for parallelization. When using parpool, one needs to specify the number of workers, and this number should match the number of CPUs requested. parpool requires the JVM, so when launching the interpreter use -nodisplay instead of -nojvm. Example:

Slurm script:

#!/bin/bash -l
#SBATCH --time=00:15:00
#SBATCH --mem=500M
#SBATCH --cpus-per-task=4
#SBATCH --output=matlab_parallel.out

module load matlab

srun matlab_multithread -nodisplay -r parallel

An example script is provided below:

% Start the parallel pool with as many workers as CPUs allocated
parpool(str2double(getenv('SLURM_CPUS_PER_TASK')));

% Create matrices to invert
mat = rand(1000,1000,6);
invMats = zeros(size(mat));

% Invert all matrices in parallel
parfor i=1:size(mat,3)
    invMats(:,:,i) = inv(mat(:,:,i));
end

% And now, we proceed to build the averages of each set of inverted matrices,
% each time leaving out one.
res = zeros(size(mat));
parfor i=1:size(invMats,3)
    usedelements = true(size(invMats,3),1);
    usedelements(i) = false;
    res(:,:,i) = inv(mean(invMats(:,:,usedelements),3));
end

% end the program
exit(0);

Parallel matlab in exclusive mode

#!/bin/bash -l
#SBATCH --time=00:15:00
#SBATCH --exclusive
#SBATCH -o parallel_Matlab3.out

export OMP_NUM_THREADS=$(nproc)

module load matlab/r2017b
matlab_multithread -nosplash -r "parallel_Matlab3($OMP_NUM_THREADS) ; exit(0)"


function parallel_Matlab3(n)
        % Try-catch expression that quits the Matlab session if your code crashes
        try
                % Initialize the parallel pool
                c = parcluster('local');
                % Ensure that workers don't overlap with other jobs on the cluster
                t = tempname(); mkdir(t);
                c.JobStorageLocation = t;
                parpool(c, n);
                % The actual program calls from matlab's example.
                % The path for r2017b
                addpath(strcat(matlabroot, '/examples/distcomp/main'));
                % The path for r2016b
                % addpath(strcat(matlabroot, '/examples/distcomp'));
        catch error
                disp('Error occured');
                disp(getReport(error));
        end
end

Hints for Condor users

The above example also works (even more conveniently) with Condor.

A wrapper script to execute Matlab on a department workstation:

#!/bin/bash -l
# a wrapper to run Matlab with condor
matlab -nojvm -r "run($block,$pointsPerBlock,$totalBlocks)"

Condor submission script

Condor has built-in array job functionality that makes this task easier.

## Condor submit description (script) file for my_program.exe.
## 1. Specify the [path and] name for the executable file...
Executable =
## 2. Specify Condor execution environment.
Universe = vanilla
Notification = Error
## 3. Specify remote execution machines running Linux (required)...
Requirements = ((OpSys == "Linux") || (OpSysName == "Ubuntu"))
## 4. Define input files and arguments
#Input = stdin.txt.$(Process)
Arguments = $(Process)
## 5. Define output/error/log files
Output = log/stdout.$(Process).txt
Error  = log/stderr.$(Process).txt
Log    = log/log.$(Process).txt
## 6. Tell Condor which files need to be transferred and when.
Transfer_input_files = run.m
Transfer_output_files = output-$(Process).mat
Transfer_executable = true
Should_transfer_files = YES
When_to_transfer_output = ON_EXIT
## 7. Add 10 copies of the job to the queue
Queue 10

FAQ / troubleshooting

If things randomly don’t work, try removing or moving the ~/.matlab directory (or just the ~/.matlab/Rxxxxy version directory) to see whether the problem is caused by your configuration.

Random error messages about things not loading, or something (e.g. the Matlab Live Editor) not working: run ls *.m in your current directory. Do you have unexpected files such as pathdef.m there? Remove them.

Also, check your home quota: .matlab often grows large and fills up your home directory. See the fix at the very top of this page, under “Important notes”.
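A quick way to see how much space the Matlab configuration takes (du is standard on any Linux machine; the quotafix helper mentioned at the top is Triton-specific):

```shell
# Show the size of the Matlab configuration directory, if it exists.
du -sh ~/.matlab 2>/dev/null || echo "no ~/.matlab directory"
```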