Serial Jobs¶
Video
Watch this in our courses: 2022 February, 2021 January
Abstract
Batch scripts let you run work non-interactively, which is important for scaling. You create a batch script, which runs in the background. You come back later and see the results.
Example batch script, submit with sbatch the_script.sh:

#!/bin/bash -l
#SBATCH --time=01:00:00
#SBATCH --cpus-per-task=2
#SBATCH --mem=4G

module load anaconda
python my_script.py
See the quick reference for a complete list of options.
Prerequisites¶
Why batch scripts?¶
You learned, in Slurm: the queuing system, how all Triton users must do their computation by submitting jobs to the Slurm batch system to ensure efficient resource sharing. This lets you run many things at once without having to watch each one separately - the true power of the cluster.
A batch script is simply a shell script (remember Using the cluster from a shell?), where you put your resource requests and job steps.
Your first job script¶
A job script is simply a shell script (Bash). The first line of the script should therefore be the shebang directive (#!) followed by the full path to the executable binary of the shell’s interpreter, which is Bash in our case. Then come the resource requests, followed by the job steps.
Let’s take a look at the following script:
#!/bin/bash
#SBATCH --time=00:05:00
#SBATCH --mem-per-cpu=100M
#SBATCH --output=pi.out
echo "Hello $USER! You are on node $HOSTNAME. The time is $(date)."
# For the next line to work, you need to be in the
# hpc-examples directory.
srun python slurm/pi.py 10000
Let’s name it run-pi.sh (create a file using your editor of choice, e.g. nano; write the script above and save it).
The symbol # starts a comment in a Bash script, but Slurm understands #SBATCH as parameters determining the resource requests. Here, we have requested a time limit of 5 minutes, along with 100 MB of RAM per CPU.
Resource requests are followed by job steps, which are the actual tasks to be done. Each srun within a Slurm script is a job step, and appears as a separate row in your history - which is useful for monitoring.
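For example, a script with two srun lines produces two job steps, each shown as its own row in your job history (a minimal sketch; the commands are placeholders based on the example above):

#!/bin/bash
#SBATCH --time=00:05:00
#SBATCH --mem-per-cpu=100M

srun hostname                   # first job step: print the node name
srun python slurm/pi.py 10000   # second job step: the actual computation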
Having written the script, you need to submit the job to Slurm through the sbatch command. Since the script runs python slurm/pi.py, you need to be in the hpc-examples directory from our sample project:
$ cd hpc-examples # wherever you have hpc-examples
$ sbatch run-pi.sh
Submitted batch job 52428672
Warning
You must use sbatch, not bash, to submit the job, since it is Slurm that understands the #SBATCH directives, not Bash.
When the job enters the queue successfully, a confirmation that the job has been submitted is printed in your terminal, along with the jobid assigned to the job.
You can check the status of your jobs using slurm q/slurm queue (or squeue -u $USER):
$ slurm q
JOBID PARTITION NAME TIME START_TIME STATE NODELIST(REASON)
52428672 debug run-pi.sh 0:00 N/A PENDING (None)
Once the job is completed successfully, the state changes to COMPLETED and the output is saved to pi.out in the current directory. You can also use wildcards like %u for your username and %j for the jobid in the output file name. See the documentation of sbatch for a full list of available wildcards.
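For example, this directive (a variation on the script above, using the wildcards just described) names the output file after your username and the jobid:

#SBATCH --output=pi-%u-%j.out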
Setting resource parameters¶
The resources were discussed in Slurm: the queuing system, and barely need to be mentioned again here - the point is they are the same. For example, you might use --mem=5G or --time=5:00:00. Always keep the reference page close for looking these up.
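In a batch script these go on #SBATCH lines before the job steps; a minimal sketch using the two options just mentioned:

#!/bin/bash
#SBATCH --time=5:00:00
#SBATCH --mem=5G

srun python slurm/pi.py 10000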
Checking your jobs¶
Once you submit your job, it goes into a queue. The two most useful commands to see the status of your jobs are slurm q/slurm queue and slurm h/slurm history (or squeue -u $USER and sacct -u $USER).
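For example (slurm q and slurm h are the short aliases; the plain Slurm equivalents are shown as comments):

$ slurm queue        # or: squeue -u $USER
$ slurm history      # or: sacct -u $USER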
More information is in the monitoring tutorial.
Cancelling a job¶
You can cancel jobs with scancel JOBID. To obtain the job id, use the monitoring commands.
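For example, to cancel the job submitted above, use the jobid that sbatch printed:

$ scancel 52428672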
Full reference¶
The reference page contains it all, or expand it below.
Slurm quick ref
Command | Description
---|---
sbatch | submit a job to queue (see standard options below)
srun | Within a running job script/environment: run code using the allocated resources (see options below)
srun | On frontend: submit to queue, wait until done, show output (see options below)
sinteractive | Submit job, wait, provide shell on node for interactive playing (X forwarding works, default partition interactive). Exit shell when done. (see options below)
srun --pty bash | (advanced) Another way to run interactive jobs, no X forwarding but simpler. Exit shell when done.
scancel JOBID | Cancel a job in queue
salloc | (advanced) Allocate resources from frontend node. Use srun to run commands with the allocated resources.
scontrol | View/modify job and slurm configuration

Command | Option | Description
---|---|---
sbatch/srun | -t, --time=hh:mm:ss | time limit
sbatch/srun | -t, --time=dd-hh | time limit, days-hours
sbatch/srun | -p, --partition=PARTITION | job partition. Usually leave off and things are auto-detected.
sbatch/srun | --mem-per-cpu=n | request n MB of memory per core
sbatch/srun | --mem=n | request n MB memory per node
sbatch/srun | -c, --cpus-per-task=n | allocate n CPUs for each task. For multithreaded jobs. (compare --ntasks: -c means the number of cores for each process started.)
sbatch/srun | -N, --nodes=n-m | allocate minimum of n, maximum of m nodes.
sbatch/srun | -n, --ntasks=n | allocate resources for and start n tasks (one task = one process started; it is up to you to make them communicate. However, the main script runs only on the first node; the sub-processes run with srun are run this many times.)
sbatch/srun | -J, --job-name=name | short job name
sbatch/srun | -o, --output=outputfile | print output into file outputfile
sbatch/srun | -e, --error=errorfile | print errors into file errorfile
sbatch/srun | --exclusive | allocate exclusive access to nodes. For large parallel jobs.
sbatch/srun | --constraint=FEATURE | request a node feature (see the quick reference for available features)
sbatch/srun | --array=0-5,7,10-15 | Run job multiple times; use variable $SLURM_ARRAY_TASK_ID to adjust parameters.
sbatch/srun | --gres=gpu:1 | request a GPU, or --gres=gpu:KIND:1 for a specific kind
sbatch/srun | --gres=spindle | request nodes that have local disks
sbatch/srun | --mail-type=TYPE | notify of events: BEGIN, END, FAIL, or ALL
sbatch/srun | --mail-user=your@email | whom to send the email
(in script) | echo $SLURM_JOB_NODELIST | Print allocated nodes (from within script)
Exercises¶
The scripts you need for the following exercises can be found in this git repository: hpc-examples. You can clone the repository by running git clone https://github.com/AaltoSciComp/hpc-examples.git. This repository will be used for most of the tutorial exercises.
Serial-1: Basic batch job
Submit a batch job that just runs hostname and pi.py.
Set time to 1 hour and 15 minutes, memory to 500MB.
Change the job’s name and output file.
Check the output. Does the printed hostname match the one given by slurm history/sacct -u $USER?
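A minimal sketch of a script meeting these requirements (the job name and output file name here are arbitrary choices):

#!/bin/bash
#SBATCH --time=01:15:00
#SBATCH --mem=500M
#SBATCH --job-name=serial-1
#SBATCH --output=serial-1.out

srun hostname
srun python slurm/pi.py 10000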
Serial-2: Submitting and cancelling a job
Create a batch script which does nothing (or some pointless
operation for a while), for example sleep 300
(this shell
command does nothing for 300 seconds). Check the queue to see when
it starts running. Then, cancel the job. What output is produced?
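A minimal sketch (the time limit just needs to exceed the 300-second sleep):

#!/bin/bash
#SBATCH --time=00:10:00
#SBATCH --mem=100M

srun sleep 300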
Serial-3: Checking output
Create a Slurm script that runs the following program:
for i in $(seq 30); do
date
sleep 10
done
Submit the job to the queue.
Log out from Triton. Log back in and use slurm queue/squeue -u $USER to check the job status.
Use cat NAME_OF_OUTPUTFILE to check the output periodically. You can use tail -f NAME_OF_OUTPUTFILE to view the progress in real time as new lines are added (Control-C to cancel).
Cancel the job once you’re finished.
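A sketch of a complete batch script around the program above (the loop takes 30 × 10 = 300 seconds, so the time limit must be at least 5 minutes):

#!/bin/bash
#SBATCH --time=00:15:00
#SBATCH --mem=100M
#SBATCH --output=date-loop.out

for i in $(seq 30); do
    date
    sleep 10
done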
Serial-4: Constrain to a certain CPU architecture
Modify the script from exercise #1 to run on only one type of CPU using the --constraint option. Hint: check the Triton quick reference.
Serial-5: Why you use sbatch, not bash
(Advanced) What happens if you submit a batch script with bash instead of sbatch? Does it appear to run? Does it use all the Slurm options?
Solution
It looks like it runs, but it is actually only running on the login node! If you used srun python3 slurm/pi.py 10000, then it would request a Slurm allocation, but not use any of the #SBATCH parameters, so it might not request the resources you need:
$ bash run-pi.sh
Calculating Pi via 10000 stochastic trials
{"successes": 7815, "pi_estimate": 3.126, "iterations": 10000}
(advanced) Serial-6: Interpreters other than bash
(Advanced) Create a batch script that runs in another language using a different #! line. Does it run? What are some of the advantages and problems here?
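For instance, a sketch using Python as the interpreter. Slurm still reads the #SBATCH lines, since it scans them before the script is executed and they are ordinary comments in Python as well:

#!/usr/bin/env python3
#SBATCH --time=00:05:00
#SBATCH --mem=100M

# The #SBATCH lines above are parsed by Slurm at submission time;
# Python simply ignores them as comments.
import socket
print("Running on", socket.gethostname())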
(advanced) Serial-7: Job environment variables.
Either make an sbatch script that runs the command env | sort, or use srun env | sort. The env utility prints all environment variables, and sort sorts them (and | connects the output of env to the input of sort).
This will show all of the environment variables that are set in the
job. Note the ones that start with SLURM_
. Notice how they
reflect the job parameters. You can use these in your jobs if
needed (for example, a job that will adapt to the number of
allocated CPUs).
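A minimal sketch of the sbatch variant; the grep filter is an optional convenience for spotting the Slurm-specific variables:

#!/bin/bash
#SBATCH --time=00:05:00
#SBATCH --mem=100M

# Print the job's environment; the SLURM_ variables reflect the job parameters.
env | sort | grep '^SLURM_'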
What’s next?¶
There are various tools you can use to monitor your jobs.