There are many ways to submit Slurm jobs in parallel; here I will share the one I use the most. This template can be looped over a list of entries to submit them all at once, which is especially practical when you need to run hundreds of samples at the same time.
Pay attention to the administrative limits imposed by your admin: 500 jobs is the limit we were given, and similar caps are common.
You can loop within your Slurm submission script to request multiple sessions, or parallelize within your code, but when dealing with a large number of samples I like my way better: I have better control over individual jobs, and combining it with parallelism inside each of those jobs powers it up even more. If one node mysteriously fails (which can happen, especially when you run hundreds of samples), I can easily see which one and resubmit it. Please feel free to choose whatever you like; whichever way works for you is the best way.
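As an illustration (not part of the original scripts), here is a hedged sketch of how failed jobs can be listed for resubmission. The `sacct` options used (`--state`, `--starttime`, `--format`) are standard Slurm; the guard only lets the snippet run on a machine without Slurm installed:

```shell
#!/bin/bash
# Sketch: collect the JobIDs of today's failed jobs into a file.
if command -v sacct >/dev/null 2>&1; then
    # One JobID per line for every job that failed since midnight.
    sacct --noheader --state=FAILED --starttime=today --format=JobID > failed_jobs.txt
else
    : > failed_jobs.txt   # empty list when not on a Slurm cluster
fi
echo "failed jobs today: $(wc -l < failed_jobs.txt)"
```

Each listed JobID maps back to one per-sample script, so resubmitting a failure is just another `sbatch` on that script.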
You will need two files: one is the loop function, the other is your Slurm template. Here is the usage:
– Have your sample list as a txt file with one column containing your sample names; in this template it is noted as
– Have your yourSlurmScript.sh composed well, and replace the places where your sample name will go with “Z”. (You can use any character that is not present in your yourSlurmScript.sh; I find that a capital “Z” never appears in my code, and “X” is also a common choice.)
– Put your yourSlurmScript.sh file name into the batchSubmit.sh script, and run as below:
./batchSubmit.sh # you can change the name to whatever you want
1. Loop function, batchSubmit.sh
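A minimal sketch of what the loop function does, under my assumptions: a hypothetical sample list file (`sampleList.txt`, one name per line), the template `yourSlurmScript.sh` with “Z” as the placeholder, and per-sample scripts named `job_<sample>.sh`. The first two `printf` lines only create demo inputs so the sketch is self-contained; on your cluster those files already exist:

```shell
#!/bin/bash
set -euo pipefail

# --- demo inputs (remove these two lines on a real cluster) ---
printf 'sampleA\nsampleB\n' > sampleList.txt
printf '#!/bin/bash\n#SBATCH --job-name=Z\necho "processing Z"\n' > yourSlurmScript.sh

while read -r sample; do
    # Write a per-sample copy of the template with every "Z" replaced
    # by the sample name.
    sed "s/Z/${sample}/g" yourSlurmScript.sh > "job_${sample}.sh"
    if command -v sbatch >/dev/null; then
        sbatch "job_${sample}.sh"   # submit; skipped when Slurm is absent
    fi
done < sampleList.txt
```

This submits one independent job per line of the sample list, which is what makes per-sample monitoring and resubmission easy.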
Something very important here: ALWAYS rsync your files into the tmp folder assigned on your node and run your job there; don’t use cp, especially when your jobs are “heavy”. Otherwise, I promise your server admin will invite you in for a serious talk…
The two scripts above can be downloaded here: