
SLURM job submission steps!

July 19, 2019, 19:01, #21
clapointe (Senior Member)
Ah, so the point is that you were running them sequentially, but want to farm them out as individual jobs? You want to look into slurm job arrays. I don't have any scripts handy, but someone else might. Otherwise, a quick google reveals a ton of hits.

Caelan
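A minimal job-array sketch along those lines, assuming the individual cases are driven by scripts named layer30.sbatch, layer31.sbatch, ... (as in the later post) and that each of those can also be run directly with bash; the partition, walltime, and array range are placeholders:

Code:
#!/bin/bash
#SBATCH --job-name=layers
#SBATCH --partition=long
#SBATCH --time=00:12:00
#SBATCH --cpus-per-task=1
## %a expands to the array task ID, so each task writes its own output file
#SBATCH --output=layer_%a.txt
## one array task per case script; adjust the range to match the scripts you have
#SBATCH --array=30-50

# load modules here

# each array task runs one of the existing per-case scripts
bash layer${SLURM_ARRAY_TASK_ID}.sbatch
squeue then shows one entry per array task, and each task writes its own layer_<ID>.txt file.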

July 19, 2019, 19:12, #22
Sgvaa (Sumathigokul, New Member)
Yeah, I want to submit each job separately so that their outputs are saved as separate files; it's not necessary for the jobs to be submitted sequentially.

July 19, 2019, 19:24, #23
Sgvaa (Sumathigokul, New Member)
Quote:
Originally Posted by clapointe
If they are to be run sequentially, then you can combine them to form one (long) script. Or write a script that calls the others. In pseudo code :

Code:
slurm stuff here

load modules

source script 1
source script 2
source script 3
...
source script 50
Caelan
I tried the following, but it was again created as a single job only.

#!/bin/bash
#SBATCH --job-name=XXX
#SBATCH --cpus-per-task=1
#SBATCH --output=XXX.txt
#SBATCH --partition=long
#SBATCH --time=00:12:00
#SBATCH --mail-type=BEGIN,END,FAIL,TIME_LIMIT
#SBATCH --signal=XXX

load modules

/folder1/folder2/folder3 sbatch layer30.sbatch
/folder1/folder2/folder3 sbatch layer31.sbatch
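One way to get separate jobs with separate output files is to call sbatch once per script from the login shell rather than from inside another batch job; a minimal sketch, reusing the /folder1/folder2/folder3 path and the layer*.sbatch names from the post above:

Code:
#!/bin/bash
# run this on the login node, not inside an sbatch job:
# every sbatch call queues an independent job with its own output file
cd /folder1/folder2/folder3 || exit 1
for script in layer*.sbatch; do
    sbatch "$script"
done
Each sbatch call returns its own job ID, and every job writes to whatever --output file its layer*.sbatch header specifies.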

November 17, 2019, 12:51, #24
dduque (Daniel Duque, New Member)
I have finally been able to run in parallel within a SLURM cluster with this two-file procedure!

November 17, 2019, 12:56, #25
dduque (Daniel Duque, New Member)
Quote:
Originally Posted by AshaEgreck
mpirun --oversubscribe -np 4 buoyantBoussinesqPimpleFoam -parallel > ...

blockMesh
setFields
decomposePar

srun -n 1 my_mpi_application
Yes, this two-file procedure has finally done the trick on a SLURM cluster!

(Posted twice, but I've been unable to delete the previous post...)
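The quoted post is heavily truncated, so the following is only one hedged reading of the "two-file procedure": a case script that runs the serial pre-processing and then launches the solver with mpirun, plus a SLURM script that starts that case script as a single task. The file names (run_case.sh, submit.sbatch), the job name, the core counts, and the log redirect target are all assumptions.

Code:
#!/bin/bash
# run_case.sh (name assumed): serial pre-processing, then the parallel solver
blockMesh
setFields
decomposePar
# log file name assumed; the quote truncates the redirect target
mpirun --oversubscribe -np 4 buoyantBoussinesqPimpleFoam -parallel > log 2>&1

Code:
#!/bin/bash
## submit.sbatch (name assumed): SLURM starts a single task, which runs the case script
#SBATCH --job-name=foamcase
#SBATCH --ntasks=1
## 4 cores for the 4 mpirun ranks (value assumed)
#SBATCH --cpus-per-task=4

# source the cluster's OpenFOAM environment here

# only one SLURM task is launched; mpirun inside run_case.sh therefore
# needs --oversubscribe to start its 4 ranks
srun -n 1 ./run_case.sh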

April 10, 2020, 15:04, #26
guanjiang.chen (Guanjiang Chen, Member)
Quote:
Originally Posted by SymplPilot
Hi everyone,

Our cluster uses the SLURM batch system also. However, the "srun" command is recommended to run all jobs. For example, to run a Foam application on 16 cores one has to use something like:

srun -n 16 xxxFoam -parallel

This is a problem, since OpenFOAM expects to see:

mpirun -np 16 xxxFoam -parallel

I wonder if anybody knows how to fix OpenFOAM to run with "srun" and "-n" as opposed to "mpirun" and "-np" ? Thanks for the help.

-SP
Have you solved this problem?
I use "srun --mpi=pmi2 pisoFoam -parallel" or "srun --mpi=pmi2 -n 26 pisoFoam -parallel"
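For context, a minimal sketch of a complete submission script built around the srun --mpi=pmi2 form quoted above; the job name, walltime, environment path, and log file name are assumptions, and the 26 ranks must match the case decomposition:

Code:
#!/bin/bash
#SBATCH --job-name=pisoCase
#SBATCH --ntasks=26
#SBATCH --time=12:00:00

# source the OpenFOAM environment provided by the cluster, e.g.
# source /path/to/OpenFOAM-6/etc/bashrc

# requires numberOfSubdomains 26 in system/decomposeParDict
decomposePar

# SLURM launches the MPI ranks directly; no mpirun call is needed
srun --mpi=pmi2 -n 26 pisoFoam -parallel > log.pisoFoam 2>&1

reconstructPar
With --mpi=pmi2, srun itself bootstraps the MPI ranks, so the mpirun/-np form is not needed.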

June 7, 2020, 12:51, #27
guanjiang.chen (Guanjiang Chen, Member)
My sh file works, but it may need some changes for different cases.
#!/bin/bash
## Submission script for Cluster
#SBATCH --job-name=test
#SBATCH --time=0:01:00 # hh:mm:ss
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=28
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=1000 # megabytes
#SBATCH --partition=cpu

## system error message output file
## leave %j as it's being replaced by JOB ID number
#SBATCH -e foamjobname.std.err_%j

## system message output file
#SBATCH -o foamjobname.std.out_%j

## send mail after job is finished
#SBATCH --mail-type=end
#SBATCH --mail-user=1224133639@qq.com

# Load modules required for runtime e.g.
source /mnt/storage/scratch/va19337/OpenFOAM/OpenFOAM-6/etc/bashrc WM_LABEL_SIZE=64 WM_MPLIB=OPENMPI FOAMY_HEX_MESH=yes


##### WM_MPLIB = SYSTEMOPENMPI | OPENMPI | SYSTEMMPI | MPICH | MPICH-GM | HPMPI
###### | MPI | FJMPI | QSMPI | SGIMPI | INTELMPI


# collect the names of all allocated nodes into a hostfile for mpirun
srun hostname > all_nodes

# Intel MPI setting; likely has no effect here since the OpenFOAM environment above uses WM_MPLIB=OPENMPI
export I_MPI_PROCESS_MANAGER=mpd


#decomposePar

# sanity checks: record the library path and mpirun location, print the PMI library (if any)
echo $LD_LIBRARY_PATH > all_nodes1
which mpirun > all_nodes2
echo $I_MPI_PMI_LIBRARY

## run my MPI executable
##mpirun -np 112 pisoFoam -parallel
# 112 ranks = 4 nodes x 28 tasks per node; must match numberOfSubdomains in system/decomposeParDict
mpirun --hostfile all_nodes -np 112 icoFoam -parallel
