|
July 19, 2019, 19:01 |
|
#21 |
Senior Member
Join Date: Aug 2015
Posts: 494
Rep Power: 15 |
Ah, so the point is that you were running them sequentially, but want to farm them out as individual jobs? You want to look into slurm job arrays. I don't have any scripts handy, but someone else might. Otherwise, a quick google reveals a ton of hits.
Caelan |
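For reference, a minimal sketch of a Slurm job array that submits each layer case as its own job with its own output file (the case directory and solver name below are placeholders, not taken from this thread):
Code:
#!/bin/bash
#SBATCH --job-name=layers
#SBATCH --cpus-per-task=1
#SBATCH --time=00:12:00
#SBATCH --output=layer_%a.txt   # %a expands to the array index
#SBATCH --array=30-31           # one independent job per index

# Each array task runs in its own case directory and writes its own log.
cd /path/to/case_${SLURM_ARRAY_TASK_ID}
srun yourSolver > log.${SLURM_ARRAY_TASK_ID} 2>&1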
|
July 19, 2019, 19:12 |
|
#22 |
New Member
Sumathigokul
Join Date: Jul 2019
Posts: 4
Rep Power: 7 |
Yeah, I want to submit all the jobs separately so that their outputs are saved as separate files, but it's not necessary for the jobs to be submitted sequentially.
|
|
July 19, 2019, 19:24 |
|
#23 | |
New Member
Sumathigokul
Join Date: Jul 2019
Posts: 4
Rep Power: 7 |
Quote:
#!/bin/bash
#SBATCH --job-name=XXX
#SBATCH --cpus-per-task=1
#SBATCH --output=XXX.txt
#SBATCH --partition=long
#SBATCH --time=00:12:00
#SBATCH --mail-type=BEGIN,END,FAIL,TIME_LIMIT
#SBATCH --signal=XXX

load modules
/folder1/folder2/folder3
sbatch layer30.sbatch
/folder1/folder2/folder3
sbatch layer31.sbatch |
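If the intent is just to launch the two layer jobs independently, those sbatch calls can live in an ordinary driver script run from the login node rather than inside another batch script; a minimal sketch, reusing the placeholder path from the quote above:
Code:
#!/bin/bash
# Run this on the login node; each sbatch call queues a separate job
# whose output goes to the file named in that job's own --output directive.
cd /folder1/folder2/folder3
sbatch layer30.sbatch
sbatch layer31.sbatch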
||
November 17, 2019, 12:51 |
|
#24 |
New Member
Daniel Duque
Join Date: Jan 2011
Location: ETSIN, Madrid
Posts: 28
Rep Power: 16 |
I have finally been able to run in parallel on a Slurm cluster with this two-file procedure!
|
|
November 17, 2019, 12:56 |
|
#25 | |
New Member
Daniel Duque
Join Date: Jan 2011
Location: ETSIN, Madrid
Posts: 28
Rep Power: 16 |
Quote:
Yes, this two-file procedure has finally done the trick on a Slurm cluster! (Posted twice, but I've been unable to delete the previous post...) |
||
April 10, 2020, 15:04 |
|
#26 | |
Member
Guanjiang Chen
Join Date: Apr 2020
Location: Bristol, United Kingdom
Posts: 54
Rep Power: 6 |
Quote:
I use "srun --mpi=pmi2 pisoFoam -parallel" or "srun --mpi=pmi2 -n 26 pisoFoam -parallel" |
||
June 7, 2020, 12:51 |
My .sh file works, but it may need some changes for different cases.
|
#27 |
Member
Guanjiang Chen
Join Date: Apr 2020
Location: Bristol, United Kingdom
Posts: 54
Rep Power: 6 |
#!/bin/bash
## Submission script for Cluster
#SBATCH --job-name=test
#SBATCH --time=0:01:00 # hh:mm:ss
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=28
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=1000 # megabytes
#SBATCH --partition=cpu
## system error message output file
## leave %j as it's being replaced by JOB ID number
#SBATCH -e foamjobname.std.err_%j
## system message output file
#SBATCH -o foamjobname.std.out_%j
## send mail after job is finished
#SBATCH --mail-type=end
#SBATCH --mail-user=1224133639@qq.com

# Load modules required for runtime e.g.
source /mnt/storage/scratch/va19337/OpenFOAM/OpenFOAM-6/etc/bashrc WM_LABEL_SIZE=64 WM_MPLIB=OPENMPI FOAMY_HEX_MESH=yes
##### WM_MPLIB = SYSTEMOPENMPI | OPENMPI | SYSTEMMPI | MPICH | MPICH-GM | HPMPI
###### | MPI | FJMPI | QSMPI | SGIMPI | INTELMPI

srun hostname > all_nodes
export I_MPI_PROCESS_MANAGER=mpd
#decomposePar
echo $LD_LIBRARY_PATH > all_nodes1
which mpirun > all_nodes2
echo $I_MPI_PMI_LIBRARY

## run my MPI executable
##mpirun -np 112 pisoFoam -parallel
mpirun --hostfile all_nodes -np 112 icoFoam -parallel |
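When adapting it, the main thing to keep consistent is the rank count: the 112 here comes from 4 nodes x 28 tasks per node, and OpenFOAM expects the same number as numberOfSubdomains in system/decomposeParDict. A small sketch of that check, using standard Slurm environment variables:
Code:
# Inside the job script: total MPI ranks as Slurm allocated them (4 x 28 = 112 here).
NPROCS=$(( SLURM_JOB_NUM_NODES * SLURM_NTASKS_PER_NODE ))
# numberOfSubdomains in system/decomposeParDict should show the same number.
grep numberOfSubdomains system/decomposeParDict
mpirun --hostfile all_nodes -np ${NPROCS} icoFoam -parallel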
|