multiple -parallel cases on single node with single mpirun |
June 3, 2019, 14:11
multiple -parallel cases on single node with single mpirun
#1
Member
Rishikesh
Join Date: Apr 2016
Posts: 63
Rep Power: 10
OpenMPI allows execution of programs in a Multiple Programs Multiple Data (MPMD) setup. The required syntax is

Code:
mpirun -np x <program 1> : -np y <program 2>

When I launch two -parallel OpenFOAM cases this way, the run aborts with:

Code:
[0]
[0]
[0] --> FOAM FATAL ERROR:
[0] number of processor directories = 2 is not equal to the number of processors = 4
[0] FOAM parallel run exiting
[0]

The command I used is:

Code:
mpirun \
    -np 2 interFoam -parallel -case $PBS_O_WORKDIR/2pIFaxis > $PBS_O_WORKDIR/2pIFaxis/log2pIF : \
    -np 2 interFlow -parallel -case $PBS_O_WORKDIR/2pISOaxis > $PBS_O_WORKDIR/2pISOaxis/log2pIso

Each case folder has 2 processor* directories, so a total of 4 processors is needed. If I run multiple SERIAL cases in this manner, there is no issue, and each case runs properly when launched individually. Why does this miscommunication with the solver happen under -parallel, and how can I resolve it? If I understand correctly, OMPI is supposed to launch one solver and then the other according to its default processor allotment policy (--map-by core). Otherwise, if I use two separate mpirun commands, the two programs seem to compete for resources on the same node (wallTime is almost exactly 2*executionTime).

Details: OpenFOAM-6 running on an HPC cluster with OpenMPI-1.10.4

I used the following for reference: https://www.open-mpi.org/faq/?category=running#mpmd-run

Last edited by mrishi; June 4, 2019 at 07:39.
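The error above is OpenFOAM comparing the number of processor* directories in the case against the number of MPI ranks the solver sees. The consistency check itself can be sketched in plain shell; `check_case` and the throwaway demo directory below are illustrative helpers, not part of OpenFOAM:

```shell
#!/bin/sh
# Hypothetical helper: verify that a case directory contains exactly as many
# processor* directories as the rank count you plan to pass to -np.
check_case() {
    case_dir=$1
    np=$2
    ndirs=$(find "$case_dir" -maxdepth 1 -type d -name 'processor*' | wc -l)
    if [ "$ndirs" -ne "$np" ]; then
        echo "MISMATCH: $ndirs processor directories, -np $np"
        return 1
    fi
    echo "OK: $ndirs processor directories match -np $np"
}

# Demo with a throwaway case directory holding 2 processor dirs,
# mirroring the situation in the error message above (2 dirs, 4 ranks):
tmp=$(mktemp -d)
mkdir "$tmp/processor0" "$tmp/processor1"
check_case "$tmp" 2          # prints OK: 2 processor directories match -np 2
check_case "$tmp" 4 || true  # prints MISMATCH: 2 processor directories, -np 4
rm -rf "$tmp"
```

Each decomposed case passes this check on its own with -np 2, which is consistent with the cases running fine individually.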
June 3, 2019, 14:26
#2
Member
Rishikesh
Join Date: Apr 2016
Posts: 63
Rep Power: 10
One possible way I have managed to do this is using multiple mpirun commands:

Code:
mpirun --bind-to none -np 2 interFoam -parallel -case $PBS_O_WORKDIR/2pIFaxis > $PBS_O_WORKDIR/2pIFaxis/log2pIF &
mpirun --bind-to none -np 2 interFlow -parallel -case $PBS_O_WORKDIR/2pISOaxis > $PBS_O_WORKDIR/2pISOaxis/log2pIso

However, the method mentioned in my first post seems more elegant, with the possibility of binding processes to sockets, whereas the lifeblood of this second method is the fact that processes can float around freely. I would appreciate any help in making the first method work (or advice on the best way to go about this).
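One caveat with backgrounding the first mpirun is that the job script must not exit before both runs finish. A minimal sketch of that pattern, with a `sleep` standing in for the real mpirun commands (the `run_job` helper and case names are illustrative, not actual OpenFOAM tooling):

```shell
#!/bin/sh
# Start both runs in the background, then wait on each PID so the
# job script only exits when both solvers have finished.
run_job() {
    # In the real PBS script this would be e.g.:
    # mpirun --bind-to none -np 2 interFoam -parallel -case "$1" > "$1/log" 2>&1
    sleep 1
}

run_job caseA & pid1=$!
run_job caseB & pid2=$!

wait "$pid1"; status1=$?
wait "$pid2"; status2=$?
echo "caseA exit=$status1 caseB exit=$status2"   # prints caseA exit=0 caseB exit=0
```

Waiting on each PID separately also preserves the individual exit codes, so a failed solver can be detected per case rather than lost in the background.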
Tags: mpirun, parallel computation