Home > Forums > Software User Forums > OpenFOAM > OpenFOAM Running, Solving & CFD

multiple -parallel cases on single node with single mpirun


mrishi, June 3, 2019, 13:11
Open MPI allows execution of programs in a Multiple Program, Multiple Data (MPMD) setup. The required syntax is:
mpirun -np x <program 1> : -np y <program 2>
If I supply program 1 and program 2 as two separate solvers, along with the relevant paths to their cases (located in subdirectories), I get the following error:
[0] number of processor directories = 2 is not equal to the number of processors = 4
FOAM parallel run exiting
My job submission command is:
mpirun \
    -np 2 interFoam -parallel -case $PBS_O_WORKDIR/2pIFaxis > $PBS_O_WORKDIR/2pIFaxis/log2pIF : \
    -np 2 interFlow -parallel -case $PBS_O_WORKDIR/2pISOaxis > $PBS_O_WORKDIR/2pISOaxis/log2pIso
where $PBS_O_WORKDIR points to the directory from which the job was submitted to the cluster's scheduler (hence no hostfile is supplied).

Each case folder has 2 processor* directories, so a total of 4 processors is needed. If I run multiple SERIAL cases in this manner, there is no issue, and each parallel case runs properly on its own. Why does this mismatch occur when the solvers are run with -parallel, and how can it be resolved?

If I understand correctly, Open MPI is supposed to launch one solver and then the other according to its default processor-allotment policy (--map-by core).

Otherwise, if I use two separate mpirun commands, the two programs seem to compete for resources on the same node (wallTime is almost exactly 2*executionTime).

Details: OpenFOAM-6 running on an HPC cluster with OpenMPI-1.10.4.
I used the following for reference:

Last edited by mrishi; June 4, 2019 at 06:39.

mrishi, June 3, 2019, 13:26
One possible way I have managed to do this is using multiple mpirun commands:

mpirun --bind-to none -np 2 interFoam -parallel -case $PBS_O_WORKDIR/2pIFaxis > $PBS_O_WORKDIR/2pIFaxis/log2pIF &
mpirun --bind-to none -np 2 interFlow -parallel -case $PBS_O_WORKDIR/2pISOaxis > $PBS_O_WORKDIR/2pISOaxis/log2pIso
where --bind-to none seems critical: it leaves the processes free to migrate between cores. Without this argument, both solvers appear to attach to the same cores, which shows up as wallTime = 2*executionTime in the log files (as noted above).
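The "attached to the same cores" symptom can be inspected directly on the node. A Linux-only Python sketch (using os.sched_getaffinity, which is standard library but unrelated to mpirun itself) of how to check which cores a process is allowed to run on:

```python
import os

# Linux-only: report which CPU cores this process may be scheduled on.
# If mpirun binds two solvers to the same core set, their allowed sets
# overlap, and two CPU-bound processes sharing one core take roughly
# twice the wall time, matching wallTime = 2*executionTime in the logs.
allowed = os.sched_getaffinity(0)  # 0 = the current process
print(f"allowed cores: {sorted(allowed)}")
```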

However, the method mentioned in my first post seems more elegant, since it allows binding processes to sockets, whereas this second method depends entirely on processes being free to float between cores.

I'd appreciate any help in making the first method work (or advice on the best way to go about this).


Tags: mpirun, parallel computation

