Openfoam running extremely slowly using multiple nodes on HPC
Hi Foamers,
I recently compiled OpenMPI and OpenFOAM v2006 under my own account on our HPC cluster. The installation succeeded and the solver runs in parallel. However, simulations are extremely slow when a case runs across multiple nodes; the problem does not appear when the case runs locally or on a single node. For example, when I run the dam-break case on 4 processors, using one node (#SBATCH --nodes=1, #SBATCH --ntasks-per-node=4) takes 47.34 s, while using two nodes (#SBATCH --nodes=2, #SBATCH --ntasks-per-node=2) takes 381.22 s, roughly 8 times longer. So the slowdown evidently comes from inter-node communication. The administrator suggested I try Intel MPI, but I haven't yet because the HPC is temporarily down. I am not sure whether switching to another MPI would solve the problem. Could someone give me some advice? The node information is as follows:
Code:
u01
The command I used:
Code:
mpirun --mca btl_tcp_if_include "ip address" --mca btl '^openib' -np 4 interIsoFoam -parallel > log.interIsoFoam 2>&1
Thanks! wdx |
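For reference, a minimal SLURM batch script matching the two-node setup described above might look like the sketch below. The partition name, module environment, and case path are placeholders, not taken from the thread:

```shell
#!/bin/bash
#SBATCH --job-name=damBreak
#SBATCH --nodes=2                 # the slow configuration from the post
#SBATCH --ntasks-per-node=2       # 2 nodes x 2 tasks = 4 MPI ranks total
#SBATCH --time=01:00:00
#SBATCH --partition=compute       # placeholder partition name

# Load the user-compiled OpenFOAM/OpenMPI environment (path is a placeholder)
source $HOME/OpenFOAM/OpenFOAM-v2006/etc/bashrc

cd $HOME/run/damBreak             # placeholder case directory

# Let SLURM supply the host list; the rank count must equal
# nodes * ntasks-per-node, which $SLURM_NTASKS provides automatically
mpirun -np $SLURM_NTASKS interIsoFoam -parallel > log.interIsoFoam 2>&1
```

Using `$SLURM_NTASKS` instead of a hard-coded `-np 4` keeps the script consistent when the `#SBATCH` settings change.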
OK, I solved the problem.
It was indeed caused by OpenMPI. I installed Intel MPI and the run time on 2 nodes dropped to 73 s. That is still longer than on one node, but it is a significant improvement. |
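In case it helps others making the same switch: in the OpenFOAM.com releases such as v2006, the MPI flavour is selected through the WM_MPLIB variable before rebuilding. A sketch under the assumption that the cluster provides an Intel MPI module (the module name is a placeholder, site-specific):

```shell
# Load the cluster's Intel MPI module (name is a placeholder)
module load intel-mpi

# Select Intel MPI in the OpenFOAM environment, then re-source it
export WM_MPLIB=INTELMPI
source $HOME/OpenFOAM/OpenFOAM-v2006/etc/bashrc

# Pstream and the solvers then need recompiling against the new MPI:
#   cd $WM_PROJECT_DIR && ./Allwmake
```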
I've got the same problem. Did you manage to solve it?
|
Please try a larger cell count (i.e. a finer mesh) so that inter-processor communication (data transfer between processors during the linear solve) does not dominate the computation on each processor (the local linear solves).
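To see why mesh size matters, consider a rough surface-to-volume argument: for a cube-shaped subdomain, the halo cells exchanged each iteration scale with the face area, while local work scales with the volume. A back-of-the-envelope sketch (the 1,000,000-cell mesh is an assumed example, not a figure from this thread):

```shell
# Rough communication-vs-computation estimate for one MPI rank.
# Assume a 1,000,000-cell mesh split over 4 ranks into cube-ish blocks.
cells_per_rank=$((1000000 / 4))

# Side length of an equivalent cubic block (truncated cube root)
side=$(awk -v c="$cells_per_rank" 'BEGIN { printf "%d", c^(1/3) }')

# Halo layer: one cell deep on each of the 6 faces
halo=$((6 * side * side))

# Fraction of cells that must be communicated every iteration
awk -v h="$halo" -v c="$cells_per_rank" \
    'BEGIN { printf "halo cells: %d of %d (%.1f%%)\n", h, c, 100*h/c }'
```

Refining the mesh grows the owned-cell count cubically but the halo only quadratically, so each rank spends relatively more time computing and less time waiting on the interconnect.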
|