
Runs on multiple nodes are 2x slower than on one node


January 16, 2024, 15:34   #1
Ivan Pombo
New Member
Join Date: Jan 2024
Posts: 1
Dear All,

We have been testing locally with the standard GitHub installation of REEF3D on Linux machines, using the "flow around a circular pier" tutorial with the CFD solver to benchmark runs with different computational configurations.

We have tested running this simulation in two different settings:
- single node: a single Intel Xeon machine with 30 CPU cores.
- two nodes: two Intel Xeon machines with 30 CPU cores each, 60 cores in total.

Running the same simulation in each setting, changing only the M 10 parameter (the number of processors), we get an absurdly long computational time with the two-node setting.
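
Concretely, the only difference between the two runs is this line in the ctrl.txt control file:

Code:
M 10 30    (single node)
M 10 60    (two nodes)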

In practice, the simulation took 1 h 30 min on a single node, while on two nodes it takes more than four hours. While we did not expect it to be faster, we did not expect it to be significantly slower either, so we suspect our installation procedure is somehow wrong for simulations running on multiple nodes.

To clarify, we have tried smaller machines and different REEF3D simulations and observed the same behavior. Moreover, we have tested the cluster with OpenFOAM, and there everything seems to work as expected.

We are using Apptainer, launching the MPI job from outside the container to all the machines, and one of the machines hosts an NFS server.
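
For reference, our launch looks roughly like this (the hostfile, image name, and binary path below are placeholders, not our exact setup):

Code:
mpirun -np 60 --hostfile hosts.txt \
    apptainer exec reef3d.sif /usr/local/bin/REEF3D

That is, the "hybrid" model where the host's mpirun starts one containerized rank per slot, so the MPI on the host and the MPI inside the image have to be compatible.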

Any help in figuring this out is appreciated! Thanks in advance!

January 16, 2024, 16:40   #2
Alexander Hanke
Member
Join Date: Dec 2023
Location: Trondheim
Posts: 42
Each case has a cell count per partition below which the CFD calculation takes less time than the communication between partitions, so adding more ranks makes the run slower. Communication speed usually ranks in this order: within a CCD > within a CPU > across sockets >>> to another computer.
So you can either use just one CPU, or even only part of one (e.g. SLURM allows partial usage of a node), or increase the number of cells by decreasing the grid spacing if you want to fully utilise your machines; a sketch of the partial-node option follows below.
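
For the partial-node route, a SLURM batch script along these lines would do it (partition name, task count, and paths are placeholders for your setup):

Code:
#!/bin/bash
#SBATCH --nodes=1            # stay on a single machine
#SBATCH --ntasks=16          # use only 16 of the 30 cores
#SBATCH --partition=normal   # cluster-specific
mpirun -np 16 apptainer exec reef3d.sif /usr/local/bin/REEF3D

Remember to set M 10 in ctrl.txt to the same task count.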

Hope that helps you.
__________________
Alexander Hanke
Team REEF3D
www.reef3d.com

January 17, 2024, 11:19   #3
Hans Bihs
Super Moderator
Join Date: Jun 2009
Location: Trondheim, Norway
Posts: 403
Hi Ivan,

the tutorial cases have quite coarse resolution, so possibly the total cell count is too low to ensure good scaling. Can you try with a larger number of cells (ca. 10 000 cells per core at a minimum, i.e. at least 60 × 10 000 = 600 000 cells for your two-node setting)? How many cells are you running OpenFOAM with?
__________________
Hans Bihs
Team REEF3D
www.reef3d.com

February 7, 2024, 03:34   #4
Felix S.
Member
Join Date: Feb 2021
Location: Germany, Braunschweig
Posts: 88
Hey there,

I just wanted to add something I experienced on an HPC system where I ran REEF3D. Running REEF3D with an OpenMPI build compiled by gcc resulted in poor scaling, similar to your findings.

However, after compiling REEF3D and hypre with icc and Intel MPI, scaling was just fine. Maybe recompiling with a different compiler and MPI library could therefore solve your problem? A rough outline of that rebuild is below.
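
In case it is useful, the rebuild went roughly along these lines (the module names are cluster-specific placeholders, and the exact REEF3D build steps depend on the version you have):

Code:
module load intel intelmpi        # cluster-specific
# build hypre with the Intel toolchain
cd hypre/src
./configure CC=mpiicc CXX=mpiicpc --prefix=$HOME/hypre-intel
make install
# then point the REEF3D build at this hypre and recompile,
# following the compilation guide for your REEF3D version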

Good luck anyway!



Tags
mpi, multiple nodes, reef3d



