
SU2 code scaling poorly on multiple nodes



July 18, 2018, 05:22   #1
Samirs (Samir Shaikh)
New Member
Join Date: Jul 2018
Posts: 6
Hi All,

I have successfully compiled the parallel version of SU2 on our HPC cluster, which has Intel Broadwell nodes. I modified parallel_computation.py so that it builds the mpirun command for running SU2_CFD in parallel. On a single node I see linear scaling with the number of MPI processes, but when I run the same script in batch mode through SLURM on multiple nodes, performance degrades.
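
A quick way to confirm how SLURM actually places the MPI ranks across nodes is a small check like the sketch below (this assumes mpi4py is available on the cluster, and the launcher line in the comment is only illustrative):

Code:
# Print where each MPI rank lands; launch with the same mpirun/srun line
# used for SU2_CFD, e.g. "srun -n 56 python rank_check.py" (illustrative).
from mpi4py import MPI
import socket

comm = MPI.COMM_WORLD
print('rank %d of %d running on %s'
      % (comm.Get_rank(), comm.Get_size(), socket.gethostname()))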

I tried a simulation of the turbulent ONERA M6 test case.

Thanks in advance for your suggestions and help in this regard.

Attached is the SLURM script used to submit the job.
Attached Files
slurm_script_su2.txt (667 Bytes)

August 25, 2018, 19:15   #2
hlk (Heather Kline)
Senior Member
Join Date: Jun 2013
Posts: 309
You may want to refer to SU2_PY/SU2/run/interface.py to see how the MPI command is built and called from the Python scripts, and to make sure that it works with your cluster. You can also set SU2_MPI_COMMAND in your config file to use a customized command without needing to modify the Python scripts.

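As a rough illustration of the pattern used there, the launcher is a template string with %i for the process count and %s for the executable. This is a paraphrased sketch, not the verbatim source, so check the interface.py shipped with your SU2 version; the config file name and core count below are placeholders:

Code:
# Paraphrased sketch of how SU2_PY/SU2/run/interface.py assembles the
# parallel command; switching from mpirun to srun is a one-line change.
import os

SU2_RUN = os.environ.get('SU2_RUN', '')  # directory holding the SU2 binaries
mpi_command = 'mpirun -n %i %s'          # default launcher template
# on a SLURM cluster you might use:  mpi_command = 'srun -n %i %s'

def build_command(binary, processes=0):
    """Prefix the SU2 bin path, then wrap in the MPI launcher if parallel."""
    the_command = os.path.join(SU2_RUN, binary)
    if processes > 1:
        the_command = mpi_command % (processes, the_command)
    return the_command

# placeholder config name and core count (e.g. two 28-core Broadwell nodes):
print(build_command('SU2_CFD config_onera_m6.cfg', 56))
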
Sometimes multiple nodes, each with several processors, scale worse than the same number of processors within a single node, because information now has to travel between nodes rather than staying within one node; what sometimes surprises people is that even the length of the cable connecting the nodes matters. On most modern clusters, however, the difference shouldn't be so extreme that it prevents you from benefiting from multiple nodes. If the difference is extreme, try running other parallel programs that require communication between processes, or contact your system administrators about what they expect for inter-node versus intra-node communication and for tips on compiling in a way that is optimized for the specific cluster architecture.

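One such probe is a minimal MPI ping-pong latency test, sketched below (assuming mpi4py is available; the launch lines in the comments are illustrative, so adapt them to your scheduler):

Code:
# Minimal MPI ping-pong: run once with both ranks on one node and once
# with one rank per node, then compare the round-trip times, e.g.:
#   mpirun -n 2 python pingpong.py                    # same node
#   srun -N 2 --ntasks-per-node=1 python pingpong.py  # two nodes
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
buf = np.zeros(1, dtype='d')  # tiny message, so the timing is latency-bound
reps = 10000

comm.Barrier()
t0 = MPI.Wtime()
for _ in range(reps):
    if rank == 0:
        comm.Send(buf, dest=1, tag=0)
        comm.Recv(buf, source=1, tag=1)
    elif rank == 1:
        comm.Recv(buf, source=0, tag=0)
        comm.Send(buf, dest=0, tag=1)
t1 = MPI.Wtime()

if rank == 0:
    print('average round trip: %.2f microseconds' % ((t1 - t0) / reps * 1e6))

As a rough yardstick, inter-node round trips over InfiniBand are typically a few microseconds versus under a microsecond within a node; numbers orders of magnitude larger than that suggest MPI is falling back to a slow transport such as TCP over Ethernet.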


Tags
intel broadwell, intel compiler, su2 aerodynamic noise, su2 examples





