slower performance when running more than one job |
April 6, 2021, 03:06 | #1
slower performance when running more than one job
Member
Jose Daniel
Join Date: Jun 2020
Posts: 36
Rep Power: 6
Hello all,
I am running several jobs using: parallel_computation.py -n 6 -f solver_settings > solver_out but when I run more of these (making sure I don't go over the number of cores available) it slows down drastically. Do you know how to make the cores independent from each other? Thank you, JD |
April 6, 2021, 04:57 | #2
Senior Member
Pedro Gomes
Join Date: Dec 2017
Posts: 466
Rep Power: 14
What is drastically? Do you get more throughput (simulations per hour, let's say) or less?
Assuming that drastically is really drastic, you probably need to look at how MPI binds processes to cores; look into the --bind-to option. That said, you will never get perfect scaling, i.e., two jobs on the same machine running in the same time it takes to run one. CFD in general is bound by memory bandwidth, not by compute power. Once you get to 2-3 cores per memory channel you will not be able to go much faster, and you may even go slower overall, because there will be more pressure on the CPU cache and the CPU will start running at lower frequency.
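For example, with Open MPI the launch would look something like this (just a sketch, the config file name is a placeholder; MPICH spells the flag -bind-to instead):
Code:
mpirun -n 6 --bind-to core --report-bindings SU2_CFD your_config.cfg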
April 7, 2021, 03:00 | #3
Member
Jose Daniel
Join Date: Jun 2020
Posts: 36
Rep Power: 6
Maybe drastically was a little dramatic. The test I have been running (200k cells, 2D, steady, incompressible RANS, SA with transition, Euler implicit) on 6 cores has an iteration time of around 0.58 s. If I start running another one with the same characteristics, the iteration time goes up to 1.35 s...
I am using MPICH, as recommended on the SU2 page, compiling the code with:
Code:
./meson.py build -Denable-autodiff=true -Dwith-mpi=enabled
and with the MPI dependency set to:
Code:
mpi_dep = [dependency('mpich', required : get_option('with-mpi'))]
Thanks!
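In case it is relevant: one way to confirm which MPI library the binary actually linked against (assuming a Linux build with SU2_CFD on the PATH) would be:
Code:
ldd $(which SU2_CFD) | grep -i mpi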
April 7, 2021, 06:08 | #4
Senior Member
Pedro Gomes
Join Date: Dec 2017
Posts: 466
Rep Power: 14
It is a CLI argument of "mpirun", not an SU2 option.
https://www.open-mpi.org/doc/v3.0/man1/mpirun.1.php
I don't know how to pass it through parallel_computation.py, but that script is not really needed with v7 anyway; you can launch SU2_CFD directly with mpirun. Something else that may explain the slowdown is if you are using virtual cores (hyper-threading). That is generally not good for CFD; you should only use physical cores. There are good discussions about these hardware aspects in the Hardware section of CFD Online.