CFD Online Discussion Forums

pdp.aero February 4, 2021 14:43

Different convergence behavior on different computers with single config file
 
2 Attachment(s)
Hi there,

I am running a few standard test cases that are commonly used for validation and verification.

Here I have the ONERA M6 test case, which is a popular case for shocks and viscous boundary layer interaction.

I have generated a coarse mesh, which has no problems; everything is fine with it.

Then I have a config file using JST and multigrid, without CFL adaptation.

Here is the issue I hope someone could explain:

The same mesh (as mentioned above) and the same config file, run on two different computers, lead to different convergence behavior.

1. Computer 1: mpirun --use-hwthread-cpus -np 16 SU2_CFD config_file
It converges in about 5k iterations and reaches 1e-10.

2. Computer 2: mpirun -n 8 SU2_CFD config_file
It converges in almost 3k iterations and reaches 1e-10.

Without multigrid, they both show the same convergence pattern and reach 1e-10 in almost 15k iterations.

But with MG, as shown in the two attached pictures, they both converge nicely and reach the same CL and CD, yet the number of iterations is not the same!

Assuming the second computer is faster per core, I would expect it to show a lower run time (CPU time per iteration) but the same overall number of iterations.

I don't understand why, with the same config file and the same mesh, the number of iterations needed to reach 1e-10 changes from one computer to the other when SU2 is run in parallel with MG switched on.

I do understand that MG helps us converge faster, with fewer iterations, by cycling the solution between the fine and coarse meshes so that both high- and low-frequency errors are damped quickly; but in this case the MG settings are the same for both runs. How does MPI affect the multigrid in SU2?
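
For context, here is a minimal two-grid sketch in Python for a 1D Poisson problem. It only shows the generic textbook mechanism (smoothing damps the high-frequency error, the coarse-grid correction removes the low-frequency part); it is not SU2's agglomeration multigrid, and the grid size, damped-Jacobi smoother, and transfer operators are arbitrary choices for illustration.

Code:

import numpy as np

def smooth(u, f, h, sweeps):
    # Damped Jacobi sweeps for -u'' = f with zero Dirichlet boundaries.
    # Efficient only at damping the high-frequency (oscillatory) error.
    for _ in range(sweeps):
        u[1:-1] += 0.8 * (0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]) - u[1:-1])
    return u

def two_grid_cycle(u, f, h):
    u = smooth(u, f, h, 3)                                      # pre-smoothing
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h**2   # fine-grid residual
    rc = r[::2].copy()                                          # restrict to the coarse grid (injection)
    ec = smooth(np.zeros_like(rc), rc, 2 * h, 50)               # "solve" the coarse problem with cheap sweeps
    e = np.zeros_like(u)
    e[::2] = ec                                                 # prolongate: copy coarse values...
    e[1:-1:2] = 0.5 * (e[:-2:2] + e[2::2])                      # ...and interpolate in between
    u += e                                                      # coarse-grid correction (low frequencies)
    return smooth(u, f, h, 3)                                   # post-smoothing

n = 65
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.sin(np.pi * x) + np.sin(16 * np.pi * x)                  # low- and high-frequency content
u = np.zeros(n)
for _ in range(20):
    u = two_grid_cycle(u, f, h)

Run as-is, a handful of cycles drives the error down by several orders of magnitude, while the smoother alone would need thousands of sweeps for the low-frequency component; that is the speed-up MG is expected to deliver regardless of how the domain is split across MPI ranks.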

Best,
Pay

pcg February 5, 2021 05:51

We have geometric multigrid in SU2; the agglomeration algorithm operates on the subdomains / partitions created by ParMETIS.
To my knowledge the algorithm in ParMETIS does not have any criteria for the quality of the subdomains it creates, which may lead to weird shapes that cannot be coarsened very well...
You will notice that as you increase the number of cores, the agglomeration ratio decreases (coarse grids have more CVs) and the number of coarse grids that can be created decreases as the agglomeration starts failing.
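
As a rough illustration of that last effect (a toy sketch only, not SU2's agglomeration code): agglomerate a 1D row of cells pairwise, but never across partition boundaries, and count how many coarse levels can be built before some partition runs out of cells. The cell count and the pairwise coarsening rule are arbitrary assumptions; real 3D unstructured agglomeration degrades faster because of the irregular partition shapes.

Code:

# Toy model, not SU2 code: pairwise agglomeration that never crosses
# partition boundaries, to show why more partitions leave fewer coarse levels.

def coarse_levels(n_cells, n_partitions, min_cells=2):
    # split the cells as evenly as possible into contiguous partitions
    sizes = [n_cells // n_partitions + (1 if i < n_cells % n_partitions else 0)
             for i in range(n_partitions)]
    levels = 0
    while all(s >= min_cells for s in sizes):
        # agglomerate pairs of cells inside each partition only
        sizes = [(s + 1) // 2 for s in sizes]
        levels += 1
    return levels

for p in (1, 2, 4, 8, 16, 32, 64):
    print(f"{p:3d} partitions -> {coarse_levels(5000, p)} possible coarse levels")

The trend is what matters: in this toy, every doubling of the partition count removes roughly one achievable coarse level, so the multigrid hierarchy (and therefore the convergence history) ends up depending on the number of MPI ranks.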

I could not come up with a solution for the multigrid, and so I reduced the partitioning by implementing hybrid parallelization (MPI+threads): https://su2foundation.org/wp-content...0/06/Gomes.pdf
Slide 3 shows the same behaviour you found; slide 4 tells you how to compile and run the code.

pdp.aero February 5, 2021 06:41

Quote:

Originally Posted by pcg (Post 795371)
We have geometric multigrid in SU2; the agglomeration algorithm operates on the subdomains / partitions created by ParMETIS.
To my knowledge the algorithm in ParMETIS does not have any criteria for the quality of the subdomains it creates, which may lead to weird shapes that cannot be coarsened very well...
You will notice that as you increase the number of cores, the agglomeration ratio decreases (coarse grids have more CVs) and the number of coarse grids that can be created decreases as the agglomeration starts failing.

I could not come up with a solution for the multigrid, and so I reduced the partitioning by implementing hybrid parallelization (MPI+threads): https://su2foundation.org/wp-content...0/06/Gomes.pdf
Slide 3 shows the same behaviour you found; slide 4 tells you how to compile and run the code.

Thank you for sharing the presentation here; it clarifies a lot. I'll give it a try to see how it works and will probably write back here later...

Best,
Pay

