
Different convergence behavior on different computers with single config file

February 4, 2021, 14:43   #1
Pay D. (pdp.aero)
Senior Member | Join Date: Aug 2011 | Posts: 166

Hi there,

I am running a few of the standard validation and verification test cases.

Here I have the ONERA M6 test case, which is a popular case for viscous boundary-layer interaction and shocks.

I have generated a coarse mesh, which has no problems; everything is fine with it.

Then I have a config file using JST and multigrid, without CFL adaptation.
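For reference, the relevant part of the config looks roughly like this. It is just a minimal sketch with representative SU2 v7-style option names and placeholder values, not necessarily the exact numbers I used:

    % Convective scheme: JST central scheme
    CONV_NUM_METHOD_FLOW= JST
    JST_SENSOR_COEFF= ( 0.5, 0.02 )
    %
    % Fixed CFL, no adaptation
    CFL_NUMBER= 5.0
    CFL_ADAPT= NO
    %
    % Geometric multigrid: 3 coarse levels, V-cycle
    % (MGLEVEL= 0 switches multigrid off entirely)
    MGLEVEL= 3
    MGCYCLE= V_CYCLE
    MG_PRE_SMOOTH= ( 1, 2, 3, 3 )
    MG_POST_SMOOTH= ( 0, 0, 0, 0 )
    MG_CORRECTION_SMOOTH= ( 0, 0, 0, 0 )
    MG_DAMP_RESTRICTION= 0.75
    MG_DAMP_PROLONGATION= 0.75
    %
    % Convergence: stop when log10 of the density residual reaches -10
    CONV_FIELD= RMS_DENSITY
    CONV_RESIDUAL_MINVAL= -10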

Here is the issue that I hope someone can explain:

The same mesh, as mentioned above, and the same config file lead to different convergence behavior on two different computers.

1. Computer 1: mpirun --use-hwthread-cpus -np 16 SU2_CFD config_file
It converges in 5k iterations and gets to 1e-10.

2. Computer 2: mpirun -n 8 SU2_CFD config_file
It converges in almost 3k iterations and gets to 1e-10.

Without multigrid, they both show the same convergence pattern and reach 1e-10 in almost 15k iterations.

But with MG, as shown in the two attached pictures, they both converge nicely and reach the same CL and CD, yet the numbers of iterations are not the same!

Assuming the second computer is faster per core than the first one, I would expect it to show a lower run time (CPU time per iteration) but the same overall number of iterations.

I don't understand why, with the same config file and the same mesh, the number of iterations needed to reach 1e-10 changes from one computer to the other when SU2 runs in parallel with MG switched on.

And I do understand that MG helps us converge in fewer iterations by moving the solution between fine and coarse meshes so that high- and low-frequency errors are damped faster, but in this case the MG settings are the same for both runs. How does MPI affect the multigrid in SU2?

Best,
Pay
Attached Images: 1st_comp.jpg (convergence history, computer 1), 2nd_comp.jpg (convergence history, computer 2)

February 5, 2021, 05:51   #2
Pedro Gomes (pcg)
Senior Member | Join Date: Dec 2017 | Posts: 465

We have geometric multigrid in SU2; the agglomeration algorithm operates on the subdomains / partitions created by ParMETIS.
To my knowledge, the algorithm in ParMETIS does not have any criterion for the quality of the subdomains it creates, which may lead to weird shapes that cannot be coarsened very well...
You will notice that as you increase the number of cores, the agglomeration ratio decreases (the coarse grids have more CVs) and the number of coarse grids that can be created decreases as the agglomeration starts failing.

I could not come up with a solution for the multigrid, and so I reduced the partitioning by implementing hybrid parallelization (MPI+threads): https://su2foundation.org/wp-content...0/06/Gomes.pdf
Slide 3 shows the same behaviour you found; slide 4 tells you how to compile and run the code.
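Roughly, the build and run steps look something like this. This is only a sketch assuming an SU2 v7 meson build, so double-check the exact options against the slides and the SU2 documentation; the rank/thread counts are just an example:

    # Configure and build with OpenMP support enabled
    ./meson.py build -Dwith-omp=true --prefix=$PWD/install
    ./ninja -C build install

    # Hybrid run: few MPI ranks, several threads per rank.
    # Fewer ranks means fewer ParMETIS partitions, so the MG agglomeration works better.
    export OMP_NUM_THREADS=8
    mpirun -n 2 --bind-to socket SU2_CFD -t 8 config_file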

February 5, 2021, 06:41   #3
Pay D. (pdp.aero)
Senior Member | Join Date: Aug 2011 | Posts: 166

Quote: Originally Posted by pcg
We have geometric multigrid in SU2; the agglomeration algorithm operates on the subdomains / partitions created by ParMETIS. [...]

Thank you for sharing the presentation here; it clarifies a lot... I will give it a try to see how it works and will probably report back here later...

Best,
Pay
