CFD Online Discussion Forums (https://www.cfd-online.com/Forums/)
-   SU2 (https://www.cfd-online.com/Forums/su2/)
-   -   Multigrid Stability Issues (https://www.cfd-online.com/Forums/su2/143813-multigrid-stability-issues.html)

ThomasHermann November 3, 2014 11:34

Multigrid Stability Issues
 
What is the best way to diagnose multigrid stability issues? We would like to develop a public model that demonstrates the issues, can be used to diagnose the issues, and facilitates making a more robust multigrid solver. We're running version 3.2.3, but have been tracking SU2 since version 3.2.0.

We've been evaluating SU2 to support analysis of a business jet in transonic flow. We'd like to use multigrid to improve and accelerate the solution convergence. In our evaluation, we've run into stability issues with multigrid and are not sure at this point how to diagnose the issues. We're using the following models to evaluate SU2.
  • Onera M6 hybrid model with 258,969 cells
  • Proprietary wing-only semi-span model with 5.8 million cells
  • Proprietary wing-fuselage-nacelle symmetric model with 9.5 million cells

We've encountered the following issues.

Sensitivity to Mesh

Multigrid parameters that work for the proprietary wing-only semi-span model do not translate to the Onera M6, and vice versa. Parameters that work for the wing-only semi-span model do not translate to the wing-fuselage-nacelle model, even though it has an identical wing mesh. The sensitivity is strong enough that parameters that work on the wing-only mesh fail on a volume mesh generated from an identical surface mesh with refined values for Y+ and boundary-layer growth rate.
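For reference, this is the kind of multigrid block we have been varying between meshes (values are illustrative only; option names follow the v3.2.x config template, so treat them as assumptions about your local version):

```
% ---------------- MULTIGRID PARAMETERS (illustrative values) ----------------%
% Number of multigrid levels (0 = no multigrid)
MGLEVEL= 3
% Multigrid cycle type (V_CYCLE, W_CYCLE, FULLMG_CYCLE)
MGCYCLE= V_CYCLE
% Pre-smoothing iterations per grid level
MG_PRE_SMOOTH= ( 1, 2, 3, 3 )
% Post-smoothing iterations per grid level
MG_POST_SMOOTH= ( 0, 0, 0, 0 )
% Jacobi smoothing of the coarse-grid correction
MG_CORRECTION_SMOOTH= ( 0, 0, 0, 0 )
% Damping factors for residual restriction and correction prolongation
MG_DAMP_RESTRICTION= 0.75
MG_DAMP_PROLONGATION= 0.75
```

Lowering the damping factors and reducing MGLEVEL are the first things we try when a mesh misbehaves, but the values that stabilize one mesh do not carry over to the others.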

Sensitivity to Number of Partitions

We've observed that changing the number of mesh partitions affects convergence: a model that converges on a given number of cores will diverge when run on a different number of cores, with no other changes. A colleague has repeatedly demonstrated this with the RANS flat-plate model.
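To compare runs across core counts objectively, we use a small script along these lines. It reads one residual column from an SU2 CSV history file and flags a run as diverged if the log10 residual climbs back up past its running minimum. The column name "Res_Flow[0]" and the file layout are assumptions that may differ between SU2 versions.

```python
# Sketch: flag divergence from an SU2 residual history (assumed CSV layout).
import csv

def is_diverged(residuals, rise_tol=2.0):
    """True if the log10 residual rises rise_tol orders above its running minimum."""
    lowest = residuals[0]
    for r in residuals:
        lowest = min(lowest, r)
        if r > lowest + rise_tol:
            return True
    return False

def read_residuals(path, column="Res_Flow[0]"):
    """Read one residual column from an SU2 CSV history file (column name assumed)."""
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        return [float(row[column].strip().strip('"')) for row in reader]

# A run that drops five orders and then blows up is flagged; a monotone drop is not.
print(is_diverged([-1.0, -3.0, -5.0, -4.5, -2.5]))  # True
print(is_diverged([-1.0, -3.0, -5.0, -6.0, -7.0]))  # False
```

Running the same check on the same case at, say, 4 and 8 partitions makes the partition sensitivity easy to document.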

We are evaluating SU2 on the following platforms.
  • Workstation: RHEL 7, 8 cores, 32 GB of RAM
  • HPC cluster: RHEL 6, SU2 compiled against the Rocks OpenMPI library

HPC Performance Issue

On the HPC cluster, we see no performance improvement when the analysis is distributed across more than one node. Each node has 32 CPUs and 64 GB of RAM. We'd like to determine whether this is a limitation of the interconnect, a bottleneck in the communication code, or some combination of the two.
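A strong-scaling table is how we've been framing the question. All timings below are hypothetical placeholders; the point is only the shape of the calculation: near-ideal efficiency within one node followed by a cliff at two or more nodes would point at the interconnect or the halo-exchange communication rather than the solver itself.

```python
# Sketch: strong-scaling speedup/efficiency from wall-clock timings.
def strong_scaling(timings, base_cores):
    """timings: {cores: seconds}. Returns {cores: (speedup, efficiency)}."""
    t_base = timings[base_cores]
    return {n: (t_base / t, (t_base / t) * (base_cores / n))
            for n, t in sorted(timings.items())}

# Hypothetical wall-clock times for a fixed mesh (1 node = 32 cores).
timings = {32: 1000.0, 64: 980.0, 128: 975.0}
for cores, (s, e) in strong_scaling(timings, 32).items():
    print(f"{cores:4d} cores: speedup {s:5.2f}, efficiency {e:6.1%}")
```

An efficiency near 50% at 64 cores, as in the made-up numbers above, would mean the second node contributes almost nothing.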

Thanks,

Tom H.

ThomasHermann November 5, 2014 17:18

This question was addressed on This Week in SU2 (11-4-2014).
