CFD Online Discussion Forums

OpenFOAM Running, Solving & CFD > Speedup with GAMG for simpleFoam forward Step

tutlhino June 11, 2007 18:33

Hey all,
I'm currently working on the forward step tutorial case of simpleFoam. For a comparison of the calculation time with CFX, I set the solver to GAMG so that both codes use approximately the same solver. But instead of a speedup, the calculations take more time than with the default PCG/PBiCG solvers. I used the following settings:

tolerance             1e-08;
relTol                0;
smoother              GaussSeidel;
cacheAgglomeration    true;
nCellsInCoarsestLevel 10;
agglomerator          faceAreaPair;
mergeLevels           1;
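For context, these keywords live inside a solver entry in system/fvSolution; a sketch of the full pressure-solver block implied by the post (the field name p and the enclosing braces are assumed here, the values are the poster's):

```
p
{
    solver                GAMG;
    tolerance             1e-08;
    relTol                0;
    smoother              GaussSeidel;
    cacheAgglomeration    true;
    nCellsInCoarsestLevel 10;
    agglomerator          faceAreaPair;
    mergeLevels           1;
}
```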

I tried varying the smallest number of cells, and 10 was the best setting for this case (I tried meshes of different density).
Do you have any suggestions why the time is worse, or how I could improve it? By the way, does it make sense to use a preconditioner? In the tutorial case they don't use one.


msrinath80 June 12, 2007 02:25

Why do you need tolerance 1e-08? Won't 1e-06 do? How bad is your mesh? Do you use non-conformal faces (i.e. hanging nodes)? AMG will give very good results provided the discretization is decent and the case is fairly large (1 million plus is good). You can try to reduce the iteration count by using the DICGaussSeidel smoother. However, it will increase your computational expense a bit.

Personally, I have found a great improvement with the multigrid solver. It beats the pants out of ICCG for my vortex shedding case; up to 3 times as fast!
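The suggested smoother change above is a one-line edit to the GAMG entry; a minimal sketch, assuming the rest of the dictionary matches the poster's settings:

```
p
{
    solver    GAMG;
    smoother  DICGaussSeidel;  // DIC-preconditioned Gauss-Seidel sweeps:
                               // fewer iterations, but each one costs a bit more
    // ... remaining GAMG settings unchanged ...
}
```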

tutlhino June 12, 2007 03:31

The setting is just for a time measurement, so the tolerance doesn't matter as long as I use the same value everywhere. Well, I didn't change the forward step tutorial case; I just increased the number of elements in all directions, doubling it every time. So there should be no hanging nodes; it's a fairly simple mesh. What do you mean by decent discretization, the discretization schemes!?

But I'll try your recommendations and increase the mesh density!

Thanks a lot!

msrinath80 June 12, 2007 03:35

Apologies. I should have elaborated. By decent discretization, I mean purely orthogonal grids with zero skewness. Check out this[1] thread for more details.


msrinath80 June 12, 2007 05:22

Here[1] is a recent OpenFOAM presentation on the multi-grid solver.

Check out slide 29 'Computational examples' to see what I meant earlier.


tutlhino June 24, 2007 10:09

Thanks a lot for the link to the presentation!
I've now run several simulations with up to 2 million elements. But I realized that with an increasing number of elements I also have to change nCellsInCoarsestLevel, because otherwise the time for GAMG is even higher than with other solvers like PCG.

Is there an approximate formula that tells me the best setting of nCellsInCoarsestLevel for a given number of cells? Something like:

nCellsInCoarsestLevel ≈ C * nCells

with C being a constant (e.g. 0.1)?


msrinath80 June 24, 2007 11:26

Check the older posts in the forum[1,2]. If I recall correctly, the recommendation was anywhere from a dozen to a couple hundred cells in serial mode and around 20-30 cells in parallel mode. This is, of course, the recommendation for the AMG solver in OpenFOAM 1.3. I am not sure whether it translates directly to the GAMG solver in OpenFOAM 1.4. On second thought, however, I think you are doing the right thing by experimenting with different values.
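Expressed as dictionary settings, the ranges recalled above would amount to something like this (the particular numbers are illustrative picks from within those ranges, not tested values):

```
// serial run: anything from a dozen to a couple hundred cells
nCellsInCoarsestLevel 100;

// parallel run: around 20-30 cells (per the AMG-era recommendation)
// nCellsInCoarsestLevel 20;
```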



msrinath80 June 24, 2007 11:30

Oh, and by the way, if you use a uniformly spaced, purely orthogonal grid, I've noticed that changing

mergeLevels 1;

to

mergeLevels 2; or even mergeLevels 3;

speeds up the GAMG solver.

alberto June 24, 2007 11:58

How do these changes affect the final result, if they do? What about the stability of the solution?


msrinath80 June 24, 2007 21:44

To my knowledge, multigrid is merely a solution approach. Deep down it also uses conventional solvers such as conjugate gradient. So I don't quite see why the final result would be affected, unless you change tolerance and/or relTol to other values. As for the stability of the numerical solution, it is primarily affected by the choice of numerical schemes used for the spatial and temporal discretization.

Someone please correct me if I'm wrong!
