CFD Online Discussion Forums

CFD Online Discussion Forums (https://www.cfd-online.com/Forums/)
-   OpenFOAM Running, Solving & CFD (https://www.cfd-online.com/Forums/openfoam-solving/)
-   -   pressure eq. "converges" after few time steps (https://www.cfd-online.com/Forums/openfoam-solving/81461-pressure-eq-converges-after-few-time-steps.html)

FelixL February 7, 2011 09:08

Quote:

Originally Posted by vkrastev (Post 293974)
Hi Felix,
I've tried both PCG and GAMG as solvers for p, but my experience goes in a different direction than yours... In particular, if the relTol parameter for p starts to be quite low (I usually use something like 10^-04/10^-05), the GAMG solver is much faster than the PCG one. Can you explain for which cases you have found the opposite behavior?
Thanks

V.

Hello, V,


these were simple aerodynamic cases (2D, incompressible, RANS), mostly on hexa meshes. The case where I looked more deeply into the performance of the linear solvers was a ground-effect study of two interacting airfoils. The speedup I obtained with PCG was impressive: I was able to reduce the simulation time from 3 days (finest grid) with GAMG to less than 12 hours with PCG, with otherwise identical settings.

Why do you use such low relative tolerances - any particular reason behind that?


Greetings,
Felix.

francescomarra February 7, 2011 09:08

Quote:

Hi Franco, and thanks for your comment. How can I set the max iteration number? Is there a missing parameter on my fvSolution where can I set it?
Hi Maddalena,

It should be possible by adding, in fvSolution, the following keyword:
Code:

maxIter  300;
For instance:
Code:

solvers
{
    p
    {
        solver          PCG;
        preconditioner  DIC;
        tolerance       1e-06;
        relTol          0.1;
        maxIter         1;
    }
...

1000 iterations is the default value.

Regards,
Franco

tcarrigan February 7, 2011 09:19

Just curious, have you tried using leastSquares for the gradScheme?

I did some 2D calculations for a NACA airfoil using both structured and unstructured grids. I too suffered convergence issues when running the calculation for the unstructured case. However, switching the gradScheme to a cellLimited leastSquares happened to solve the problem.

Let me know if this works.
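
(For anyone unfamiliar with the syntax: a cellLimited leastSquares gradient is selected in fvSchemes along the following lines; the limiter coefficient of 1 and the exact entries are only illustrative, not necessarily the settings I used.)
Code:

gradSchemes
{
    default         Gauss linear;
    grad(U)         cellLimited leastSquares 1;    // limited least-squares gradient for U
    grad(p)         cellLimited leastSquares 1;    // same idea for p
}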

maddalena February 7, 2011 09:58

3 Attachment(s)
Ok, here are the first results using all you suggested above. I have not reached steady state yet, but at least I have a sort of solution now. :)
Everything goes fine until some bounding of epsilon and k appears. It prevents my pressure residual from getting under 0.1. How can I solve it? By using a lower relaxation factor on them?
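
(For reference, I understand the under-relaxation factors are set in the relaxationFactors subdictionary of fvSolution; the values below are purely illustrative, not my actual settings.)
Code:

relaxationFactors
{
    p               0.3;    // pressure
    U               0.7;    // momentum
    k               0.5;    // lowered values for the turbulence quantities
    epsilon         0.5;
}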

Quote:

Originally Posted by tcarrigan (Post 293988)
I did some 2D calculations for a NACA airfoil using both structured and unstructured grids. I too suffered convergence issues when running the calculation for the unstructured case. However, switching the gradScheme to a cellLimited leastSquares happened to solve the problem.

Travis, what is the advantage of a cellLimited leastSquares gradScheme?

mad

vkrastev February 7, 2011 10:11

Quote:

Originally Posted by FelixL (Post 293985)
Hello, V,


these were simple aerodynamic cases (2D, incompressible, RANS), mostly on hexa meshes. The case where I looked more deeply into the performance of the linear solvers was a ground-effect study of two interacting airfoils. The speedup I obtained with PCG was impressive: I was able to reduce the simulation time from 3 days (finest grid) with GAMG to less than 12 hours with PCG, with otherwise identical settings.

Why do you use such low relative tolerances - any particular reason behind that?


Greetings,
Felix.

Well, your results are really interesting... However, my cases are slightly different (incompressible and RANS too, but 3D, with a single object placed near a solid ground, and with tetra/prism meshes), and I have observed that even for not-so-low relTol values (about 10^-02) GAMG does a much faster job than PCG. About the relTol values, I have to admit that I haven't made any systematic study of an optimal value, but as the combination of a smooth solver (for U and the turbulent quantities) and GAMG (for p) allows me to decrease them without much additional cost, I prefer to keep them a little lower than strictly necessary (10^-02/10^-03 for U and the turbulent quantities, 10^-04/10^-05 for p).
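
(In fvSolution terms, the smooth solver + GAMG combination I mean looks roughly like the sketch below; the exact tolerances and GAMG entries are case-dependent and only indicative.)
Code:

solvers
{
    p
    {
        solver          GAMG;
        smoother        GaussSeidel;
        agglomerator    faceAreaPair;
        cacheAgglomeration on;
        nCellsInCoarsestLevel 1000;
        mergeLevels     1;
        tolerance       1e-07;
        relTol          1e-04;      // tighter relative tolerance for p
    }

    U
    {
        solver          smoothSolver;
        smoother        GaussSeidel;
        nSweeps         1;
        tolerance       1e-07;
        relTol          1e-02;      // looser relative tolerance for U and the turbulence quantities
    }
}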

Best Regards

V.

arjun February 8, 2011 00:02

Quote:

Originally Posted by vkrastev (Post 293972)
and then I've tried it for my runs: with domains of a few millions of cells (1.5 up to 5 millions) I can tell you that passing from 1000 to 50 cells does not have any effect on the solver efficiency, but this doesn't prove that such a criterion is universally correct or not .....

There is some confusion. I think you mean to say that you did not observe any efficiency change in the CFD solution and NOT in the matrix solver.

There are reasons why you might not observe efficiency changes. I will try to explain.

First, the matrix solver is sensitive to the direct solve and the number of equations. Why is it so? It is related to the main reason why multigrid algorithms work in the first place.
A simple rule of thumb: the larger the system handled by the direct solver, the faster the convergence the multigrid will show. (It might sometimes work against it, but it rarely does in a properly implemented multigrid code.)

To show this, here is an example. The mesh is 60 x 60 x 60 and the equation is a Poisson equation. We will use smoothed aggregation multigrid (of Vanek et al.). (I haven't used my additive correction multigrid code for a long time, so I will not waste time searching for it.)

Here is how the multigrid levels are generated in this case.

ncells = 216000

Size [ 0 ] = 216000
Size [ 1 ] = 25939
Size [ 2 ] = 611
Size [ 3 ] = 10
Size [ 4 ] = 1

Max AMG levels = 4


For this problem initial residual is 1000.

Here is how convergence went for this:

Res start = 1000
[1 ] Res = 1652.59 ratio 0.605112
[2 ] Res = 633.144 ratio 1.57942
[3 ] Res = 254.718 ratio 3.9259
[4 ] Res = 110.734 ratio 9.03068
[5 ] Res = 51.5134 ratio 19.4124
[6 ] Res = 25.3378 ratio 39.4668
[7 ] Res = 13.0345 ratio 76.7195
[8 ] Res = 6.96472 ratio 143.581
[9 ] Res = 3.84651 ratio 259.976
[10 ] Res = 2.18456 ratio 457.759
[11 ] Res = 1.26968 ratio 787.598
[12 ] Res = 0.751132 ratio 1331.32
[13 ] Res = 0.449974 ratio 2222.35
[14 ] Res = 0.271864 ratio 3678.31
[15 ] Res = 0.165201 ratio 6053.23
[16 ] Res = 0.100779 ratio 9922.68
[17 ] Res = 0.0616454 ratio 16221.8
[18 ] Res = 0.03778 ratio 26469
[19 ] Res = 0.0231871 ratio 43127.5
[20 ] Res = 0.0142468 ratio 70191.2
[21 ] Res = 0.00876183 ratio 114131
[22 ] Res = 0.00539288 ratio 185430
[23 ] Res = 0.00332168 ratio 301052
[24 ] Res = 0.00204727 ratio 488455
[25 ] Res = 0.0012626 ratio 792017
[26 ] Res = 0.00077915 ratio 1.28345e+006


It took 26 iterations to drop the error by a factor of 1.28345e+006.

Now let's fix level 2 as the direct solve, that is, the level with 611 cells will be solved directly. This is how convergence went in this case:

Res start = 1000
[1 ] Res = 1625.02 ratio 0.615378
[2 ] Res = 512.776 ratio 1.95017
[3 ] Res = 167.994 ratio 5.95259
[4 ] Res = 57.2169 ratio 17.4774
[5 ] Res = 20.0184 ratio 49.9541
[6 ] Res = 7.11943 ratio 140.461
[7 ] Res = 2.56062 ratio 390.53
[8 ] Res = 0.928676 ratio 1076.8
[9 ] Res = 0.338887 ratio 2950.83
[10 ] Res = 0.124259 ratio 8047.74
[11 ] Res = 0.0457355 ratio 21864.9
[12 ] Res = 0.0168867 ratio 59218.1
[13 ] Res = 0.00625179 ratio 159954
[14 ] Res = 0.00232032 ratio 430975
[15 ] Res = 0.00086324 ratio 1.15843e+006


You see, by increasing the direct solve size I could do the same thing in 15 iterations.

So there are two things:
(a) The direct solve takes time.
(b) By increasing the direct solve size you can speed up convergence.

A good choice of direct solver size is one where the time saved by faster convergence is more than the time spent in the direct solve. So sometimes the two effects can cancel each other out.


This is the reason you might not have noticed the efficiency change. If you really want to observe the change, then try setting the direct solve size to something very large, say 100000 or so.
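
(For the OpenFOAM users following along: in the GAMG dictionary of fvSolution, the size of the coarsest agglomeration level - the "1000 vs. 50 cells" setting mentioned above - is controlled by nCellsInCoarsestLevel. A minimal fragment, with the remaining GAMG entries omitted:)
Code:

p
{
    solver                  GAMG;
    // ... other GAMG settings ...
    nCellsInCoarsestLevel   1000;   // target size of the coarsest level
}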

arjun February 8, 2011 00:20

Quote:

Originally Posted by FelixL (Post 293969)
Another thing: have you tried a different solver for p than GAMG? I recently made the experience that PCG can be a lot faster than GAMG for certain cases. Which may sound a bit weird for all the benefits multigrid methods have, but it's just my experience. Give it a shot, if you haven't already!


Greetings,
Felix.

I think your observations are not out of line. They are pretty much correct. For some cases, when the matrix sizes are small enough, CG-based solvers CAN be faster than some multigrid solvers.

The main issue is that multigrid is a single word BUT it represents a whole world of matrix solvers. Some multigrids have issues and that's why a lot of research is going on in this area. But some of the modern multigrid solvers are really very impressive.

A good read would be this:

http://neumann.math.tufts.edu/~scott/research/aSA2.pdf

just to see which direction we are heading.

makaveli_lcf February 8, 2011 01:32

maddalena

one reason your residuals are not going very low can be that your solution is of a transient nature.

BTW, what is the reason for setting different non-orthogonal corrections (limited 0.5 and limited 0.33) for k and epsilon?

From my point of view, I would try to converge everything with first order, and then switch to the "second"-order linearUpwind.
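
(In fvSchemes that would mean something along these lines; the exact linearUpwind argument depends on the OpenFOAM version, so treat this as a sketch.)
Code:

divSchemes
{
    default             none;
    // first order, to get the solution going:
    div(phi,U)          Gauss upwind;
    div(phi,k)          Gauss upwind;
    div(phi,epsilon)    Gauss upwind;

    // later, switch e.g. the momentum convection to "second" order:
    // div(phi,U)       Gauss linearUpwind grad(U);
}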

maddalena February 8, 2011 02:38

Good morning!
Quote:

Originally Posted by makaveli_lcf (Post 294100)
one reason your residuals are not going very low can be that your solution is of a transient nature.

I do not think that the solution is transient. There are lots of vortices forming in the geometry, due to the complex flow path, but all in all I believe that the case is steady state. Last night the simulation ran without problems until time 1700; this morning I had an acceptable velocity and pressure field, although the pressure residual (the highest one) did not go under 0.02. At least I have a solution now! :)
Quote:

Originally Posted by makaveli_lcf (Post 294100)
BTW, what is the reason for setting different non-orthogonal corrections (limited 0.5 and limited 0.33) for k and epsilon? From my point of view, I would try to converge everything with first order, and then switch to the "second"-order linearUpwind.

This is something that was suggested to me some time ago, and since it seemed to be working, I never changed it back.
My task for today is to try to reduce the pressure residual, and the first step can be to use first order everywhere on the laplacian, and maybe to try leastSquares on grad(U), as suggested by Travis.
Thanks for your support, I will keep you informed!

mad

makaveli_lcf February 8, 2011 03:44

maddalena

did you try potentialFoam on your setup? Does it converge to the required accuracy?

maddalena February 8, 2011 03:54

Quote:

Originally Posted by makaveli_lcf (Post 294118)
did you try potentialFoam on your setup? Does it converge to the required accuracy?

No, I did not. The simulation I am running uses a slightly modified version of simpleFoam, for which potentialFoam gives no results. However, all the observations we made yesterday apply without variation, because the modified version of simpleFoam is used only in the first steps of the simulation itself.

mad

makaveli_lcf February 8, 2011 04:05

I mentioned potentialFoam because it gives some hints about the non-orthogonal corrections you need and provides a good first approximation to start from.

vkrastev February 8, 2011 05:00

Quote:

Originally Posted by arjun (Post 294092)
There is some confusion. I think you mean to say that you did not observe any efficiency change in the CFD solution and NOT in the matrix solver.

[...full explanation and multigrid example quoted from the post above...]

Really interesting explanation! However, when I say that passing from 1000 to 50 cells in the coarsest level (which, following your post, means lowering the size of the direct solution procedure) I didn't see any significant change in efficiency, it is because of two factors: the first is the time required to reach a given convergence criterion, which by itself is not sufficient to separate the two competing effects introduced above (reduction of iterations vs. additional time required for the direct solution); the second seems much less ambiguous, as the number of GAMG iterations reported by the code to reach the convergence criterion remains the same in both cases. So, if the time required is the same and the number of iterations doesn't change, maybe we can guess that the size and complexity of my cases make them quite insensitive to such a change (from 1000 to 50 or vice versa). Please correct me if I'm missing something else.

Best Regards

V.

FelixL February 8, 2011 05:22

Quote:

Originally Posted by arjun (Post 294093)
I think your observations are not out of line. They are pretty much correct. For some cases, when the matrix sizes are small enough, CG-based solvers CAN be faster than some multigrid solvers.

The main issue is that multigrid is a single word BUT it represents a whole world of matrix solvers. Some multigrids have issues and that's why a lot of research is going on in this area. But some of the modern multigrid solvers are really very impressive.

A good read would be this:

http://neumann.math.tufts.edu/~scott/research/aSA2.pdf

just to see which direction we are heading.


Hello, Arjun,


thanks for the text recommendation, I will have a look into it.

Yeah, I know MG methods are a complex topic and there are many different directions evolving at the moment. I was working with the DLR TAU code (a code of the German Aerospace Center used both in research and industry) and the multigrid method used there was really, really helpful for saving resources.

I'm pretty sure GAMG won't remain the only MG option in OF, so I'm very much looking forward to the upcoming updates.


Greetings,
Felix.

makaveli_lcf February 8, 2011 09:19

2 Attachment(s)
maddalena

hope I found the reason for your poor pressure residuals.
Look at the residual level for my pressure equation test with

1. laplacian corrected
2. laplacian limited 0.5

Attachment 6407
Attachment 6408
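
(The two variants correspond to laplacianSchemes entries like the following in fvSchemes; the snGradSchemes entry is normally changed consistently. This is just the schematic form, not my full dictionary.)
Code:

laplacianSchemes
{
    default         Gauss linear corrected;     // 1. fully corrected
    // default      Gauss linear limited 0.5;   // 2. limited non-orthogonal correction
}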

vkrastev February 8, 2011 09:30

Quote:

Originally Posted by makaveli_lcf (Post 294215)
maddalena

hope I found the reason for your poor pressure residuals.
Look at the residual level for my pressure equation test with

1. laplacian corrected
2. laplacian limited 0.5

Attachment 6407
Attachment 6408

This is really, really interesting...

maddalena February 8, 2011 10:00

Quote:

Originally Posted by makaveli_lcf (Post 294215)
maddalena

hope I found the reason for your poor pressure residuals.
Look at the residual level for my pressure equation test with

1. laplacian corrected
2. laplacian limited 0.5

Attachment 6407
Attachment 6408

:eek: I hope I can get there one day...

FelixL February 8, 2011 11:53

2 Attachment(s)
Hello, all,


I can reproduce this behaviour using one of my aerodynamic cases with different laplacian schemes and otherwise identical settings (see the attachments).

It has to be noted that the aerodynamic coefficients differ by at most 0.1%, so this incomplete convergence of the pressure - though not really good-looking - perhaps has only a minor influence on the result (at least for my simple 2D case). But I was able to get the residual of p below 1e-3 for the limited laplacian case, so maybe this is already accurate enough, I can't tell. A deeper investigation would be interesting.


Greetings,
Felix.

makaveli_lcf February 9, 2011 02:01

maddalena

I got the same behavior of the pressure residuals when changing gradSchemes from the linear (or leastSquares) gradient scheme to its limited version (cell/face(MD)Limited).

Another observation of mine was that the (non-limited) leastSquares scheme for gradients gave a smoother and more physical solution, while the one from the linear scheme was distorted by the skewed cells in the unstructured part of the grid.

So if you are using limited versions of the pressure laplacian and gradient discretization, the order of 10^-2 - 10^-3 might be normal for your pressure residuals. On the one hand, by introducing limiting you obtain a more physically correct solution due to its boundedness. On the other hand, it can result in convergence issues.

You can read more about accuracy/convergence in the so-called "Gamma paper": http://powerlab.fsb.hr/ped/kturbo/Op...GammaPaper.pdf

PS. By the way, Maddalena, thank you for raising this topic, it helped me to discover some important points for myself)))
PS1. I hope you understood that I suggested trying the non-limited scheme versions to achieve the desired convergence criterion.

maddalena February 9, 2011 03:19

Thank you!
 
From my point of view, this is one of the most interesting threads about schemes and convergence on the OF forum! All the suggestions have been thoroughly demonstrated. Indeed, they have not been "limited" to a mere "do this because I know it works": people showed that what they say is true for specific reasons, with specific test cases.
Of course, the thread is open for similar contributions in the future, hoping that the level of discussion remains the same.

Thank you, FOAMers!

maddalena

