
How much of the continuity error is acceptable for DNS?

#1 | May 21, 2013, 21:21 | ripperjack (Jack), Member
Hi all,

I am simulating DNS of channel flow based on buoyantBoussinesqPimpleFoam. I found that if I set the p tolerance to 1e-6 in system/fvSolution, the calculation is very slow; the problem is that the number of iterations for p is very large (more than 200!), as can be seen below. My time step is 0.001.
I want to speed up the simulation. One possible way is to relax the p tolerance to 1e-5, but if I do this, how can I ensure continuity? I am not sure what "time step continuity errors" means; I checked the source code and it seems that this error also includes the time-step value, and there are sum local, global, and cumulative errors. Which one is more important? Anyway, my question is: how much continuity error is acceptable for a DNS simulation? Many thanks, guys!
Code:
Courant Number mean: 0.288069 max: 0.61355
DILUPBiCG:  Solving for Ux, Initial residual = 0.00630291, Final residual = 1.07118e-07, No Iterations 3
DILUPBiCG:  Solving for Uy, Initial residual = 0.0432376, Final residual = 6.85184e-07, No Iterations 3
DILUPBiCG:  Solving for Uz, Initial residual = 0.0399243, Final residual = 6.44705e-07, No Iterations 3
DILUPBiCG:  Solving for rho, Initial residual = 0.0135317, Final residual = 2.42912e-07, No Iterations 3
DICPCG:  Solving for p_rgh, Initial residual = 0.791976, Final residual = 4.79033e-05, No Iterations 79
time step continuity errors : sum local = 2.55569e-08, global = 5.12662e-20, cumulative = 1.1254e-19
DICPCG:  Solving for p_rgh, Initial residual = 0.0616171, Final residual = 9.11894e-07, No Iterations 219
time step continuity errors : sum local = 3.40954e-10, global = 5.92299e-20, cumulative = 1.7177e-19
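
For reference, that "time step continuity errors" line is printed by continuityErrs.H, which the solver includes after each pressure solve. A simplified sketch of the incompressible form, reconstructed from memory of the 2.x sources (so exact names may differ slightly):
Code:
{
    // Local mass imbalance per cell after the pressure correction
    volScalarField contErr(fvc::div(phi));

    // Volume-weighted average of |div(phi)|, scaled by the time step:
    // magnitudes cannot cancel between cells
    scalar sumLocalContErr = runTime.deltaTValue()*
        mag(contErr)().weightedAverage(mesh.V()).value();

    // Same average but signed, so positive and negative cells can cancel
    scalar globalContErr = runTime.deltaTValue()*
        contErr.weightedAverage(mesh.V()).value();

    // Running total of the signed error over all time steps
    cumulativeContErr += globalContErr;

    Info<< "time step continuity errors : sum local = " << sumLocalContErr
        << ", global = " << globalContErr
        << ", cumulative = " << cumulativeContErr << endl;
}
In other words, "sum local" measures the absolute imbalance and cannot benefit from cancellation, while "global" and "cumulative" are signed sums, which is why they sit near machine precision in the log above.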

#2 | May 22, 2013, 04:03 | haakon, Senior Member
I really do not think loosening the tolerances is a good idea. I have made some tests (for the cases I am working on), and generally I need at least 1e-7 for pressure and 1e-6 for the other quantities. If I loosen the pressure tolerance to 1e-6, I get results that differ on the order of 10%.

I think you just have to face that the pressure equation is a tough nut to crack. It is by far the most time-consuming part of the solution process, but on the other hand, having a good and correct pressure field is an absolute necessity.

What I recommend is to use the GAMG solver for the pressure instead of the PCG solver, unless you are doing large, parallel computations on a cluster with more than, say, ~100 processes (in that case PCG is good). For normal workstations and parallel computations with 1-4 compute nodes, GAMG is a lot faster.

I have found the following settings to suit my needs pretty well:
Code:
    p
    {
        solver           GAMG;
        preconditioner   FDIC;

        tolerance        1e-07;
        relTol           0.05;

        smoother         symGaussSeidel;
        nPreSweeps       1;
        nPostSweeps      2;

        cacheAgglomeration true;

        nCellsInCoarsestLevel 50;
        agglomerator     faceAreaPair;
        mergeLevels      2;
    }
    
    pFinal
    {
        solver           GAMG;
        preconditioner   FDIC;

        tolerance        1e-07;
        relTol           0;

        smoother         symGaussSeidel;
        nPreSweeps       1;
        nPostSweeps      2;

        cacheAgglomeration true;

        nCellsInCoarsestLevel 50;
        agglomerator     faceAreaPair;
        mergeLevels      2;
    }
but your computer, setup and simulations might of course require a completely different setup.
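
For the large-cluster case mentioned above, where PCG scales better than GAMG, a matching PCG entry in the same fvSolution style might look like the sketch below; the DIC preconditioner and the relTol value are illustrative choices, not settings taken from this thread:
Code:
    p
    {
        solver           PCG;
        preconditioner   DIC;    // p matrix is symmetric; DILU is the asymmetric counterpart
        tolerance        1e-07;
        relTol           0.05;
    }

    pFinal
    {
        solver           PCG;
        preconditioner   DIC;
        tolerance        1e-07;
        relTol           0;      // converge fully on the final PIMPLE corrector
    }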

#3 | May 22, 2013, 10:53 | ripperjack (Jack), Member
Quote:
Originally Posted by haakon View Post
What I recommend is to use the GAMG solver for the pressure instead of the PCG solver [...] I have found the following settings to suit my needs pretty well [...]
Hi Haakon,

Excellent! I have tried the GAMG solver; I changed the smoother option to GaussSeidel because there is an error message if I use symGaussSeidel (my OF version is 2.1.1). Anyway, it works very well! The number of iterations has dropped to about 50, much faster! Thanks for your recommendation!
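
For anyone else on 2.1.x, the change amounts to swapping the smoother entry in the p and pFinal blocks quoted above, roughly:
Code:
    p
    {
        solver           GAMG;
        smoother         GaussSeidel;    // symGaussSeidel is rejected by OF 2.1.1
        // remaining entries as in the blocks posted by haakon above
    }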

Best regards,
Ping

Old   May 22, 2013, 12:33
Default
  #4
Senior Member
 
Join Date: Dec 2011
Posts: 111
Rep Power: 19
haakon will become famous soon enough
Glad I could help you. By the way, I do not think you should care much about the number of iterations needed to solve the equations at each time step. The PCG algorithm is completely different from GAMG, and you cannot in any way compare the number of PCG iterations with the number of GAMG iterations. The same goes for the computational demand: if you reduce the number of iterations from 200 with PCG to 20 with GAMG, that does not mean you are using fewer CPU cycles to solve the problem, because one GAMG cycle is more comprehensive and brings you closer to the "true" answer than one PCG cycle.

#5 | September 10, 2013, 11:24 | sh.d, Member
Hi,
I don't know about some of the variables in fvSolution. What are nPreSweeps, nPostSweeps and nFinestSweeps? Can you explain how these variables are defined?

#6 | September 10, 2013, 13:11 | eysteinn (Eysteinn Helgason), Member, Gothenburg, Sweden
Quote:
Originally Posted by sh.d View Post
I don't know about some of the variables in fvSolution. What are nPreSweeps, nPostSweeps and nFinestSweeps? Can you explain how these variables are defined?
Hi,
I think this is what you are looking for.

The User Guide, Sections 4.5.1.3 and 4.5.1.4:

"The user must also specify the number of sweeps, by the nSweeps keyword, before the residual is recalculated, following the tolerance parameters."

"The number of sweeps used by the smoother at different levels of mesh density
are specified by the nPreSweeps, nPostSweeps and nFinestSweeps keywords.
The nPreSweeps entry is used as the algorithm is coarsening the mesh,
nPostSweeps is used as the algorithm is refining, and nFinestSweeps is used
when the solution is at its finest level."
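
As a concrete illustration of where those keywords sit, a GAMG entry spelling all three out might look like the sketch below; the sweep counts are example values only, not recommendations from this thread:
Code:
    p
    {
        solver                GAMG;
        smoother              GaussSeidel;
        nPreSweeps            1;    // sweeps while the algorithm coarsens the mesh
        nPostSweeps           2;    // sweeps while it refines (prolongs) back up
        nFinestSweeps         2;    // extra sweeps on the finest (original) mesh
        cacheAgglomeration    true;
        nCellsInCoarsestLevel 50;
        agglomerator          faceAreaPair;
        mergeLevels           1;
        tolerance             1e-07;
        relTol                0.05;
    }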

/Eysteinn

#7 | October 19, 2015, 13:54 | mgg, New Member
Hallo Ping,

Did you solve this problem with the high number of PCG iterations when running DNS? I am facing it too. I use the buoyantPimpleFoam solver (similar to yours) to run DNS of a mixed-convection problem. I want to use more than 1000 cores to get an ideal speedup, so I went with PCG, which has better scalability than GAMG. But the PCG solver needs more than 2000 iterations to reach the tolerance of 1e-7, whereas GAMG only needs two or three. Do you have a clue about this? Thank you very much.



Quote:
Originally Posted by ripperjack View Post
I am simulating DNS of channel flow based on buoyantBoussinesqPimpleFoam. [...] the calculation is very slow; the problem is that the number of iterations for p is very large (more than 200!) [...] how much continuity error is acceptable for a DNS simulation?

#8 | October 30, 2019, 22:26 | calf.Z (Jianrui Zeng), Senior Member, China
Quote:
Originally Posted by mgg View Post
Did you solve this problem with the high number of PCG iterations when running DNS? [...] the PCG solver needs more than 2000 iterations to reach the tolerance of 1e-7, whereas GAMG only needs two or three. [...]
Have you solved your problem? Did you compare the performance of PCG and GAMG for DNS? PCG has better scalability when running on many cores in parallel, but is there any criterion or rule of thumb for the number of cores beyond which PCG should be used instead of GAMG? Thank you very much.

#9 | November 17, 2019, 03:31 | calf.Z (Jianrui Zeng), Senior Member, China
I am now doing a DNS of channel flow and encountering similar problems. If I set the p_rgh tolerance to 1e-6 and use GAMG, it needs more than 1000 iterations to converge, so within the maximum of 1000 iterations it cannot meet the tolerance. If I set the p_rgh tolerance to 3e-6, it only needs 18 iterations.

I am confused by this. How should the p_rgh tolerance be set in DNS simulations to meet the accuracy requirement? Thank you.
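
If the hard cap of 1000 iterations is the immediate obstacle, the linear-solver controls also accept a maxIter entry; a sketch of such a p_rgh block, where the 2000 and the relTol value are purely illustrative numbers rather than advice from this thread:
Code:
    p_rgh
    {
        solver                GAMG;
        smoother              GaussSeidel;
        tolerance             1e-06;
        relTol                0.01;    // illustrative: lets intermediate correctors stop early
        maxIter               2000;    // illustrative: raises the iteration cap (1000 by default)
        cacheAgglomeration    true;
        nCellsInCoarsestLevel 50;
        agglomerator          faceAreaPair;
        mergeLevels           1;
    }
Whether a looser tolerance is acceptable for DNS accuracy is the open question of this thread, so any such change would need checking against a run with the tighter settings.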
