CFD Online Discussion Forums (https://www.cfd-online.com/Forums/)
OpenFOAM Programming & Development (https://www.cfd-online.com/Forums/openfoam-programming-development/)
Suggested unsteady, implicit solver stable with arbitrarily large time steps (https://www.cfd-online.com/Forums/openfoam-programming-development/72599-suggested-unsteady-implicit-solver-stable-arbitrarily-large-time-steps.html)

alberto February 22, 2010 02:47

I think we are automatically subscribed to all the threads where we post a comment, so everybody should be here :-D

P.S. Hi Niklas. I just noticed you got the super powers. Still using twoPhaseEuler?

djbungee February 22, 2010 02:51

ok, I see the link
 
... yes, OK, I have now seen the link to the new location in the 'bug forum' Niklas created. Sorry for this off-topic post :o

charlie February 22, 2010 04:32

Dear all,

This thread remains very interesting reading!

Just a quick remark, admittedly somewhat off topic, but underlining some of our motivation in this matter.

Quote:

Originally Posted by alberto (Post 246633)
Well, for time-resolved simulations (as LES should be, to have it done right), the current PISO implementation in OpenFOAM is OK. The CFL condition can often be less restrictive than the characteristic length scale of your flow.

For RANS, I agree with you, ...

CFL ≈ 1 is indeed a commonly cited time resolution requirement for LES on a given grid, representing an approximate balance of the spatial and temporal filter sizes. I'd like to mention a valuable class of simulations involving LES where a CFL > 1 capability is important, namely those where RANS and LES are mixed in the solution domain (e.g. DES, SAS...). In such simulations, which generally push computational resources to their limits, the highest CFL numbers can occur in regions where we are not interested in LES resolution of the turbulent scales. Examples include RANS boundary layers, which can have extremely high aspect ratio cells, or when the application of structured grids leads to "unnecessarily fine" resolution in some unimportant flow regions.
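For reference, the Courant number is Co = |U|*dt/dx, so requiring Co ≈ 1 ties the admissible time step to the local cell size and velocity; in high-aspect-ratio boundary-layer cells, or in regions that a structured grid refines unnecessarily, this can force dt far below what the LES-resolved regions of interest actually need.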

Best regards,

Charlie.

niklas February 22, 2010 04:55

Quote:

Originally Posted by alberto (Post 246826)
P.S. Hi Niklas. I just noticed you got the super powers. Still using twoPhaseEuler?

nope.

In fact, the only time I was involved in that piece of code was when I was helping a new PhD student: I added the framework for the kinetic theory so he had something to start from. Unfortunately he quit for a much better paid job, and I decided to release the rather unfinished code to the public, hoping that someone would find it useful, since I knew I wouldn't have time to work with it.

...and the reason I moved the topic is that I see it as a feature enhancement and code development.

djbungee February 22, 2010 05:14

Detached-Eddy simulation
 
Dear Charlie, dear all,

that is indeed important. It should now be obvious that we are both aiming at calibrating the C_DES constants of our hybrid RANS/LES implementations in OpenFOAM. We use different background models for the Detached-Eddy Simulations we perform, and we would like to reproduce with OpenFOAM what we have achieved and published with ELAN in FLOMANIA and DESider.

For this procedure, where we compute the decay of isotropic turbulence in LES mode, my suggestion is now that we do not use any relaxation in OpenFOAM. I remember this also worked in ELAN - Charlie, am I right? Did we not find that relaxation was not required for this case?

I am not sure about the more complicated cases we are aiming at, where the grid configurations mentioned above will be applied and where the computations have already been running for a long time. There we might have to under-relax our equations. A remedy would be to slightly increase the relaxation factor for momentum only, during the next days, in the running computations and see whether we reach URF_U=1 without divergence.

Apart from that, we should keep looking into this issue and try to find out what we have to do to avoid this dependence on relaxation in general. One idea I had was to relax the U equation only for computing the intermediate velocities, without manipulating the coefficients of the momentum equation that are used for calculating the pressure correction. An alternative is to explicitly relax the intermediate velocities before the pressure correction is computed; in this case the matrix coefficients are not manipulated. However, this did not help, so we have to check the details of what we did and how the relaxation factors are applied to the matrix coefficients for momentum.
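To illustrate the second idea (explicitly relaxing the intermediate velocities while leaving the matrix untouched), a minimal sketch could look like the following; the factor alphaU and the use of storePrevIter()/prevIter() are only for illustration and are not taken from our actual code:

Code:

        // keep the velocity from the previous outer iteration
        U.storePrevIter();

        // momentum predictor assembled and solved WITHOUT implicit relaxation
        fvVectorMatrix UEqn
        (
            fvm::ddt(U)
          + fvm::div(phi, U)
          + sgsModel->divDevBeff(U)
        );

        solve(UEqn == fvc::reconstruct(-fvc::snGrad(p)*mesh.magSf()));

        // explicit blending of the intermediate velocity before the pressure
        // correction: U* = alphaU*U_new + (1 - alphaU)*U_old
        const scalar alphaU = 0.7;   // illustrative relaxation factor
        U = alphaU*U + (1.0 - alphaU)*U.prevIter();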

The results for the decay should be dominated by modeled and numerical diffusion. Thus, with all settings equal, the relaxation factors should NOT have any influence.

Best regards, Ulf.

djbungee February 23, 2010 18:45

Solution to avoid dependence on relaxation
 
1 Attachment(s)
Dear all,

my initial idea not to apply relaxation to the coefficients of the momentum equation used for computing the pressure correction was correct, but it was not implemented correctly at first.

Below please find a suggested code adjustment to remove the dependence of the results on relaxation for the procedure described above. The basic idea is to use tmp<fvVectorMatrix> MomPredEqn for computing the intermediate velocities with an arbitrary relaxation, while tmp<fvVectorMatrix> UEqn only assembles the coefficients that are used for the pressure correction. The momentum equation is not solved a second time. To avoid relaxation, a line with UEqn().relax(1); is added before the coefficients for the pressure equation are assembled.

The attached picture demonstrates that the original dependence on relaxation ('ORIG'), shown using relaxation factors of 1 and 0.5 for all equations, is now completely gone ('NORELAX_ON_PRESSCORR').

Suggestions, comments and additional tests are welcome! This should also be applied, e.g., in pimpleFoam!

Best regards, Ulf.

Code:

.
.
.
    // outer loop over the time step; pFinalize flags the final iteration
    while
    (
        (maxResidual > convergenceCriterion)
     || (pFinalize == 1)
    )
    {
        eqnResidual = 1;
        MaxResidualPrev = maxResidual;
        maxResidual = 0;
        IIcount++;

        // momentum predictor: assembled and solved with relaxation
        // (no relaxation in the final loop, pFinalize == 1)
        tmp<fvVectorMatrix> MomPredEqn
        (
            fvm::ddt(U)
          + fvm::div(phi, U)
          + sgsModel->divDevBeff(U)
        );

        if (pFinalize != 1)
        {
            MomPredEqn().relax();
        }

        if (pFinalize == 1)
        {
            MomPredEqn().relax(1);
        }

        eqnResidual = solve
        (
            MomPredEqn() == fvc::reconstruct(-fvc::snGrad(p)*mesh.magSf())
        ).initialResidual();

        maxResidual = max(eqnResidual, maxResidual);

        // re-assemble the momentum coefficients WITHOUT relaxation; only
        // these (A and H) are used for the pressure correction
        tmp<fvVectorMatrix> UEqn
        (
            fvm::ddt(U)
          + fvm::div(phi, U)
          + sgsModel->divDevBeff(U)
        );
        UEqn().relax(1);

        p.boundaryField().updateCoeffs();

        volScalarField rUA = 1.0/UEqn().A();
        U = rUA*UEqn().H();
        UEqn.clear();

        phi = (fvc::interpolate(U) & mesh.Sf())
            + fvc::ddtPhiCorr(rUA, U, phi);

        adjustPhi(phi, U, p);

        p.storePrevIter();
        scalar StoppPressCorr = 1;
        int CountPressCorr = 0;
        scalar PrevPressEqnResidual = 1.E20;
        while ((StoppPressCorr > convergenceCriterion) && (CountPressCorr < 10))
.
.
.


alberto February 23, 2010 20:18

Hello all,

Quote:

The momentum equation is not solved a second time. To avoid relaxation, a line with UEqn().relax(1); is added before the coefficients for the pressure equation are assembled.
The momentum predictor is not solved a second time, but aren't you throwing away the solution obtained from the momentum predictor with these lines?

Code:

volScalarField rUA = 1.0/UEqn().A();
U = rUA*UEqn().H();
UEqn.clear();

Wouldn't this be equivalent to simply removing the first part or, in other words, to not solving the momentum predictor and avoiding relaxation?

The relaxation is applied to the matrix as shown here (see comments in the code):

http://foam.sourceforge.net/doc/Doxy...ce.html#l00483
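For reference, implicit under-relaxation of a discretised equation a_P*psi_P = sum_N a_N*psi_N + b with a factor alpha essentially turns it into

a_P/alpha * psi_P = sum_N a_N*psi_N + b + ((1 - alpha)/alpha) * a_P * psi_P_prev

(leaving aside the diagonal-dominance correction the routine also applies), so the diagonal coefficient that later enters the pressure equation through 1/A is effectively divided by alpha.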

Best,

alberto February 24, 2010 01:23

Hello,

I quickly ran some tests on a cavity case, using what Ferziger and Peric did as an example.

I used pimpleFoam with the same settings for the numerical schemes as in the pimpleFoam tutorial, but changed the time integration to backward.

Results here: http://dl.dropbox.com/u/659842/OpenF...Relaxation.pdf

I added plots of the difference between the solutions along y. As you can see, the difference is small and then peaks at a specific point for all the solutions. The pressure difference peaks at the centre of the cavity, while the velocity difference peaks around the minimum value of the pressure.

I want to run the same cases again with stricter convergence requirements to be sure of the results. Of course any comment is welcome :D

Best,

djbungee February 24, 2010 03:36

Dear Alberto, dear all,

let me give some details.

Quote:

Originally Posted by alberto (Post 247128)
Hello all,
The momentum predictor is not solved a second time, but aren't you throwing away the solution obtained from the momentum predictor with these lines?

No, I am not - or better: yes, but it is not needed anymore. It is the same as in pimpleFoam. Having solved for U, its solution is part of UEqn().H(), so it is not actually thrown away; you need it for the pressure correction. In the first step I solve the momentum predictor (the momentum equation for the velocity with a guessed/previously obtained pressure field) using an arbitrary relaxation. However, since the relaxation is implemented implicitly, the matrix coefficients are modified, and using those coefficients for the pressure correction would mean using momentum coefficients _with_ relaxation. Therefore, I assemble the UEqn coefficients a second time via tmp<fvVectorMatrix> UEqn ... without _solving_ the equation, without relaxation (and without the pressure gradient on the right-hand side). These coefficients, together with the previously found solution for the U field, are then used in the expressions you have shown and that I cite here again (cf. below).

Quote:

Originally Posted by alberto (Post 247128)
Code:

volScalarField rUA = 1.0/UEqn().A();
U = rUA*UEqn().H();
UEqn.clear();

Wouldn't this be equivalent to simply removing the first part, or, in other words, avoid solving the momentum predictor and avoiding relaxation?

Given what I explained above, I hope it has become clear that it is not the same! You could check with your test case, and I hope the difference would vanish. Of course, it would be the same if you used no relaxation for the U equation (i.e. the momentum predictor); in that case you could use your solution and coefficients directly for the pressure correction.
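To make the difference concrete, in the usual SIMPLE/PISO notation (a sketch, not the exact OpenFOAM source): the pressure equation is assembled from the momentum coefficients as

div( (1/a_P) grad p ) = div( H(U)/a_P )

which in the code corresponds to rUA = 1.0/UEqn().A() and U = rUA*UEqn().H(). If the relaxed matrix were used instead, a_P would be replaced by a_P/alpha, so rUA, H(U)/a_P and with them the corrected fluxes would all inherit a dependence on the relaxation factor alpha. Re-assembling the coefficients without relaxation removes exactly this dependence.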

I know the difference might be small and depend on the time step size. But as explained earlier, also by Charlie, our aim is to use an arbitrarily large time step chosen from 'physical' considerations while avoiding as many errors as possible.

Completely dispensing with relaxation for the U Equation is a solution, but in many of the unsteady cases we run, this would never work. We have to relax the momentum predictor.

Best regards, Ulf.

anac February 24, 2010 11:17

Dear all,

I had to cope with some of the issues described in this thread during my PhD, when I developed an in-house code with a coupled, implicit solver. Maybe my point of view (although I am by no means an expert on OpenFOAM) can be helpful in some way.

As already mentioned, the Rhie and Chow formulation (a technique also known as momentum interpolation), in its original form, produces converged solutions which depend on the linear relaxation factor and on the (pseudo or real) time step. Although the accuracy of the results is not significantly affected, the technique can even fail in its main objective of avoiding pressure wiggles on collocated grids. I believe the pressure equation used in OF can be seen as a momentum interpolation (I mean, fluxes are calculated from the Rhie and Chow procedure and introduced into the continuity equation). Furthermore, a remedy for the case of small time steps is provided through the "ddtPhiCorr" function (please correct me if I am wrong). However, no correction for linear relaxation factors is available (maybe because of its little practical importance).
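For readers who have not seen it, the original Rhie and Chow face velocity reads, roughly,

U_f = interp(U) + interp(D)*( interp(grad p) - (grad p)_f ), with D = V/a_P,

i.e. the linearly interpolated velocity plus a correction built from the difference between the interpolated cell-centred pressure gradient and the pressure gradient evaluated directly at the face; it is this correction that suppresses pressure wiggles on collocated grids.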

I do not know whether you might be interested in our paper about the momentum interpolation technique, where it is described in some detail, showing the reasons for its inconsistency in unsteady problems. There, we propose a compact formulation covering both unsteady problems and linear relaxation. I could try to implement it in OF and check it. (The paper is: Ana Cubero and Norberto Fueyo, A compact momentum interpolation procedure for unsteady flows and relaxation, Numerical Heat Transfer, Part B, 52(6), 507-529.)

On the other hand, I also believe it is good to include an inner loop below the real time step. However, differently from what is discussed above, I would be more interested in dual time stepping, where you have a pseudo-time iteration below the real-time loop (this is usually done as preconditioning of solvers for compressible flows at low Mach numbers). I do not have a clear reason for that, but I do not have a good feeling about linear relaxation factors... By using the dual-step procedure, we would have two temporal terms, with the false time step limited by convergence and the real time step by the physical temporal scales.
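Just to sketch what I mean by dual time stepping (hypothetical pseudo-code only; dTau, nPseudoMax and the exit test are assumptions, not existing OpenFOAM functionality):

Code:

        // assumed pseudo-time step and maximum number of inner iterations
        dimensionedScalar dTau("dTau", dimTime, 1e-3);
        const label nPseudoMax = 50;

        while (runTime.run())                        // physical (real) time loop
        {
            runTime++;

            for (label m = 0; m < nPseudoMax; m++)   // pseudo-time (inner) loop
            {
                // pseudo-time level before this inner iteration
                volVectorField UTauOld("UTauOld", U);

                fvVectorMatrix UEqn
                (
                    fvm::Sp(1.0/dTau, U) - UTauOld/dTau  // pseudo-time term (U - UTauOld)/dTau
                  + fvm::ddt(U)                          // physical time derivative
                  + fvm::div(phi, U)
                  + sgsModel->divDevBeff(U)
                );

                solve(UEqn == fvc::reconstruct(-fvc::snGrad(p)*mesh.magSf()));

                // ... pressure correction as usual ...

                // leave the inner loop once (U - UTauOld) is small enough: the
                // false time step is limited only by convergence, the real time
                // step only by the physics
            }
        }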

Thank you for reading this far,
Ana

alberto February 24, 2010 13:32

@djbungee: OK. Thanks.
Your case (turbulence decay) is probably much more sensitive to this problem than what can be observed in other applications, I agree. It would be interesting, at least for me, to see a measure of the difference in the solution and compare it with the error caused by other assumptions, to get a feeling for the relative order of magnitude.

@ana: Thanks for the reference. If you plan to code your work in OpenFOAM, feel free to poke me. I also agree with you about the two-level time stepping.

Best,

djbungee February 25, 2010 03:48

Dear Ana, dear Alberto,

yes, dual time stepping is an alternative and a more flexible and general approach, but it will not help us further here. The problem of the dependence on relaxation caused by using 'relaxed' versus 'non-relaxed' coefficients has been solved, and the coupling of the equations within a time step works. So the 'local goal' of this small thread has been reached.

What might help to overcome the weaknesses of our segregated approach in solving our equations is a block-matrix solver.

http://powerlab.fsb.hr/ped/kturbo/Op...ringer2009.pdf

I expect that with it we would be able to achieve time-dependent results of equal quality with larger time steps and higher stability.

@Alberto,
the measure of the error for the decay introduced, e.g., by the convection scheme is visibly much larger. As I mentioned earlier, the results for the decay should be dominated by modeled and numerical diffusion. With these computations we want to calibrate our modeled diffusion, among other things a constant C_DES for a Detached-Eddy Simulation, for a fixed numerical scheme and procedure. This is actually only valid for the applied spatial resolution; in other words, it is not a constant value. Therefore, we compute many different spatial resolutions. This way we get a good guess....

So for us it was important to avoid this error as there are enough other errors introduced that are not so easily corrected.

Best regards, Ulf.

alberto February 25, 2010 12:23

Quote:

Originally Posted by djbungee (Post 247319)
So for us it was important to avoid this error as there are enough other errors introduced that are not so easily corrected.

Thanks again for the explanation. I did not mean to say the error was not important in your case; sorry if it came across like that :)

Here is an additional reference I found recently, in which many improvements to the standard Rhie-Chow interpolation are suggested.

S. Zhang, X. Zhao, General formulations for the Rhie-Chow interpolation, ASME Heat Transfer/Fluid Engineering Summer Conference, 2004.

They develop a formulation of the Rhie-Chow interpolation that is independent of under-relaxation and time stepping, and they give guidelines for dealing with discontinuities, multiphase flows and porous zones.

Best,

braennstroem February 25, 2010 15:01

Hello Ulf,

I have some trouble understanding your lines (esp. the pFinalize part) from yesterday. How do you set pFinalize, and shouldn't it be vice versa, i.e.
if (pFinalize == 1)
{
    MomPredEqn().relax();
}

if (pFinalize != 1)
{
    MomPredEqn().relax(1);
}

Or did I misunderstand your explanation?

Gruesse!
Fabian

djbungee February 26, 2010 02:37

Hi Fabian,

the implementation is meant to work like this: in the loop before the _final_ loop, when convergence is 'detected', pFinalize is set to 1. After that, the final loop starts, since the entry condition of the while loop allows the inner loop on the time step to be entered either for pFinalize==1 or for 'no convergence'.

Therefore, when pFinalize==1, relaxation should be applied neither to the momentum predictor nor to the coefficients for the pressure correction (the latter being the case in every loop, of course).

Actually, this might not be necessary anymore now that relaxation has been removed from the assembly of the coefficients for the pressure equation.
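For illustration only, since my snippet does not show it, the flag could for instance be set at the end of the loop body like this (hypothetical code, not necessarily the exact lines I use):

Code:

        // hypothetical only - not part of the posted snippet
        if ((maxResidual < convergenceCriterion) && (pFinalize != 1))
        {
            pFinalize = 1;   // convergence detected: run one more, final loop
        }
        else if (pFinalize == 1)
        {
            pFinalize = 0;   // final loop done: leave the outer while loop
        }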

But in case you do want to avoid relaxation in the final loop, you have to say (IF I UNDERSTAND THE CALL CORRECTLY ....)

Code:

        if (pFinalize != 1)
        {
            MomPredEqn().relax();
        }

        if (pFinalize == 1)
        {
            MomPredEqn().relax(1);
        }

- at least in the way I have defined pFinalize (which, admittedly, is not shown in my code snippet), and in the way I understand .relax(1): I assumed (hopefully correctly) that the argument '1' of .relax(1) is the relaxation factor, and not a switch meaning '1' = 'do relax with the given parameter' while no argument means 'do not relax' .....

Hope that this answers your question. If not, we should have a telephone call on the weekend - I prefer to write my programs Saturday night, when all our children are in bed :) - have you got time?

Have a great day ahead ...

Grüße, Ulf.

anac February 26, 2010 05:23

Dear Ulf,

I am sorry, but I still think that solutions will depend on the (pseudo or real) time step used (and not only because of the different resolution). I mean, if we solve the evolution of a flow towards the steady state, the final, converged solution will depend on the time step used.

If you look at the expression for the (predicted) flux (for an incompressible flow and using the first-order Euler scheme):

phi = interp(U) + ddtPhiCorr = interp(U) + interp(1/A/DeltaT)*(phi.oldTime-U.oldTime), where U = H/A.

At convergence, we have phi=phi.oldTime and U=U.oldTime and then phi = interp(H/A). As the A term does include DeltaT ( A = A_steady + coef/DeltaT, right?), our predicted flux will depend on the time step.

Regarding the block matrix solver, are you using structures provided by OF for building the matrix, or are you creating them yourselves?

Best regards,
Ana

djbungee March 4, 2010 04:57

Dear Ana,

the answer I write down here is based on assumptions only, but let me give you my ideas. I am not too deep into the details of the implementation, but I have worked with and done development work on other codes, so this is how I remember it.

The flux you have mentioned is only used for the pressure correction equation. After convergence the flux in the pressure-correction equation is then used as the 'global flux' (phi -= pEqn.flux()) and the momentum is corrected by the adjusted pressure. At convergence of the _system_ of equations this correction tends to zero.
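For context, the corrector step being referred to looks roughly like this in the standard OpenFOAM solvers (a sketch; pRefCell, pRefValue and the usual corrector bookkeeping are assumed to be set up as in those solvers):

Code:

        fvScalarMatrix pEqn
        (
            fvm::laplacian(rUA, p) == fvc::div(phi)
        );

        pEqn.setReference(pRefCell, pRefValue);
        pEqn.solve();

        // on the final (non-orthogonal) corrector: take the conservative
        // face flux directly from the pressure equation
        phi -= pEqn.flux();

        // correct the cell-centred velocity with the adjusted pressure
        U -= rUA*fvc::grad(p);
        U.correctBoundaryConditions();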

There are many possibilities for defining the coefficients of the pressure-correction equation. As I remember it, there are implementations that neglect the terms with neighbour coefficients in order to make the solution of the pressure equation as easy as possible, e.g. solving explicit expressions instead of an equation system. However, the pressure correction found goes to zero for any of these procedures, and you get conservation of momentum and mass. Therefore, these simplifications do not play an important role, and the same might be the case for the Delta t in the central coefficient A.

I might be wrong about this in general. However, we did not observe any dependence on the time step size beyond the dependence on whether or not you resolve the physics, and what you describe might be contained in those observations. Therefore, your considerations might be rather theoretical, and I have the same question Alberto asked us: what is the relative error, and can it be quantified? Actually, that is also the question we were asked initially when this thread started and the discussion about the dependence on relaxation came up. We want to establish a solver with arbitrarily large, physically justified time steps. We have achieved that at least insofar as we can now use larger time steps and there is no dependence on relaxation factors left! However, when the time steps are very small for physical reasons, e.g. when you do LES and want to resolve all time scales, our corrections to the procedure are not important. In that case, though, I would prefer a completely explicit procedure anyway, to save computation time.

Of course, the observations and examinations we made tell us nothing about the amount of dependence on the time step _apart_ from resolution, as you mentioned.

However, we want to compute additional test cases and describe our observations. Fabian has made a very interesting suggestion to simulate the decay of a Taylor-Green vortex, for which an analytical solution is available, in order to identify the advantages of our implementation.
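For reference, the 2-D Taylor-Green vortex on a 2*pi-periodic box has the analytical solution (unit density, nu the kinematic viscosity; amplitude and sign conventions vary between references):

u =  sin(x) cos(y) exp(-2 nu t)
v = -cos(x) sin(y) exp(-2 nu t)
p = 1/4 (cos(2x) + cos(2y)) exp(-4 nu t)

so the kinetic energy decays as exp(-4 nu t), which gives a clean reference curve for this kind of check.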

Regarding the block matrix solver: we are not involved, just 'observing' what is going on. Therefore I added the link.

Best regards, Ulf.

CedricVH May 11, 2010 10:47

Is there any chance that a fully implicit (PIMPLE?) solver will be available in the next release of OpenFOAM? I have a lot of cases where accuracy is not a priority, but speed is.

In Fluent I can use very large time steps (0.005 s), which give good results for my cases with Lagrangian particle tracking and are super fast. In OpenFOAM, I need time steps < 1e-6 s in order to be stable.

alberto May 11, 2010 15:30

Quote:

Originally Posted by CedricVH (Post 258343)
Is there any chance that a fully implicit (PIMPLE?) solver will be available in the next release of OpenFOAM? I have a lot of cases where accuracy is not a priority, but speed is.

I probably do not understand the question, but solvers based on PIMPLE are already present and they allow relatively large time steps to be used.

Quote:

In Fluent I can use very large time steps (0.005 s), which give good results for my cases with Lagrangian particle tracking and are super fast. In OpenFOAM, I need time steps < 1e-6 s in order to be stable.
What kind of simulations are you doing? Two-way coupled? One-way coupled?

Best,

CedricVH May 12, 2010 02:51

Quote:

Originally Posted by alberto (Post 258388)
I probably do not understand the question, but solvers based on PIMPLE are already present and they allow relatively large time steps to be used.



What kind of simulations are you doing? Two-way coupled? One-way coupled?

Best,

I'm using pimpleFoam with Lagrangian particle tracking at the moment, but even with time steps < 1e-6 it gives me Courant numbers above 10000, as the mesh is very fine (very complex-shaped lower airways). I use one-way coupling.

A sensitivity analysis in Fluent showed that a time step of 0.005 s is more than accurate enough for my purposes, but pimpleFoam does not seem to manage. That's why an implicit solver would be a nice addition.

