CFD Online Discussion Forums (https://www.cfd-online.com/Forums/)
-   OpenFOAM Programming & Development (https://www.cfd-online.com/Forums/openfoam-programming-development/)
-   -   phi -= pEqn.flux() vs. linearInterpolate(U) & mesh.Sf() (https://www.cfd-online.com/Forums/openfoam-programming-development/83638-phi-peqn-flux-vs-linearinterpolate-u-mesh-sf.html)

EmadTandis May 26, 2013 11:42

Thanks,
your points are very helpful, especially "U is not divergence free". But there is one thing I cannot understand about the OpenFOAM operators:
are these relations correct?
a)
fvc::laplacian(rUA,p) = fvc::div(rUA*fvc::grad(p)) ??
b)
does pEqn.flux() return fvc::interpolate(rUA*fvc::grad(p)) & mesh.Sf() (based on Jasak's thesis)?

From my experience in OpenFOAM, a) is not true, but from CFD I know it should be true!
About b): my experience with OpenFOAM tells me this is not true, but from Eq. 3.144 in Jasak's thesis it should be.

psosnows May 26, 2013 12:50

Regarding point a),
take a look into the Programmer's Guide.
If you expand fvc::laplacian, you get something like this:
fvc::laplacian(A,p) =
fvc::surfaceSum
(
fvc::interpolate(A) * fvc::snGrad(p)
);

But consider that in the code we have fvm, which is implicit and returns an fvMatrix object. There is also a difference between fvc::grad() and fvc::snGrad(): the latter returns a surface field, while the former is in fact an average of snGrad taken at the cell centres (well, depending on the scheme).
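(To make the type difference concrete, here is a minimal sketch, assuming a standard solver context with fvCFD.H included and a volScalarField p; the field names are only illustrative.)

Code:

// snGrad lives on the faces: one scalar per face, the gradient
// component along the face normal
surfaceScalarField snGradP(fvc::snGrad(p));

// grad lives at the cell centres: a full gradient vector per cell,
// assembled from face values (e.g. via the Gauss theorem)
volVectorField gradP(fvc::grad(p));

// interpolating the cell-centre gradient back to the faces is, in
// general, NOT the same as snGrad(p)
surfaceVectorField gradPf(fvc::interpolate(gradP));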

Regarding b)
I think this should be true, but I will not put my head on the line right now. I remember checking it in the past but cannot find my notes at the moment. I would strongly advise you to dig into the code and check the structure yourself. If you run into too much trouble, give me a call ;)

Best,
Pawel

EmadTandis May 27, 2013 03:07

Actually,
fvc::laplacian(A,p) = fvc::surfaceIntegrate(fvc::interpolate(A)*fvc::snGrad(p)*mag(mesh.Sf()))
but what I mean is that, from the mathematical definition of the Laplacian,
laplacian(A,p) = nabla . (A*nabla(p))
Therefore, in OpenFOAM these two expressions should give the same result (up to some numerical error):

fvc::laplacian(A,p)
fvc::div(A*fvc::grad(p))

but their results, based on my experience in OF, are quite different. Why?

psosnows May 27, 2013 03:30

Hello,

yup, I missed the Sf() - for some reason I thought it was hidden in snGrad().

Regarding the semantics of the laplacian: always be careful about which kind of field you are dealing with and what the input to the functions is. The direct mathematical form makes sense and gives a result that is just another way of computing the operator. At the same time, it performs a few more surface-to-volume operations and thus smears the result. The corrected definition you gave is a shorter (and more precise) version of the discrete representation of the Laplacian. What it does is calculate the surface values of the operators inside the divergence, and then use those surface fields (and the Gauss theorem) to calculate the divergence at the cell centre.

In the case of the direct mathematical representation:
first you calculate the gradient at the face centres, *
then you average it to get the values at the cell centres, *
then you interpolate the gradient again onto the face centres, **
then you interpolate A to the faces,
and finally you calculate the divergence at the cell centres.

In the case of the OF representation, you drop the (*) steps and, instead of (**), you calculate the gradient on the faces directly.
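(If you want to see this for yourself, a minimal check inside a solver could look like the sketch below, assuming A is a volScalarField such as rUA; the field names are only illustrative.)

Code:

// compact face-based form: snGrad is evaluated directly on the faces
volScalarField lapCompact("lapCompact", fvc::laplacian(A, p));

// "mathematical" chain: cell-centre gradient, multiplied by A,
// then a Gauss divergence back at the cell centres
volScalarField lapChained("lapChained", fvc::div(A*fvc::grad(p)));

// the difference is pure discretisation (smearing) error and should
// shrink as the mesh is refined
Info<< "max |laplacian - div(grad)| = "
    << max(mag(lapCompact - lapChained)).value() << endl;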

Hope it clarified the issue a bit.
Best,
Paweł

EmadTandis May 27, 2013 04:10

You are quite right!
Based on these points, this is numerical error.

Regarding pEqn.flux(): it returns fvc::interpolate(rUA)*fvc::snGrad(p)*mag(mesh.Sf())
NOT fvc::interpolate(rUA)*fvc::interpolate(fvc::grad(p)) & mesh.Sf()
as I said before. This difference follows from your points.
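(For anyone who wants to verify this numerically: assuming the pressure equation was assembled as fvm::laplacian(rUA, p), a check like the sketch below can be dropped in right after pEqn.solve(); on a non-orthogonal mesh a small difference remains, coming from the non-orthogonal correction that pEqn.flux() also carries.)

Code:

// flux returned by the matrix itself
surfaceScalarField fluxFromMatrix("fluxFromMatrix", pEqn.flux());

// orthogonal part reconstructed by hand from the discretised Laplacian
surfaceScalarField fluxByHand
(
    "fluxByHand",
    fvc::interpolate(rUA)*fvc::snGrad(p)*mesh.magSf()
);

Info<< "max |pEqn.flux() - by-hand flux| = "
    << max(mag(fluxFromMatrix - fluxByHand)).value() << endl;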
Thanks Very Much

thomasArk47 May 30, 2013 18:41

Hi,

OK, there have been many interesting questions and answers. I will just add some complements, because the background of the different people in this thread is quite different, and maybe it is useful to recall the cornerstone of the problem and its basis.

OF uses a collocated scheme for solving the NS equations, meaning velocity and pressure are both stored at the cell centres. Since the incompressible NS system is not a classical evolution equation (because of the constraint div(U) = 0, which in turn means there is no evolution equation for the pressure), this collocated approach is numerically unstable in the finite volume framework. It would take long to explain why, but the interested reader should search and study the following keywords:
collocated scheme versus staggered scheme
pressure checkerboard instability
Rhie-Chow stabilisation
inf-sup condition for mixed problems in the finite element context
All the core reasons why one cannot use a plain interpolation scheme (whatever its precise type, linear or other) for the phi field come from these notions. Not so difficult, but without any background in numerical analysis it will be hard to understand in depth.

Good luck.

vkrastev October 18, 2013 12:43

Quote:

Originally Posted by psosnows (Post 430072)
Hello,


Well, in fact there is. Check out the 5th slide (or 3rd from the end).

I think some time ago there was some explanation regarding this topic, but let's do it from the top.

A nice explanation of the way N-S is solved in OpenFOAM is presented in the OF-wiki about SIMPLE algo:
http://openfoamwiki.net/index.php/Th...hm_in_OpenFOAM
PISO algo is very similar:
http://openfoamwiki.net/index.php/Th...hm_in_OpenFOAM
Of course check also sub-topics.

Now for some notes. As we all know, OF uses an Eulerian approach with non-staggered (collocated) meshes. This can introduce significant numerical errors (especially with 2nd-order central difference schemes). For that reason, in OF we "do not" solve directly for the velocity (U), but for the fluxes (phi).
If you check the divergence of U, you will almost always find a non-negligible error. At the same time, phi is guaranteed to be divergence free.

Now let's take a look at the PISO algorithm itself. I will not cite the code, but it should be easy to follow.
1) Create the UEqn using the last known phi. Note that any "XEqn" is like a black box with a void space for the unknown; the important stuff is the coefficients.
2) If you like, solve the momentum predictor (this is not strictly necessary and can be dropped to save time).
3) Extract the semi-central (diagonal) coefficients from the UEqn, invert them and call the result rAU. This is the famous "operator splitting".
4) Pressure loop:
4.1) recalculate the velocity: U = rAU * H();
4.2) recalculate the fluxes: phi = interpolate(U) & S; note that this flux field is NOT divergence free;
4.3) solve for pressure using the non-divergence-free flux;
4.4) solve it several times...
5) Now we have solved for pressure with a non-div-free phi. But the literature (Jasak's thesis, Issa et al.) tells us that we can correct the fluxes using the pressure field and this way ensure the div-free condition. This is the famous phi -= pEqn.flux();
6) Finally, we correct the velocity field, acquiring a good approximation of the velocity.

Note that the solution consists of the three fields: U, p and phi. And the important one is in fact phi, not U.

Hope I managed to clear things up a bit.

Best,
Pawel

Hi Pawel,

after reading your post and most of the previous ones I think that the pressure-velocity coupling in OpenFOAM is now getting much clearer to me, so thank you all (also Alberto, Santiago and the others) for the interesting explanations.
I have just one little doubt left about the momentum predictor stage: you said above that it is, in principle, unnecessary for the successful application of the correction procedure; however, both in H. Jasak's and E. de Villiers' theses I found passages claiming that the predicted velocities are used to update the H(U) vector before calculating the "pseudo-velocities" U = rAU*H(U) and interpolating them for the evaluation of the non-divergence-free face fluxes (points 4.1 and 4.2 of your post). These statements are apparently supported by the fact that in OpenFOAM H(U) appears to be calculated right after the momentum predictor is solved (see the pisoFoam.C excerpt below).

Code:


 if (momentumPredictor)
            {
                solve(UEqn == -fvc::grad(p));
            }

            // --- PISO loop

            for (int corr=0; corr<nCorr; corr++)
            {
                volScalarField rAU(1.0/UEqn.A());

                volVectorField HbyA("HbyA", U);
                HbyA = rAU*UEqn.H();
                surfaceScalarField phiHbyA
                (
                    "phiHbyA",
                    (fvc::interpolate(HbyA) & mesh.Sf())
                  + fvc::ddtPhiCorr(rAU, U, phi)
                );
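                // --- [not part of the quoted excerpt] a rough sketch of how the
                // corrector continues in 2.x-era pisoFoam.C; exact lines differ
                // between OpenFOAM versions, and pRefCell, pRefValue, nNonOrthCorr
                // come from the solver's setup headers

                adjustPhi(phiHbyA, U, p);

                // non-orthogonal pressure corrector loop
                for (int nonOrth=0; nonOrth<=nNonOrthCorr; nonOrth++)
                {
                    fvScalarMatrix pEqn
                    (
                        fvm::laplacian(rAU, p) == fvc::div(phiHbyA)
                    );

                    pEqn.setReference(pRefCell, pRefValue);
                    pEqn.solve();

                    if (nonOrth == nNonOrthCorr)
                    {
                        // the correction discussed in this thread: phiHbyA is not
                        // divergence free, and pEqn.flux() removes exactly the
                        // part that violates continuity
                        phi = phiHbyA - pEqn.flux();
                    }
                }

                U = HbyA - rAU*fvc::grad(p);
                U.correctBoundaryConditions();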

Additionally, the predictor stage is indeed optional in the unsteady pressure-based OF solvers (PISO/PIMPLE-like), while it is not in the steady-state ones (SIMPLE-like).
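(For reference, the switch lives in the PISO/PIMPLE sub-dictionary of system/fvSolution; the entries below are just a typical illustrative setup, not taken from any specific tutorial:)

Code:

PIMPLE
{
    momentumPredictor           yes;  // set to no to skip the predictor stage
    nOuterCorrectors            1;    // >1 gives PIMPLE outer loops (UEqn reassembled each time)
    nCorrectors                 2;    // pressure correctors per outer loop
    nNonOrthogonalCorrectors    0;
}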
So, my questions are: 1) how and where is the velocity field obtained from the momentum predictor actually used? Is it, for instance, only used to update the neighbour velocities inside H(U), whereas the face fluxes phi, which are "hidden" in the convective discretisation coefficients, are left "frozen" until the pressure equation is solved? 2) Depending on the answer to question 1), is the influence of the predictor stage on the correction procedure really negligible (or, alternatively, am I missing something in my considerations)?
Can you please comment on this? Of course, comments from other sources are also appreciated.

Thanks in advance

V.

santiagomarquezd October 18, 2013 16:29

Hi Vesselin, each time you do H(U) the last calculated U is used. In the first PISO loop it is the velocity obtained in the momentum predictor; if the predictor is not performed, it is the velocity of the previous time step. From the second corrector onwards the last reconstructed velocity is used. In all cases the coefficients of H() are frozen.
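(In equation form this is the operator splitting mentioned above, in the notation of Jasak's thesis: the discretised momentum equation for a cell P reads

$$ a_P \mathbf{U}_P = \mathbf{H}(\mathbf{U}) - \nabla p, \qquad \mathbf{H}(\mathbf{U}) = \mathbf{r} - \sum_N a_N \mathbf{U}_N , $$

so U_P = rAU*H(U) - rAU*grad(p) with rAU = 1/a_P. Updating the neighbour velocities U_N changes H(U), while the coefficients a_P and a_N stay frozen until the UEqn matrix is reassembled.)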

If you use the pimple solvers you can do outer loops, which will renew the H() operator, since a new UEqn is assembled in each outer loop.

Regards.

vkrastev October 19, 2013 05:07

Hi Santiago,
and thanks a lot for your fast reply. However, just for better comprehension, I would like to further address some points of your response:

Quote:

Originally Posted by santiagomarquezd (Post 457734)
each time you do H(U) the last calculated U is used.

Ok, but is it the same for phi (which is included inside the convection discretisation coefficients that contribute to building H)? If so, in the first PISO loop the phi values (and thus the discretisation coefficients) should be the ones created by "createPhi.H" or coming from the previous time step/correction loop, and should remain the same from the momentum predictor (if present) to the pressure equation solution, after which they are corrected by enforcing mass conservation. Thus, the second PISO loop will start with H(U) updated with the last corrected cell-centre velocity field (explicit correction after the pressure equation solution), but also with the last (at this stage, conservative) face fluxes phi. And so on for the next (eventual) PISO sequences. Is that right?


Quote:

Originally Posted by santiagomarquezd (Post 457734)
In the first PISO loop is the velocity obtained in the momentum predictor, in case if it is not performed is the velocity of the previous time-step.

Ok, so I guess it is implicitly assumed that the influence of the initial velocity estimate on the algorithm convergence within a single time step is small: in my opinion this is quite reasonable at run time, when values from the previous time step should be well converged, but it is probably a rather "raw" assumption at the very beginning of the run, when the initial conditions are usually far from accurate. I think this could also be the reason why in the steady-state algorithm (SIMPLE solvers) the momentum predictor is always there (values from previous "steps" are generally not good guesses unless you are close to the global convergence to a steady state).

Thanks once again

V.

santiagomarquezd October 19, 2013 07:57

Quote:

Originally Posted by vkrastev (Post 457772)
Ok, but is it the same for phi (which is included inside the convection discretisation coefficients that contribute to building H)? If so, in the first PISO loop the phi values (and thus the discretisation coefficients) should be the ones created by "createPhi.H" or coming from the previous time step/correction loop, and should remain the same from the momentum predictor (if present) to the pressure equation solution, after which they are corrected by enforcing mass conservation. Thus, the second PISO loop will start with H(U) updated with the last corrected cell-centre velocity field (explicit correction after the pressure equation solution), but also with the last (at this stage, conservative) face fluxes phi. And so on for the next (eventual) PISO sequences. Is that right?

Yes, that's right. As Issa explains in his paper, he gives more importance to the p-V coupling (the PISO loop) than to the nonlinearity [fvm::div(phi,U)], so he freezes H(U) for the whole time step and focuses on correcting p and phi (and U). As I said, if you want to update phi within the time step you can use the pimple family of solvers.

Quote:

Originally Posted by vkrastev (Post 457772)
Ok, so I guess it is implicitly assumed that the influence of the initial velocity estimate on the algorithm convergence within a single time step is small: in my opinion this is quite reasonable at run time, when values from the previous time step should be well converged, but it is probably a rather "raw" assumption at the very beginning of the run, when the initial conditions are usually far from accurate. I think this could also be the reason why in the steady-state algorithm (SIMPLE solvers) the momentum predictor is always there (values from previous "steps" are generally not good guesses unless you are close to the global convergence to a steady state).

The hypothesis is that U^n and U^(n+1) are not very different, so that the linearisation fvm::div(phi,U) remains valid. In this case it is usual to apply the condition Co < 1, with the assumption that a small dt implies minor changes in U; strictly the condition should be dU/dt ~ 0, but the Co trick works. In practice, when you are near a stationary state you can enlarge the time step, which shows that the condition on dU/dt is the correct one.

In the case of steady solvers like simpleFoam, the trick to avoid raw approximations at the beginning is to lower the relaxation factors.
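(For reference, these live in the relaxationFactors sub-dictionary of system/fvSolution; the values below are only typical illustrative choices along the lines of the simpleFoam tutorials, not recommendations for a specific case:)

Code:

relaxationFactors
{
    fields
    {
        p           0.3;    // under-relax the pressure correction
    }
    equations
    {
        U           0.7;    // under-relax the momentum equation
        k           0.7;
        epsilon     0.7;
    }
}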

Regards.

vkrastev October 19, 2013 12:25

Quote:

Originally Posted by santiagomarquezd (Post 457798)
Yes, that's right. As Issa explains in his paper, he gives more importance to p-V coupling (PISO loop) than nonlinearity [fvm::div(phi,U)], so that he freezes H(U) in the whole time-step and focuses in correcting p and phi (U). As I said if you want to update phi you can use the pimple family of solvers.

After re-reading your last and previous posts, and also Issa's paper and Jasak's thesis, I think I should clarify this a bit. My previous consideration is NOT right, since H(U) is updated between every two consecutive PISO correctors (so k-1 times along the whole time step, if k is the number of PISO correctors), but ONLY in terms of the cell-centred neighbour velocities U_N, and NOT in terms of the phi contribution inside the discretisation coefficients (a_N, but also the owner coefficient a_P, which contributes to the construction of the HbyA vector).

So, to have a completely renewed H(U), one has to move to the next time step (or outer iteration, if a PIMPLE solver is used), at the beginning of which the UEqn is ALWAYS assembled (but solved as a momentum predictor only if the option is on).

Ok, I think that now things are all in place. Sorry if I made this longer than strictly necessary, and thanks for your patience (and of course please correct me if you find any other inconsistencies).

Regards

V.

galap April 9, 2014 08:28

Hello,

it is hard for me to follow your interesting discussion, so I wanted to take a practical approach to the phi variable. What I thought is that right after calling

phi -= pEqn.flux();

mass conservation should hold. I tried to check this for a simpleFoam and a rhoPimpleFoam tutorial case. Since phi is the face flux (m3/s for simpleFoam, kg/s for rhoPimpleFoam), the sum of phi over all faces of one cell should be zero.

I took the orientation of the face normal vectors into account:

if (mesh.faceOwner()[mesh.cells()[cell_label][face_label]])

then the normal vector is pointing into the cell. Otherwise I multiply phi by -1, since phi is pointing towards the neighbouring cell (the same in the case of a boundary face).

I never see a sum of 0, but rather a sum of about 10-20% of the sum of the absolute values.

So how do I calculate the correct mass flow in and out of one arbitrary cell?
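(For what it's worth, the sign bookkeeping can be avoided entirely: fvc::div already applies the owner/neighbour convention of phi internally, so a per-cell continuity check can be as short as the sketch below, assuming the usual solver context where phi is the face flux field.)

Code:

// net outflow per cell, divided by the cell volume; the owner/neighbour
// sign convention of phi is handled inside fvc::div / surfaceIntegrate
volScalarField contErr("contErr", fvc::div(phi));

// largest continuity error over the mesh (per unit volume)
Info<< "max |div(phi)| = " << max(mag(contErr)).value() << endl;

// for the net flow of an individual cell, multiply by its volume,
// e.g. contErr[cellI]*mesh.V()[cellI]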

galap June 12, 2014 01:50

I finally solved it; the face orientation was the problem.

Maybe this helps someone in the future:

Code:

// The fields below assume the usual solver context (mesh, runTime, rho, U, phi).

// rho*U at the cell centres - only used to decide the flow direction at each face
volVectorField phi_V
(
    IOobject
    (
        "phi_V",
        runTime.timeName(),
        mesh,
        IOobject::NO_READ,
        IOobject::AUTO_WRITE
    ),
    rho*U
);

// rho*U interpolated to the faces
surfaceVectorField phi_V_interpolated
(
    IOobject
    (
        "phi_V_interpolated",
        runTime.timeName(),
        mesh,
        IOobject::NO_READ,
        IOobject::AUTO_WRITE
    ),
    linearInterpolate(phi_V)
);


forAll(U, cell_label)
{
    double flux_sum_neg = 0;    // mass flow out of the cell
    double flux_sum_pos = 0;    // mass flow into the cell

    forAll(mesh.cells()[cell_label], face_label)
    {
        const label faceI = mesh.cells()[cell_label][face_label];
        double sign = 0;
        double sign_phi = 0;

        if (!mesh.isInternalFace(faceI))
        {
            // boundary face: Sf always points out of the domain,
            // and the cell is always the owner
            sign = 1;
            label patch_index = mesh.boundaryMesh().whichPatch(faceI);
            label face_index = faceI - mesh.boundaryMesh()[patch_index].start();
            vector normal = sign*mesh.Sf().boundaryField()[patch_index][face_index]
                           /mesh.magSf().boundaryField()[patch_index][face_index];

            // inflow if rho*U points against the outward normal
            sign_phi = ((phi_V_interpolated.boundaryField()[patch_index][face_index] & normal) < 0) ? 1 : -1;
            if (sign_phi < 0) flux_sum_neg += std::abs(phi.boundaryField()[patch_index][face_index]);
            if (sign_phi > 0) flux_sum_pos += std::abs(phi.boundaryField()[patch_index][face_index]);
        }
        else
        {
            // internal face: Sf points from owner to neighbour,
            // so flip it if this cell is the neighbour
            sign = (mesh.faceNeighbour()[faceI] == cell_label) ? -1 : 1;
            vector normal = sign*mesh.Sf()[faceI]/mesh.magSf()[faceI];

            sign_phi = ((phi_V_interpolated[faceI] & normal) < 0) ? 1 : -1;
            if (sign_phi < 0) flux_sum_neg += std::abs(phi[faceI]);
            if (sign_phi > 0) flux_sum_pos += std::abs(phi[faceI]);
        }
    }

    // relative imbalance between inflow and outflow, in %
    double error = (flux_sum_pos - flux_sum_neg)/flux_sum_pos*100;

    if (std::abs(error) > 1)
    {
        // ... (handling of unbalanced cells omitted)
    }
    else
    {
        Info << "mass flow into " << cell_label << " = " << flux_sum_pos << " kg/s" << endl;
    }
}


Honey October 15, 2019 10:24

Quote:

Originally Posted by amirhoshangm (Post 352308)
Hi

I am simulating flow in a channel with variable porosity, so I need to change something in the correction of the flux; that means I have to multiply the pressure by the porosity. The flux is updated by

pEqn.flux()


Could anyone let me know how I can use porosity*pressure instead of pressure in the correction of the flux?

Hi,


I was wondering if you have solved your problem and, if possible, could share it here! I am also trying to modify porousSimpleFoam in order to multiply the flux by the porosity, and would highly appreciate some help in this regard.


Regards,

Honey

