CFD Online Discussion Forums (https://www.cfd-online.com/Forums/)
-   OpenFOAM Programming & Development (https://www.cfd-online.com/Forums/openfoam-programming-development/)
-   -   phi -= pEqn.flux() vs. linearInterpolate(U) & mesh.Sf() (https://www.cfd-online.com/Forums/openfoam-programming-development/83638-phi-peqn-flux-vs-linearinterpolate-u-mesh-sf.html)

santiagomarquezd January 5, 2011 21:57

phi -= pEqn.flux() vs. linearInterpolate(U) & mesh.Sf()
 
Hi all FOAMers, I'm wondering why the final phi at the end of each timestep in icoFoam is calculated as:

a)
Code:

phi -= pEqn.flux();
instead of doing:

b)
Code:

phi = linearInterpolate(U) & mesh.Sf();
particularly bearing in mind that at this point U has already been corrected by PISO.

I've checked the values from both methods and they are slightly different, maybe by 10-30%. Interestingly, in a continued run phi for the next timestep is calculated as in a), but if you stop the run and then restart, phi for the next timestep is calculated as in b).


Shouldn't they be equal? If not, which one is correct, and why? And why is one used for continued runs and the other for restarts?

Thanks in advance.

stevenvanharen January 6, 2011 05:08

Hi Santiago,

please keep in mind that -= means subtraction.

So in a) phi is not completely recalculated; pEqn.flux() is subtracted from it.

For a complete understanding of the procedure for solving the Navier-Stokes equations I suggest you read pages 143 through 146 of Jasak's thesis:

http://powerlab.fsb.hr/ped/kturbo/Op...jeJasakPhD.pdf

Cheers,

Steven

Cyp January 6, 2011 05:33

I was wondering about the same question. In my own experience the method with linearInterpolate(U) & mesh.Sf() can yield very erroneous results...

Regards,
Cyp

santiagomarquezd January 6, 2011 10:58

Quote:

Originally Posted by stevenvanharen (Post 289447)
Hi Santiago,

please keep in mind that -= means subtraction.

So in a) phi is not completely recalculated; pEqn.flux() is subtracted from it.

For a complete understanding of the procedure for solving the Navier-Stokes equations I suggest you read pages 143 through 146 of Jasak's thesis:

http://powerlab.fsb.hr/ped/kturbo/Op...jeJasakPhD.pdf

Cheers,

Steven

Thanks for the answers. Yes, I know that part; the question goes in Cyp's direction, because in both cases you're trying to assemble a corrected flux. Moreover, as I pointed out, in a continued run method a) is used, but on a restart method b) is used instead. The question is whether the two methods are different, why, and why each method is used for restarting/continuing. As Cyp mentioned, using method b) in continued runs leads to wrong results.

My advisor is implementing icoFoam in MATLAB and assembles phi at the beginning of each timestep with method b); things aren't going well that way.

Regards.

stevenvanharen January 6, 2011 11:18

Hi Santiago,

OK, but why do you think method a) is used in a continued run and method b) in a restart?

If I look at icoFoam I don't see any difference between restarting or not.

At line 72:

Code:

phi = (fvc::interpolate(U) & mesh.Sf())
                + fvc::ddtPhiCorr(rUA, U, phi);

and at line 89:

Code:

phi -= pEqn.flux();
Steven

santiagomarquezd January 6, 2011 11:41

Steven, thanks for your interest in the topic. When you restart or start from /0, UEqn is assembled using the phi calculated by createFields.H (line 43 of icoFoam.C). This is calculated using method b). This phi surfaceScalarField affects the values of the H operator, the U predictor, etc. used afterwards. In the case of a continued run, UEqn is assembled using the phi calculated at the end of the previous timestep with method a).

That's the question.

Regards.

stevenvanharen January 6, 2011 15:08

Hi Santiago,

OK, at start method b) is indeed used in createFields.H. This is just to get the simulation started: the non-linearity of the momentum equation is lagged using phi from the previous timestep, but since phi is usually not available at the start, it is created in createFields.H.

But if phi is available (as is usual in a restart, unless you delete phi), it is read from the time folder. See line 46 of createFields.H:

Code:

IOobject::READ_IF_PRESENT
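For reference, the phi construction there follows the standard createPhi.H pattern, roughly like this (a sketch; exact line numbers and formatting differ between OpenFOAM versions):

Code:

surfaceScalarField phi
(
    IOobject
    (
        "phi",
        runTime.timeName(),
        mesh,
        IOobject::READ_IF_PRESENT,   // read phi from the time folder if it exists...
        IOobject::AUTO_WRITE
    ),
    linearInterpolate(U) & mesh.Sf() // ...otherwise build it with method b)
);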
Now during a simulation both methods are used, first:

Code:

phi = (fvc::interpolate(U) & mesh.Sf())
              + fvc::ddtPhiCorr(rUA, U, phi);

and then:

Code:

phi -= pEqn.flux();
So this is what your advisor should do as well.

This set of equations is not easy to understand, especially if you compare it with Issa's original paper describing the PISO method. Reading those four pages in Jasak's thesis helped me a lot, so I really urge you to do the same.

Steven

santiagomarquezd January 6, 2011 16:41

Aha, it's true, phi is read if it is present; I had forgotten that point. And, as you said, if you delete this file an approximated phi is assembled for the restart.

Regarding the use of both methods while running, that's true too, but the phi stored for the next timestep is the one calculated with method a). This method resembles Eqn. 144 from Jasak's thesis, which comes from dotting Eqn. 140 with Sf. Eqn. 140 is the same as Eqn. 139, but interpolated at the faces.

Checking the notation, it is worth noting that Eqn. 140 is written as:

a) U_f=[H(U)/a_P]_f-(1/a_P)_f (\nabla p)_f

(i.e. each factor in the last term is interpolated separately)

and not

b) U_f=[H(U)/a_P]_f-[(1/a_P) (\nabla p)]_f

which is what one would obtain by applying the interpolation to each term of the RHS of Eqn. 129. Checking the code in laplacianScheme.C we have:

Code:

return fvmLaplacian(tinterpGammaScheme_().interpolate(gamma)(), vf);
so the code is implemented as in Jasak's thesis (option a). Assembling phi as

phi = U_f & Sf

leads to

phi = ([H(U)/a_P]_f - [(1/a_P) (\nabla p)]_f) & Sf

which uses b) and not a). This explains the differences we've found, but it's not clear to me or my advisor why a) is correct and b) is not.

Regards.

stevenvanharen January 7, 2011 07:28

Quote:

Originally Posted by santiagomarquezd (Post 289531)
the phi stored for the next timestep is the one calculated with method a).

I don't see this; what is actually saved is:

Code:

phi = (fvc::interpolate(U) & mesh.Sf())
                + fvc::ddtPhiCorr(rUA, U, phi)-pEqn.flux();

So I think it is still a combination of methods a) and b).

Concerning the difference between

[(1/a_P) (\nabla p)]_f and (1/a_P)_f (\nabla p)_f:

the second form, (1/a_P)_f (\nabla p)_f, is the way OpenFOAM discretizes a Laplacian operator (see equation 3.24 in Jasak).

What about [(1/a_P) (\nabla p)]_f? How would you discretize it? You would now need the face value of the whole product, so the program would have to use 3.26 of Jasak instead of 3.25. This method is thus not incorrect, but undesirable. Read section 3.3.1.3 of Jasak.

alberto January 7, 2011 11:23

Quote:

Originally Posted by santiagomarquezd (Post 289421)
Hi all FOAMers, I'm wondering why the final phi at the end of each timestep in icoFoam is calculated as:

a)
Code:

phi -= pEqn.flux();
instead of doing:

b)
Code:

phi = linearInterpolate(U) & mesh.Sf();
particularly bearing in mind that at this point U has already been corrected by PISO.

Because a) enforces the mass conservation principle, while b) does not. Remember that the pressure equation is assembled with the face fluxes, and the flux is the conservative variable, not the velocity.

You use pEqn.flux() to obtain, from the matrix of the pEqn, exactly the term you have to subtract, so that no conservation error is introduced.
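To see this numerically, here is a minimal sketch (assuming it is placed at the end of the PISO loop in icoFoam, after both the flux and the velocity corrections, with the usual solver fields in scope):

Code:

// After "phi -= pEqn.flux()" the face fluxes satisfy the discrete
// continuity equation to solver tolerance; fluxes rebuilt from the
// corrected cell-centred velocity (method b) do not.
surfaceScalarField phiFromU("phiFromU", linearInterpolate(U) & mesh.Sf());

Info<< "max |div(phi)|      = " << max(mag(fvc::div(phi))).value() << nl
    << "max |div(phiFromU)| = " << max(mag(fvc::div(phiFromU))).value() << endl;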

Best,

alberto January 7, 2011 11:35

Quote:

Originally Posted by stevenvanharen (Post 289590)
Concerning the difference between

[(1/a_P) (\nabla p)]_f and (1/a_P)_f (\nabla p)_f:

the second form, (1/a_P)_f (\nabla p)_f, is the way OpenFOAM discretizes a Laplacian operator (see equation 3.24 in Jasak).

What about [(1/a_P) (\nabla p)]_f? How would you discretize it? You would now need the face value of the whole product, so the program would have to use 3.26 of Jasak instead of 3.25. This method is thus not incorrect, but undesirable. Read section 3.3.1.3 of Jasak.

I agree. Additionally, notice that OpenFOAM and many papers in the literature make the assumption

(a*b*c)_f = a_f * b_f * c_f

This is not true in general; it holds only approximately, for smooth fields. However, it often simplifies the development of the solution procedure.

For example, if you take a look at compressible solvers, you find

phi = rho_f*U_f

while from the theory it should be

phi = (rho*U)_f

In practice, in many applications, the approximation is acceptable.
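To make the point concrete, here is a small sketch (assuming rho, U and mesh are in scope, as in a compressible solver) comparing the two ways of building the mass flux:

Code:

// Split interpolation: rho_f * U_f
surfaceScalarField phiSplit
(
    "phiSplit",
    fvc::interpolate(rho)*(fvc::interpolate(U) & mesh.Sf())
);

// Product interpolation: (rho*U)_f
surfaceScalarField phiProduct
(
    "phiProduct",
    fvc::interpolate(rho*U) & mesh.Sf()
);

Info<< "max |phiSplit - phiProduct| = "
    << max(mag(phiSplit - phiProduct)).value() << endl;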

Best,

santiagomarquezd January 7, 2011 12:25

Hi,

Quote:

Originally Posted by stevenvanharen (Post 289590)
I don't see this; what is actually saved is:

Code:

phi = (fvc::interpolate(U) & mesh.Sf())
                + fvc::ddtPhiCorr(rUA, U, phi)-pEqn.flux();

So I think it is still a combination of methods a) and b).

Yes, in general both methods are used, but I'm referring to the final lines of each timestep in icoFoam.C, particularly:

Code:

00088                if (nonOrth == nNonOrthCorr)
00089                {
00090                    phi -= pEqn.flux();
00091                }
00092            }
00093
00094            #include "continuityErrs.H"
00095
00096            U -= rUA*fvc::grad(p);
00097            U.correctBoundaryConditions();
00098        }

Once U at the cell centres is updated, you have U and phi ready for the next timestep; at this point you could instead generate phi using method b). Debugging an example with gdb, you get, for phi calculated by a) (line 90):

phi = {-3.4645991399177079e-06, 3.4646001127746356e-06, -6.3903343759551748e-06, 2.9257355721740963e-06, ...}

and for phi by method b):

$62 = {-3.531581793505539e-06, 4.1013330984044824e-06, -5.9540249950781068e-06, 3.2691627424804398e-06, ...}

The values are slightly different. I think the answer was given by Alberto in #10. My advisor pointed me to the need to take a look at flux conservation.

Regarding:

Quote:

Originally Posted by stevenvanharen (Post 289590)
Concerning the difference between

[(1/a_P) (\nabla p)]_f and (1/a_P)_f (\nabla p)_f:

the second form, (1/a_P)_f (\nabla p)_f, is the way OpenFOAM discretizes a Laplacian operator (see equation 3.24 in Jasak).

What about [(1/a_P) (\nabla p)]_f? How would you discretize it? You would now need the face value of the whole product, so the program would have to use 3.26 of Jasak instead of 3.25. This method is thus not incorrect, but undesirable. Read section 3.3.1.3 of Jasak.

Very good point; nevertheless, this is the "algorithmic" reason, basically to reduce the computational molecule. The theoretical problem is how to pass from the r.h.s. in the first line of 3.24 of Jasak to the second one. My concern was related to Alberto's explanation:

Quote:

Originally Posted by alberto (Post 289622)
Additionally, notice that OpenFOAM and many papers in the literature make the assumption

(a*b*c)_f = a_f * b_f * c_f

This is not true in general; it holds only approximately, for smooth fields. However, it often simplifies the development of the solution procedure.

For example, if you take a look at compressible solvers, you find

phi = rho_f*U_f

while from the theory it should be

phi = (rho*U)_f

In practice, in many applications, the approximation is acceptable.

Alberto, do you have references for those papers, so I can go deeper? And another question: assuming this linearization, what is the underlying difference between a) and b)? In other words, why doesn't b) "enforce the mass conservation principle"? I think these are two different ways of assembling 3.144 of Jasak.

Thanks to you guys, the discussion was really useful.

alberto January 7, 2011 12:44

I have seen the approximation (a*b)_f = a_f*b_f used quite often in Issa's papers (I have in mind a paper on multiphase flows: Oliveira & Issa, 2003, Int. J. Num. Meth. Fluids, but also elsewhere), and in extensions of the Rhie-Chow formula (S. Zhang, X. Zhao, General Formulation for Rhie-Chow Interpolation, ASME Heat Transfer/Fluids Engineering Summer Conference, HT-FED04-56274, Charlotte, North Carolina, July 11-15, 2004).

Keep in mind that this assumption is absolutely arbitrary. If you want to be picky, you should not make that assumption, since it introduces an approximation.

I have to say it is a very convenient approximation however, and I have used it myself in multiphase codes for a simple reason. Just consider the case where you have a phase fraction 0 < alpha < 1, and you define

phi = (alpha*rho*U)_f

In this case, if you use a reconstruction procedure when correcting the velocity, you must divide by alpha_f*rho_f. This forces you to do something (typically not very clean) when alpha is small.

If you define phi as the velocity flux:

phi = (U)_f

and then consider alpha_f*rho_f*(U)_f where you need it, like in the pressure equation, your correction step is clean and does not have any problem when alpha tends to zero. Of course, if your fields are strongly non-uniform (shocks), you need the first approach, but in incompressible or slightly compressible solvers you probably won't see a significant difference.

Finally, b) does not enforce the conservation principle because you do not enforce it exactly on U, but on phi. Your pressure equation is

div(phi) = div(U_f) = 0

not

div(U_c) = 0

where U_c is the cell-centered velocity. For the same reason, in transport equations you use div(phi, T).
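For example, a passive-scalar transport equation is discretized with the conservative face flux, along the lines of scalarTransportFoam (a sketch; T and the diffusivity DT are assumed to be defined in the usual way):

Code:

// Convection uses the conservative face flux phi, not the cell-centred U
fvScalarMatrix TEqn
(
    fvm::ddt(T)
  + fvm::div(phi, T)
  - fvm::laplacian(DT, T)
);

TEqn.solve();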

Best,

santiagomarquezd January 7, 2011 13:44

Excellent!!

amirhoshangm March 30, 2012 06:28

variable porosity
 
Hi

I am simulating flow in a channel with variable porosity, so I need to change something in the flux correction: I have to multiply the pressure by the porosity. The flux is updated by

pEqn.flux()


Could anyone let me know how I can use porosity*pressure instead of pressure in the flux correction?

hz283 April 5, 2013 17:37

Hello,

I also have a question about pEqn.flux(). In the equations in your post, does pEqn.flux() return the value [(1/a_P) (\nabla p)]_f & Sf?

In rhoPimpleFoam there are two regimes for the pressure: transonic and subsonic. In the transonic case the corresponding flux correction is:
phi == pEqn.flux()

while in the subsonic regime it is:
phi += pEqn.flux()

So, compared with the phi -= pEqn.flux() discussed above, the sign is opposite. How can we justify this difference? I also checked other solvers and found that some use "==" while some use "+=". Does anybody know something about this?

Thanks.

Quote:

Originally Posted by santiagomarquezd (Post 289531)
Aha, it's true, phi is read if it is present; I had forgotten that point. And, as you said, if you delete this file an approximated phi is assembled for the restart.

Regarding the use of both methods while running, that's true too, but the phi stored for the next timestep is the one calculated with method a). This method resembles Eqn. 144 from Jasak's thesis, which comes from dotting Eqn. 140 with Sf. Eqn. 140 is the same as Eqn. 139, but interpolated at the faces.

Checking the notation, it is worth noting that Eqn. 140 is written as:

a) U_f=[H(U)/a_P]_f-(1/a_P)_f (\nabla p)_f

(i.e. each factor in the last term is interpolated separately)

and not

b) U_f=[H(U)/a_P]_f-[(1/a_P) (\nabla p)]_f

which is what one would obtain by applying the interpolation to each term of the RHS of Eqn. 129. Checking the code in laplacianScheme.C we have:

Code:

return fvmLaplacian(tinterpGammaScheme_().interpolate(gamma)(), vf);
so the code is implemented as in Jasak's thesis (option a). Assembling phi as

phi = U_f & Sf

leads to

phi = ([H(U)/a_P]_f - [(1/a_P) (\nabla p)]_f) & Sf

which uses b) and not a). This explains the differences we've found, but it's not clear to me or my advisor why a) is correct and b) is not.

Regards.


EmadTandis May 25, 2013 08:36

Hi
I have this question too. Which relation in Jasak's thesis does this function (pEqn.flux()) correspond to?
Thanks

Cyp May 25, 2013 08:59

Hi!

In the following link you can find a presentation I made about the details of icoFoam. I'm sure you will find all your answers there.
http://www.scribd.com/doc/143414962/...on-in-OpenFOAM

The slides are in French, but you can easily follow the mathematical and programming parts ;)

Enjoy!

Cyp

EmadTandis May 26, 2013 04:24

Quote:

Originally Posted by Cyp (Post 429976)
Hi!

In the following link you can find a presentation I made about the details of icoFoam. I'm sure you will find all your answers there.
http://www.scribd.com/doc/143414962/...on-in-OpenFOAM

The slides are in French, but you can easily follow the mathematical and programming parts ;)

Enjoy!

Cyp

Thanks, but there isn't any note about pEqn.flux() there. I know this function returns a surfaceScalarField that must be subtracted from phi, but I don't know why:
phi -= pEqn.flux();

psosnows May 26, 2013 05:05

Hello,

Quote:

Originally Posted by EmadTandis (Post 430068)
But there isn't any note about pEqn.flux() there.

Well, in fact there is. Check out the 5th slide (or the 3rd from the end).

I think there was some explanation of this topic earlier in the thread, but let's do it from the top.

A nice explanation of the way the N-S equations are solved in OpenFOAM is presented in the OF wiki article about the SIMPLE algorithm:
http://openfoamwiki.net/index.php/Th...hm_in_OpenFOAM
The PISO algorithm is very similar:
http://openfoamwiki.net/index.php/Th...hm_in_OpenFOAM
Of course, check the sub-topics as well.

Now for some notes. As we all know, OF uses an Eulerian approach with non-staggered (collocated) meshes. This can introduce significant numerical errors (especially with 2nd-order central difference schemes). For that reason, in OF we "do not" solve directly for the velocity (U) but for the fluxes (phi).
If you check the divergence of U, you will almost always find a non-negligible error. At the same time, phi is guaranteed to be divergence free.

Now let's take a look at the PISO algorithm itself. I will not cite the code, but it should be easy to follow.
1) Create the UEqn using the last known phi. Note that any "XEqn" is like a black box with a void space for the unknown; the important things are the coefficients.
2) If you like, solve the momentum predictor (this is optional and can be dropped to save time).
3) Extract the central (diagonal) coefficients from the UEqn, take their reciprocal and call it rAU. This is the famous "operator splitting".
4) Pressure loop:
4.1) recalculate the velocity: U = rAU * H();
4.2) recalculate the fluxes: phi = interpolate(U) & Sf; note that this flux field is NOT divergence free;
4.3) solve for the pressure using the non-divergence-free flux;
4.4) solve it several times...
5) Now we have solved for the pressure with a non-div-free phi. But the literature (Jasak's thesis, Issa et al.) tells us that we can correct the fluxes using the pressure field and in this way ensure the div-free condition. This is the famous phi -= pEqn.flux();
6) Finally, we correct the velocity field, acquiring a good approximation of the velocity field.

Note that the solution consists of three fields: U, p and phi, and the important one is in fact phi, not U.
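For concreteness, here is a compressed sketch of the corresponding icoFoam-style code (annotated pseudocode for the steps above rather than a drop-in solver; names follow icoFoam, and the ddtPhiCorr term and non-orthogonal correctors are left out):

Code:

fvVectorMatrix UEqn                        // 1) momentum matrix, built with the old phi
(
    fvm::ddt(U)
  + fvm::div(phi, U)
  - fvm::laplacian(nu, U)
);

solve(UEqn == -fvc::grad(p));              // 2) optional momentum predictor

for (int corr = 0; corr < nCorr; corr++)   // 4) pressure-correction loop
{
    volScalarField rAU(1.0/UEqn.A());      // 3) reciprocal of the diagonal coefficients

    U = rAU*UEqn.H();                      // 4.1) "pseudo-velocity"
    phi = fvc::interpolate(U) & mesh.Sf(); // 4.2) non-conservative face flux

    fvScalarMatrix pEqn                    // 4.3) pressure equation
    (
        fvm::laplacian(rAU, p) == fvc::div(phi)
    );
    pEqn.solve();

    phi -= pEqn.flux();                    // 5) flux correction: now div(phi) = 0

    U -= rAU*fvc::grad(p);                 // 6) velocity correction
    U.correctBoundaryConditions();
}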

Hope I managed to clear things up a bit.

Best,
Pawel

EmadTandis May 26, 2013 11:42

Thanks, your points are very helpful, especially "U is not divergence free". But there is something I cannot understand about the OpenFOAM operators.
Are these relations correct?
a)
fvc::laplacian(rUA, p) = fvc::div(rUA*fvc::grad(p)) ?
b)
does pEqn.flux() return fvc::interpolate(rUA*fvc::grad(p)) & mesh.Sf() (based on Jasak's thesis)?

From my experience in OpenFOAM, a) is not true, but from CFD theory it should be!
About b): my experience with OpenFOAM tells me it is not true, but from 3.144 of Jasak it should be.

psosnows May 26, 2013 12:50

Regarding point a):
take a look at the Programmer's Guide.
If you expand fvc::laplacian, you get something like this:
fvc::laplacian(A,p) =
fvc::surfaceSum
(
fvc::interpolate(A) * fvc::snGrad(p)
);

But consider that in the solver code we have fvm, which is implicit and returns an fvMatrix object. Also, there is a difference between fvc::grad() and fvc::snGrad(): the latter returns a surface field, while the former is in fact an average of snGrad at the cell centre (well, depending on the scheme).

Regarding b):
I think this should be true, but I will not put my head on the line right now. I remember checking it in the past, but I cannot find my notes at the moment. I would strongly advise you to dig into the code and check the structure yourself. If you run into too much trouble, give me a call ;)

Best,
Pawel

EmadTandis May 27, 2013 03:07

Actually,
fvc::laplacian(A,p) = fvc::surfaceIntegrate(fvc::interpolate(A)*fvc::snGrad(p)*mag(mesh.Sf()))
but what I mean is that, from the mathematical definition of the Laplacian,
laplacian(A,p) = \nabla . (A \nabla p)
therefore, in OpenFOAM these two expressions should give the same result (up to some numerical error):

fvc::laplacian(A,p)
fvc::div(A*fvc::grad(p))

but their results, based on my experience in OF, are quite different. Why?

psosnows May 27, 2013 03:30

Hello,

Yup, I missed the Sf(); for some reason I thought it was hidden in snGrad().

Regarding the semantics of the Laplacian: always be careful about which kind of field you are dealing with and what the input to the functions is. The direct mathematical form makes sense and gives a result which is another way of calculating the operator. At the same time, it performs a few more surface-to-volume operations and thus smears the result. The corrected definition you gave is a shorter (and more precise) discrete representation of the Laplacian. What it does is calculate the surface values of the operators inside the divergence, and then use those surface fields (and the Gauss theorem) to calculate the divergence at the cell centre.

In the case of the direct mathematical representation:
first you calculate the gradient at the face centres, *
then you average it to get the values at the cell centres, *
then you interpolate the gradient back onto the face centres, **
then you interpolate A to the faces,
and finally you calculate the divergence at the cell centres.

In the OF representation, you drop the (*) steps and instead of (**) you calculate the gradient on the face directly.
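A quick way to see the size of this extra smearing (a sketch; it assumes A and p are volScalarFields and the standard fvCFD.H operators are in scope):

Code:

// Two discretizations of the same continuous operator div(A grad(p));
// they differ by the extra interpolation/averaging error discussed above.
volScalarField lap1("lap1", fvc::laplacian(A, p));
volScalarField lap2("lap2", fvc::div(A*fvc::grad(p)));

Info<< "max |lap1 - lap2| = " << max(mag(lap1 - lap2)).value() << endl;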

Hope this clarifies the issue a bit.
Best,
Paweł

EmadTandis May 27, 2013 04:10

You are quite right!
Based on these points, this is numerical error.

Regarding pEqn.flux(): it returns fvc::interpolate(rUA)*fvc::snGrad(p)*mag(mesh.Sf()),
NOT (fvc::interpolate(rUA)*fvc::interpolate(fvc::grad(p))) & mesh.Sf()
as I said before. This difference follows from your points.
Thanks very much.
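One way to check this claim inside a solver (a sketch only; it assumes placement right after pEqn.solve() in the pressure loop, with rUA and p in scope, and an orthogonal mesh so that no non-orthogonal correction term appears):

Code:

// Flux taken from the assembled pressure matrix
surfaceScalarField fluxFromMatrix("fluxFromMatrix", pEqn.flux());

// Explicit reconstruction with the face-interpolated coefficient
surfaceScalarField fluxExplicit
(
    "fluxExplicit",
    fvc::interpolate(rUA)*fvc::snGrad(p)*mesh.magSf()
);

Info<< "max |fluxFromMatrix - fluxExplicit| = "
    << max(mag(fluxFromMatrix - fluxExplicit)).value() << endl;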

thomasArk47 May 30, 2013 18:41

Hi,

OK, there have been many interesting questions and answers here. I will just add some complementary remarks, because the background of the different posters varies a lot, and it may be useful to recall the cornerstone of the problem and its basis.

OF uses a collocated scheme for solving the NS equations, meaning that velocity and pressure are both stored at cell centres. Since the incompressible NS system is not a classical evolution equation (because of the constraint div(U) = 0, which in turn implies that one does not have an evolution equation for the pressure), this collocated approach is numerically unstable in the finite volume framework. It would take long to explain why, but the interested reader should search for and study the following keywords:
collocated scheme versus staggered scheme
pressure checkerboard instability
Rhie-Chow stabilisation
inf-sup condition for mixed problems in the finite element context
All the core reasons why one cannot simply use an interpolation scheme (whatever its precise type, linear or otherwise) for the phi field come from these notions. It is not so difficult, but a reader with no background in numerical analysis will find it hard to understand in depth.

Good luck.

vkrastev October 18, 2013 12:43

Quote:

Originally Posted by psosnows (Post 430072)
Hello,


Well, in fact there is. Check out the 5th slide (or the 3rd from the end).

I think there was some explanation of this topic earlier in the thread, but let's do it from the top.

A nice explanation of the way the N-S equations are solved in OpenFOAM is presented in the OF wiki article about the SIMPLE algorithm:
http://openfoamwiki.net/index.php/Th...hm_in_OpenFOAM
The PISO algorithm is very similar:
http://openfoamwiki.net/index.php/Th...hm_in_OpenFOAM
Of course, check the sub-topics as well.

Now for some notes. As we all know, OF uses an Eulerian approach with non-staggered (collocated) meshes. This can introduce significant numerical errors (especially with 2nd-order central difference schemes). For that reason, in OF we "do not" solve directly for the velocity (U) but for the fluxes (phi).
If you check the divergence of U, you will almost always find a non-negligible error. At the same time, phi is guaranteed to be divergence free.

Now let's take a look at the PISO algorithm itself. I will not cite the code, but it should be easy to follow.
1) Create the UEqn using the last known phi. Note that any "XEqn" is like a black box with a void space for the unknown; the important things are the coefficients.
2) If you like, solve the momentum predictor (this is optional and can be dropped to save time).
3) Extract the central (diagonal) coefficients from the UEqn, take their reciprocal and call it rAU. This is the famous "operator splitting".
4) Pressure loop:
4.1) recalculate the velocity: U = rAU * H();
4.2) recalculate the fluxes: phi = interpolate(U) & Sf; note that this flux field is NOT divergence free;
4.3) solve for the pressure using the non-divergence-free flux;
4.4) solve it several times...
5) Now we have solved for the pressure with a non-div-free phi. But the literature (Jasak's thesis, Issa et al.) tells us that we can correct the fluxes using the pressure field and in this way ensure the div-free condition. This is the famous phi -= pEqn.flux();
6) Finally, we correct the velocity field, acquiring a good approximation of the velocity field.

Note that the solution consists of three fields: U, p and phi, and the important one is in fact phi, not U.

Hope I managed to clear things up a bit.

Best,
Pawel

Hi Pawel,

after reading your post and most of the previous ones, I think pressure-velocity coupling in OpenFOAM is getting much clearer to me, so thank you all (also Alberto, Santiago and the others) for the interesting explanations.
I have just one little doubt left about the momentum predictor stage: you said above that it is, in principle, unnecessary for the successful application of the correction procedure; however, in both H. Jasak's and E. De Villiers' theses I found passages claiming that the predicted velocities are used to update the H(U) vector before calculating the "pseudo-velocities" U = rAU*H(U) and interpolating them to evaluate the non-zero-divergence face fluxes (points 4.1 and 4.2 of your post). These statements are apparently supported by the fact that in OpenFOAM H(U) appears to be calculated right after the momentum predictor is solved (see the pisoFoam.C source below).

Code:


 if (momentumPredictor)
            {
                solve(UEqn == -fvc::grad(p));
            }

            // --- PISO loop

            for (int corr=0; corr<nCorr; corr++)
            {
                volScalarField rAU(1.0/UEqn.A());

                volVectorField HbyA("HbyA", U);
                HbyA = rAU*UEqn.H();
                surfaceScalarField phiHbyA
                (
                    "phiHbyA",
                    (fvc::interpolate(HbyA) & mesh.Sf())
                  + fvc::ddtPhiCorr(rAU, U, phi)
                );

Additionally, the predictor stage is indeed optional in the unsteady pressure-based OF solvers (PISO/PIMPLE-like), while it is not in the steady-state ones (SIMPLE-like).
So, my questions are: 1) how and where is the velocity field obtained from the momentum predictor actually used? Is it, for instance, only used to update the neighbour velocities inside H(U), whereas the face fluxes phi, which are "hidden" in the convective discretization coefficients, are left "frozen" until the pressure equation is solved? 2) depending on the answer to question 1), is the influence of the predictor stage on the correction procedure really negligible (or, alternatively, am I missing something in my considerations)?
Can you please comment on this? Of course, comments from other sources are also appreciated.

Thanks in advance

V.

santiagomarquezd October 18, 2013 16:29

Hi Vesselin, each time you evaluate H(U) the last calculated U is used. In the first PISO corrector this is the velocity obtained from the momentum predictor or, if the predictor is not performed, the velocity from the previous time-step. From the second corrector onwards, the last reconstructed velocity is used. In all cases the coefficients of H() are frozen.

If you use the pimple solvers you can do outer loops, which will renew the H() operator, since a new UEqn is assembled in each outer loop.
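Schematically, within one time step it looks like this (a sketch of the corrector sequence, not literal solver code):

Code:

// UEqn is assembled once: its coefficients (which contain phi) stay frozen
volScalarField rAU(1.0/UEqn.A());

// First corrector: H() is evaluated with U from the predictor
// (or from the previous time step if the predictor is skipped)
U = rAU*UEqn.H();
// ... build phi, solve pEqn, correct phi and U ...

// Second corrector: H() now uses the reconstructed U from the first
// corrector, but still the same frozen coefficients
U = rAU*UEqn.H();
// ... build phi, solve pEqn, correct phi and U ...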

Regards.

vkrastev October 19, 2013 05:07

Hi Santiago,
and thanks a lot for your fast reply. However, just for better comprehension, I would like to further address some of the points of your response:

Quote:

Originally Posted by santiagomarquezd (Post 457734)
each time you evaluate H(U) the last calculated U is used.

Ok, but is it the same for phi (which is included in the convection discretization coefficients that contribute to building H)? If so, in the first PISO loop the phi values (and thus the discretization coefficients) should be the ones created by "createPhi.H" or coming from the previous time step/correction loop, and should remain the same from the momentum predictor (if present) to the pressure equation solution, after which they are corrected to enforce mass conservation. Thus, the second PISO loop will start with H(U) updated with the last corrected cell-centre velocity field (the explicit correction after the pressure equation solution), but also with the last (at this stage, conservative) face fluxes phi. And so on for the next (eventual) PISO sequences. Is that right?


Quote:

Originally Posted by santiagomarquezd (Post 457734)
In the first PISO corrector this is the velocity obtained from the momentum predictor or, if the predictor is not performed, the velocity from the previous time-step.

Ok, so I guess it is implicitly assumed that the influence of the initial velocity estimate on the algorithm's convergence within a single time step is small. In my opinion this is quite reasonable at run time, when values from the previous time step should be well converged, but it is probably a bit of a "raw" assumption at the very beginning of the run, when the initial conditions are usually far from accurate. I think this could also be the reason why in the steady-state algorithm (SIMPLE solvers) the momentum predictor is always there (values from previous "steps" are generally not good guesses if you are not close to the global convergence to a steady state).

Thanks once again

V.

santiagomarquezd October 19, 2013 07:57

Quote:

Originally Posted by vkrastev (Post 457772)
Ok, but is it the same for phi (which is included in the convection discretization coefficients that contribute to building H)? If so, in the first PISO loop the phi values (and thus the discretization coefficients) should be the ones created by "createPhi.H" or coming from the previous time step/correction loop, and should remain the same from the momentum predictor (if present) to the pressure equation solution, after which they are corrected to enforce mass conservation. Thus, the second PISO loop will start with H(U) updated with the last corrected cell-centre velocity field (the explicit correction after the pressure equation solution), but also with the last (at this stage, conservative) face fluxes phi. And so on for the next (eventual) PISO sequences. Is that right?

Yes, that's right. As Issa explains in his paper, he gives more importance to the p-V coupling (the PISO loop) than to the nonlinearity [fvm::div(phi,U)], so he freezes H(U) for the whole time-step and focuses on correcting p and phi (and U). As I said, if you want to update phi you can use the pimple family of solvers.

Quote:

Originally Posted by vkrastev (Post 457772)
Ok, so I guess it is implicitly assumed that the influence of the initial velocity estimate on the algorithm's convergence within a single time step is small. In my opinion this is quite reasonable at run time, when values from the previous time step should be well converged, but it is probably a bit of a "raw" assumption at the very beginning of the run, when the initial conditions are usually far from accurate. I think this could also be the reason why in the steady-state algorithm (SIMPLE solvers) the momentum predictor is always there (values from previous "steps" are generally not good guesses if you are not close to the global convergence to a steady state).

The hypothesis is that U^n and U^(n+1) are not very different, so that the linearization fvm::div(phi,U) is valid. In this case it is usual to impose the condition Co < 1, under the assumption that a small dt implies minor changes in U; strictly, the condition should be dU/dt ~ 0, but the Co trick works. In practice, when you are near a stationary state you can enlarge the timestep, which shows that the condition on dU/dt is the correct one.

In the case of steady solvers like simpleFoam, the trick to avoid raw approximations at the beginning is to lower the relaxation factors.

Regards.

vkrastev October 19, 2013 12:25

Quote:

Originally Posted by santiagomarquezd (Post 457798)
Yes, that's right. As Issa explains in his paper, he gives more importance to the p-V coupling (the PISO loop) than to the nonlinearity [fvm::div(phi,U)], so he freezes H(U) for the whole time-step and focuses on correcting p and phi (and U). As I said, if you want to update phi you can use the pimple family of solvers.

After re-reading your last and previous posts, and also Issa's paper and Jasak's thesis, I think I should clarify this a bit. My previous consideration is NOT right: H(U) is updated between every two consecutive PISO correctors (so k-1 times along the whole time step, if k is the number of PISO correctors), but ONLY in terms of the cell-centred neighbour velocities U_N, and NOT in terms of the phi contribution inside the discretization coefficients (a_N, but also the owner coefficient a_P, which contributes to the construction of the HbyA vector).

So, to have a completely renewed H(U), one has to move to the next time step (or outer iteration, if a PIMPLE solver is used), at the beginning of which UEqn is ALWAYS assembled (but solved as a momentum predictor only if the option is on).

Ok, I think that now things are all in place. Sorry if I made it longer than strictly necessary, and thanks for your patience (and of course please correct me if you find some other inconsistencies).

Regards

V.

galap April 9, 2014 08:28

Hello,

it is hard for me to follow your interesting discussion, so I wanted to take a practical approach to the phi variable. What I thought is that right after calling

phi -= pEqn.flux();

mass conservation should hold. I tried to check this for a simpleFoam and a rhoPimpleFoam tutorial case. Since phi is given as (kg s-1) for all faces, the sum of phi over all faces of one cell should be zero.

I took the orientation of the face normal vectors into account:

if (mesh.faceOwner()[mesh.cells()[cell_label][face_label]])

then the normal vector points into the cell. Otherwise I multiply phi by -1, since phi points towards the neighbouring cell (the same applies for a boundary face).

I never see a sum of 0, but rather a sum of about 10-20% of the sum of the absolute values.

So how do I calculate the correct mass flow in and out of one arbitrary cell?

galap June 12, 2014 01:50

I finally solved it; the face orientation was the problem.

Maybe this helps someone in the future:

Code:

volVectorField phi_V
(
    IOobject
    (
        "phi_V",
        runTime.timeName(),
        mesh,
        IOobject::NO_READ,
        IOobject::AUTO_WRITE
    ),
    (rho*U)
);       


surfaceVectorField phi_V_interpolated
(
    IOobject
    (
        "phi_V_interpolated",
        runTime.timeName(),
        mesh,
        IOobject::NO_READ,
        IOobject::AUTO_WRITE
    ),
    linearInterpolate(phi_V)
);


forAll(U, cell_label)
{
    double flux_sum_neg = 0;
    double flux_sum_pos = 0;   
   
   
    forAll(mesh.cells()[cell_label], face_label)
    {
        double sign = 0;
        double sign_phi = 0;
       
       
        if (!mesh.isInternalFace(mesh.cells()[cell_label][face_label]))
        {
            sign = 1;
            label patch_index = mesh.boundaryMesh().whichPatch(mesh.cells()[cell_label][face_label]);
            label face_index = mesh.cells()[cell_label][face_label] - mesh.boundaryMesh()[patch_index].start();
            vector normal = sign * mesh.Sf().boundaryField()[patch_index][face_index] / mesh.magSf().boundaryField()[patch_index][face_index] ;
            sign_phi = (((phi_V_interpolated.boundaryField()[patch_index][face_index] & normal) < 0) ? 1 : -1);                   
            if (sign_phi < 0) flux_sum_neg += std::abs(phi.boundaryField()[patch_index][face_index]);
            if (sign_phi > 0) flux_sum_pos += std::abs(phi.boundaryField()[patch_index][face_index]);           
        }   
       
        else
        {
           
            sign = ((mesh.faceNeighbour()[mesh.cells()[cell_label][face_label]] == cell_label) ? -1 : 1);
            vector normal = sign * mesh.Sf()[mesh.cells()[cell_label][face_label]] / mesh.magSf()[mesh.cells()[cell_label][face_label]];
            sign_phi = (((phi_V_interpolated[mesh.cells()[cell_label][face_label]] & normal) < 0) ? 1 : -1);       

            if (sign_phi < 0) flux_sum_neg += std::abs(phi[mesh.cells()[cell_label][face_label]]);
            if (sign_phi > 0) flux_sum_pos += std::abs(phi[mesh.cells()[cell_label][face_label]]);
        }
    }

    double error = (flux_sum_pos - flux_sum_neg) / flux_sum_pos * 100; //in %
   
    if (std::abs(error) > 1)
    {       
        ..
    }
   
    else
    {
        Info << "massflow into " << cell_label << " = " << flux_sum_pos * mesh.cellVolumes()[cell_label] << " kg/s" << endl;
    }   
}
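For reference, a more compact alternative (a sketch, not a replacement for the code above) is to rely directly on OpenFOAM's owner/neighbour convention: on internal faces phi is positive out of the owner cell, and boundary faces always point out of their owner cell, so the signed per-cell sum can be written as:

Code:

// Signed sum of phi over the faces of one cell; for a conservative phi
// this should be zero to solver tolerance.
scalar cellFluxSum
(
    const fvMesh& mesh,
    const surfaceScalarField& phi,
    const label cellI
)
{
    scalar sum = 0.0;
    const cell& cFaces = mesh.cells()[cellI];

    forAll(cFaces, i)
    {
        const label faceI = cFaces[i];

        if (mesh.isInternalFace(faceI))
        {
            // phi > 0 means flow from the owner to the neighbour
            sum += (mesh.faceOwner()[faceI] == cellI) ? phi[faceI] : -phi[faceI];
        }
        else
        {
            const label patchI = mesh.boundaryMesh().whichPatch(faceI);
            const label localI = faceI - mesh.boundaryMesh()[patchI].start();

            // Guard against empty patches, which carry no flux values
            if (localI < phi.boundaryField()[patchI].size())
            {
                sum += phi.boundaryField()[patchI][localI];
            }
        }
    }

    return sum;
}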


Honey October 15, 2019 10:24

Quote:

Originally Posted by amirhoshangm (Post 352308)
Hi

I am simulating flow in a channel with variable porosity, so I need to change something in the flux correction: I have to multiply the pressure by the porosity. The flux is updated by

pEqn.flux()


Could anyone let me know how I can use porosity*pressure instead of pressure in the flux correction?

Hi,


I was wondering if you have solved your problem and, if possible, could share it here. I am also trying to modify porousSimpleFoam in order to multiply the flux by the porosity, and I would highly appreciate some help in this regard.


Regards,

Honey

